1. Introduction
The Elastic Stack is not a single piece of software but a collection of the open-source projects Elasticsearch, Logstash, and Kibana, offered together as an open-source log management solution. It can search, analyze, and visualize log data from any source, in any format, in real time. Plugins such as Shield (security), Watcher (alerting), and Marvel (monitoring) extend what the stack can do.
Elasticsearch: distributed full-text search engine
Logstash: log collection, processing, and storage
Kibana: log filtering and web visualization
Filebeat: log file monitoring and forwarding
2. Test Environment Plan
Environment: IPs and hostnames follow the plan above. The systems have been updated, all hosts share the same time, and the firewalls in this test environment have been disabled. The sections below walk through the ELK deployment.
Goal: use the ELK host to collect and monitor the system logs of the main servers, as well as the logs of the live application services.
3. Installing Elasticsearch + Logstash + Kibana (all steps run on elk.test.com)
3.1 Basic environment check
[root@elk ~]# hostname
elk.test.com
[root@elk ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.67 elk.test.com
192.168.30.99 rsyslog.test.com
192.168.30.64 nginx.test.com
3.2 Packages
[root@elk ~]# cd elk/
[root@elk elk]# wget -c https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.3.3/elasticsearch-2.3.3.rpm
[root@elk elk]# wget -c https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.2-1.noarch.rpm
[root@elk elk]# wget https://download.elastic.co/kibana/kibana/kibana-4.5.1-1.x86_64.rpm
[root@elk elk]# wget -c https://download.elastic.co/beats/filebeat/filebeat-1.2.3-x86_64.rpm
3.3 Verify the downloads
[root@elk elk]# ls
elasticsearch-2.3.3.rpm filebeat-1.2.3-x86_64.rpm kibana-4.5.1-1.x86_64.rpm logstash-2.3.2-1.noarch.rpm
The server needs only Elasticsearch, Logstash, and Kibana; the clients need only Filebeat.
3.4 Install Elasticsearch. Install the JDK first: the ELK server needs a Java runtime. The clients run Filebeat, which has no Java dependency, so they can skip this step.
[root@elk elk]# yum install java-1.8.0-openjdk -y
Install Elasticsearch:
[root@elk elk]# yum localinstall elasticsearch-2.3.3.rpm -y
.....
Installing : elasticsearch-2.3.3-1.noarch 1/1
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Verifying : elasticsearch-2.3.3-1.noarch 1/1
Installed:
elasticsearch.noarch 0:2.3.3-1
Reload systemd to pick up the new unit, then enable the service at boot and start it:
[root@elk elk]# systemctl daemon-reload
[root@elk elk]# systemctl enable elasticsearch
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@elk elk]# systemctl start elasticsearch
[root@elk elk]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Fri -- :: CST; 12s ago
Docs: http://www.elastic.co
Process: ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: (java)
CGroup: /system.slice/elasticsearch.service
└─ /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancy...
May :: elk.test.com elasticsearch[]: [-- ::,][INFO ][env ] [James Howlett] heap...[true]
May :: elk.test.com elasticsearch[]: [-- ::,][WARN ][env ] [James Howlett] max ...]
May :: elk.test.com elasticsearch[]: [-- ::,][INFO ][node ] [James Howlett] initialized
May :: elk.test.com elasticsearch[]: [-- ::,][INFO ][node ] [James Howlett] starting ...
May :: elk.test.com elasticsearch[]: [-- ::,][INFO ][transport ] [James Howlett] publ...:}
May :: elk.test.com elasticsearch[]: [-- ::,][INFO ][discovery ] [James Howlett] elas...xx35hw
May :: elk.test.com elasticsearch[]: [-- ::,][INFO ][cluster.service ] [James Howlett] new_...eived)
May :: elk.test.com elasticsearch[]: [-- ::,][INFO ][gateway ] [James Howlett] reco..._state
May :: elk.test.com elasticsearch[]: [-- ::,][INFO ][http ] [James Howlett] publ...:}
May :: elk.test.com elasticsearch[]: [-- ::,][INFO ][node ] [James Howlett] started
Hint: Some lines were ellipsized, use -l to show in full.
Check the service's configuration files:
[root@elk elk]# rpm -qc elasticsearch
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/logging.yml
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf
[root@elk elk]# netstat -nltp | grep java
tcp6 127.0.0.1: :::* LISTEN /java
tcp6 ::: :::* LISTEN /java
tcp6 127.0.0.1: :::* LISTEN /java
tcp6 ::: :::* LISTEN /java
Open ports 9200 and 9300 in the firewall:
[root@elk elk]# firewall-cmd --permanent --add-port={9200/tcp,9300/tcp}
success
[root@elk elk]# firewall-cmd --reload
success
[root@elk elk]# firewall-cmd --list-all
public (default, active)
interfaces: eno16777984 eno33557248
sources:
services: dhcpv6-client ssh
ports: 9200/tcp 9300/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
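With the ports open, it is worth confirming that Elasticsearch actually answers over HTTP before moving on. A minimal sketch of checking the `_cluster/health` response follows; the JSON literal is a hypothetical sample, and on the live host you would fetch the real thing from http://elk.test.com:9200/_cluster/health instead:

```python
# Hypothetical sample of what GET /_cluster/health returns; on a live host
# you would fetch it, e.g. with
#   urllib.request.urlopen("http://elk.test.com:9200/_cluster/health")
import json

sample = '{"cluster_name": "elasticsearch", "status": "green", "number_of_nodes": 1}'
health = json.loads(sample)

# "green" or "yellow" is fine on a one-node test box; "red" means trouble.
assert health["status"] in ("green", "yellow")
print(health["cluster_name"], health["status"])  # elasticsearch green
```
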
3.5 Install Kibana
[root@elk elk]# yum localinstall kibana-4.5.1-1.x86_64.rpm -y
[root@elk elk]# systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /usr/lib/systemd/system/kibana.service.
[root@elk elk]# systemctl start kibana
[root@elk elk]# systemctl status kibana
● kibana.service - no description given
Loaded: loaded (/usr/lib/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Fri -- :: CST; 20s ago
Main PID: (node)
CGroup: /system.slice/kibana.service
└─ /opt/kibana/bin/../node/bin/node /opt/kibana/bin/../src/cli
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:elasticsearch...
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:kbn_vi...lized"}
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:markdo...lized"}
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:metric...lized"}
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:spyMod...lized"}
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:status...lized"}
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["status","plugin:table_...lized"}
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:05+00:00","tags":["listening","info"],"pi...:5601"}
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:10+00:00","tags":["status","plugin:elasticsearch...
May :: elk.test.com kibana[]: {"type":"log","@timestamp":"2016-05-20T07:49:14+00:00","tags":["status","plugin:elasti...found"}
Hint: Some lines were ellipsized, use -l to show in full.
Check that Kibana is running (by default the process is named node and listens on port 5601):
[root@elk elk]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0.0.0.0: 0.0.0.0:* LISTEN /sshd
tcp 127.0.0.1: 0.0.0.0:* LISTEN /master
tcp 0.0.0.0: 0.0.0.0:* LISTEN /node
Open tcp/5601 in the firewall:
[root@elk elk]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@elk elk]# firewall-cmd --reload
success
[root@elk elk]# firewall-cmd --list-all
public (default, active)
interfaces: eno16777984 eno33557248
sources:
services: dhcpv6-client ssh
ports: 9200/tcp 9300/tcp 5601/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
Now open a browser and test the Kibana server at http://192.168.30.67:5601/ to confirm it works (screenshot omitted).
We can also have the firewall forward incoming connections on port 80 to 5601, so users can type the address without specifying a port:
[root@elk elk]# firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=5601
[root@elk elk]# firewall-cmd --reload
[root@elk elk]# firewall-cmd --list-all
public (default, active)
interfaces: eno16777984 eno33557248
sources:
services: dhcpv6-client ssh
ports: 9200/tcp 9300/tcp 5601/tcp
masquerade: no
forward-ports: port=80:proto=tcp:toport=5601:toaddr=
icmp-blocks:
rich rules:
3.6 Install Logstash and add its configuration
[root@elk elk]# yum localinstall logstash-2.3.2-1.noarch.rpm -y
Generate a certificate:
[root@elk elk]# cd /etc/pki/tls/
[root@elk tls]# ls
cert.pem certs misc openssl.cnf private
[root@elk tls]# openssl req -subj '/CN=elk.test.com/' -x509 -days -batch -nodes -newkey rsa: -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Generating a bit RSA private key
...................................................................+++
......................................................+++
writing new private key to 'private/logstash-forwarder.key'
-----
Then create the Logstash configuration file:
[root@elk ~]# cat /etc/logstash/conf.d/-logstash-initial.conf
input {
  beats {
    port => 5000
    type => "logs"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog-beat" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    geoip {
      source => "clientip"
    }
    syslog_pri {}
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { }
  stdout { codec => rubydebug }
}
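To make the grok line above less opaque, here is a rough Python approximation of what it extracts from a syslog record. The regex is my own simplified stand-in for the SYSLOGTIMESTAMP, SYSLOGHOST, DATA, POSINT, and GREEDYDATA patterns, and the sample log line is invented:

```python
import re

# Simplified stand-ins for grok's SYSLOGTIMESTAMP, SYSLOGHOST, DATA,
# POSINT and GREEDYDATA patterns; real grok patterns are more permissive.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[\w./-]+?)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = "May 20 15:49:05 rsyslog.test.com sshd[1234]: Accepted password for root"
m = SYSLOG_RE.match(line)
print(m.group("syslog_program"), m.group("syslog_pid"))  # sshd 1234
print(m.group("syslog_message"))  # Accepted password for root
```

The named groups map one-to-one onto the fields the filter adds to each event, which is why the Kibana documents later carry syslog_hostname, syslog_program, and so on.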
Start Logstash and check its port; the config above listens on port 5000:
[root@elk conf.d]# systemctl start logstash
[root@elk elk]# /sbin/chkconfig logstash on
[root@elk conf.d]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0.0.0.0: 0.0.0.0:* LISTEN /sshd
tcp 127.0.0.1: 0.0.0.0:* LISTEN /master
tcp 0.0.0.0: 0.0.0.0:* LISTEN /node
tcp 0.0.0.0: 0.0.0.0:* LISTEN /rsyslogd
tcp6 ::: :::* LISTEN /java
tcp6 ::: :::* LISTEN /mysqld
tcp6 127.0.0.1: :::* LISTEN /java
tcp6 ::: :::* LISTEN /java
tcp6 127.0.0.1: :::* LISTEN /java
tcp6 ::: :::* LISTEN /java
tcp6 ::: :::* LISTEN /sshd
tcp6 ::: :::* LISTEN /master
tcp6 ::: :::* LISTEN /rsyslogd
Open port 5000 in the firewall:
[root@elk ~]# firewall-cmd --permanent --add-port=5000/tcp
success
[root@elk ~]# firewall-cmd --reload
success
[root@elk ~]# firewall-cmd --list-all
public (default, active)
interfaces: eno16777984 eno33557248
sources:
services: dhcpv6-client ssh
ports: 9200/tcp 9300/tcp 5601/tcp 5000/tcp
masquerade: no
forward-ports: port=80:proto=tcp:toport=5601:toaddr=
icmp-blocks:
rich rules:
3.7 Modify the Elasticsearch configuration
Look at the directory and create a folder named es-01 (the name itself is not mandatory). logging.yml ships with the package; elasticsearch.yml is the file we create, with the content below:
[root@elk ~]# cd /etc/elasticsearch/
[root@elk elasticsearch]# tree
.
├── es-01
│   ├── elasticsearch.yml
│   └── logging.yml
└── scripts
[root@elk elasticsearch]# cat es-01/elasticsearch.yml
---
http:
  port: 9200
network:
  host: elk.test.com
node:
  name: elk.test.com
path:
  data: /etc/elasticsearch/data/es-01
3.8 Restart the elasticsearch and logstash services.
3.9 Copy the Filebeat package and the certificate to the rsyslog and nginx clients
[root@elk elk]# scp filebeat-1.2.3-x86_64.rpm root@rsyslog.test.com:/root/elk
[root@elk elk]# scp filebeat-1.2.3-x86_64.rpm root@nginx.test.com:/root/elk
[root@elk elk]# scp /etc/pki/tls/certs/logstash-forwarder.crt rsyslog.test.com:/root/elk
[root@elk elk]# scp /etc/pki/tls/certs/logstash-forwarder.crt nginx.test.com:/root/elk
4. Deploying Filebeat on the clients (run on the rsyslog and nginx hosts)
The Filebeat client is a lightweight tool that collects log data from files on a server and forwards it to Logstash for processing. Filebeat talks to the Logstash instance over the secure Beats protocol; the underlying lumberjack protocol is designed for reliability and low latency. Filebeat uses the compute resources of the machine hosting the source data, and the Beats input plugin keeps the resource demands on Logstash to a minimum.
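The core of that harvesting model can be sketched in a few lines: remember a byte offset per file (Filebeat keeps these in its registry file) and read whatever was appended since the last pass. This is an illustration only; the function name and temp-file setup are mine, not Filebeat's:

```python
import os
import tempfile

def read_new_lines(path, offset):
    """Return the lines appended after byte `offset`, plus the new offset."""
    with open(path, "r") as f:
        f.seek(offset)
        new = [ln.rstrip("\n") for ln in f.readlines()]
        return new, f.tell()

# Simulate a log file that grows between two collection passes.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "a") as f:
    f.write("first event\n")
lines, offset = read_new_lines(path, 0)
print(lines)  # ['first event']

with open(path, "a") as f:
    f.write("second event\n")
lines, offset = read_new_lines(path, offset)
print(lines)  # ['second event']
os.remove(path)
```

Persisting the offset is what lets Filebeat resume where it left off after a restart instead of re-shipping the whole file.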
4.1 (node1) Install Filebeat, copy the certificate, and create the log-collection configs
[root@rsyslog elk]# yum localinstall filebeat-1.2.3-x86_64.rpm -y
# copy the certificate into the certs directory on this host
[root@rsyslog elk]# cp logstash-forwarder.crt /etc/pki/tls/certs/.
[root@rsyslog elk]# cd /etc/filebeat/
[root@rsyslog filebeat]# tree
.
├── conf.d
│ ├── authlogs.yml
│ └── syslogs.yml
├── filebeat.template.json
└── filebeat.yml
1 directory, 4 files
Three files need changes: filebeat.yml defines the connection to the Logstash server, and the two files under conf.d define which logs to monitor. Their contents are shown below:
filebeat.yml
[root@rsyslog filebeat]# cat filebeat.yml
filebeat:
  spool_size:
  idle_timeout: 5s
  registry_file: .filebeat
  config_dir: /etc/filebeat/conf.d
output:
  logstash:
    hosts:
      - elk.test.com:5000
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
    enabled: true
shipper: {}
logging: {}
runoptions: {}
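The `config_dir` line is what pulls in the per-log files under conf.d: Filebeat reads every *.yml file there and merges their prospectors into one list. A toy sketch of that merge follows; to stay dependency-free it stores one watched path per "config" file instead of real YAML:

```python
import glob
import os
import tempfile

# Fake conf.d directory with one watched path per "config" file.
confd = tempfile.mkdtemp()
for name, watched in [("authlogs.yml", "/var/log/secure"),
                      ("syslogs.yml", "/var/log/messages")]:
    with open(os.path.join(confd, name), "w") as f:
        f.write(watched + "\n")

# Merge step: collect every *.yml, in a stable order, into one prospector list.
prospectors = []
for cfg in sorted(glob.glob(os.path.join(confd, "*.yml"))):
    with open(cfg) as f:
        prospectors.append(f.read().strip())
print(prospectors)  # ['/var/log/secure', '/var/log/messages']
```

This is why adding a new log to a client is just a matter of dropping another small .yml file into conf.d and restarting filebeat.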
authlogs.yml & syslogs.yml
[root@rsyslog filebeat]# cat conf.d/authlogs.yml
filebeat:
  prospectors:
    - paths:
        - /var/log/secure
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size:
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor:
      partial_line_waiting: 5s
      max_bytes:
[root@rsyslog filebeat]# cat conf.d/syslogs.yml
filebeat:
  prospectors:
    - paths:
        - /var/log/messages
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size:
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor:
      partial_line_waiting: 5s
      max_bytes:
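A note on the backoff settings above: when the harvester reaches end-of-file it waits `backoff` before checking again, multiplies the wait by `backoff_factor` on every further idle check, and never exceeds `max_backoff` (with backoff and max_backoff both 1s here, the wait stays flat). The schedule can be sketched as follows; the numbers are illustrative, not taken from this config:

```python
def backoff_schedule(backoff, factor, max_backoff, checks):
    """Wait times for `checks` consecutive idle checks on a quiet file."""
    waits, wait = [], backoff
    for _ in range(checks):
        waits.append(wait)
        wait = min(wait * factor, max_backoff)  # grow, but cap at max_backoff
    return waits

print(backoff_schedule(1, 2, 10, 5))  # [1, 2, 4, 8, 10]
print(backoff_schedule(1, 2, 1, 3))   # [1, 1, 1] -- capped, like the config above
```

Capping the backoff at 1s keeps delivery latency low at the cost of more frequent polling; a larger max_backoff would be gentler on quiet files.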
After editing, start the filebeat service:
[root@rsyslog filebeat]# service filebeat start
Starting filebeat: [ OK ]
[root@rsyslog filebeat]# chkconfig filebeat on
[root@rsyslog filebeat]# netstat -altp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp localhost: *:* LISTEN /python2
tcp *:ssh *:* LISTEN /sshd
tcp localhost:ipp *:* LISTEN /cupsd
tcp localhost:smtp *:* LISTEN /master
tcp rsyslog.test.com: elk.test.com:commplex-main ESTABLISHED /filebeat
tcp rsyslog.test.com:ssh 192.168.30.65: ESTABLISHED /sshd
tcp *:ssh *:* LISTEN /sshd
tcp localhost:ipp *:* LISTEN /cupsd
tcp localhost:smtp *:* LISTEN /master
If the connection fails or the state looks wrong, check the client's firewall first.
4.2 (node2) Install Filebeat, copy the certificate, and create the log-collection configs
[root@nginx elk]# yum localinstall filebeat-1.2.3-x86_64.rpm -y
[root@nginx elk]# cp logstash-forwarder.crt /etc/pki/tls/certs/.
[root@nginx elk]# cd /etc/filebeat/
[root@nginx filebeat]# tree
.
├── conf.d
│ ├── nginx.yml
│ └── syslogs.yml
├── filebeat.template.json
└── filebeat.yml
1 directory, 4 files
Modify filebeat.yml as follows:
[root@nginx filebeat]# cat filebeat.yml
filebeat:
  spool_size:
  idle_timeout: 5s
  registry_file: .filebeat
  config_dir: /etc/filebeat/conf.d
output:
  logstash:
    hosts:
      - elk.test.com:5000
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
    enabled: true
shipper: {}
logging: {}
runoptions: {}
syslogs.yml & nginx.yml
[root@nginx filebeat]# cat conf.d/syslogs.yml
filebeat:
  prospectors:
    - paths:
        - /var/log/messages
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size:
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor:
      partial_line_waiting: 5s
      max_bytes:
[root@nginx filebeat]# cat conf.d/nginx.yml
filebeat:
  prospectors:
    - paths:
        - /var/log/nginx/access.log
      encoding: plain
      fields_under_root: false
      input_type: log
      ignore_older: 24h
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size:
      tail_files: false
      force_close_files: false
      backoff: 1s
      max_backoff: 1s
      backoff_factor:
      partial_line_waiting: 5s
      max_bytes:
After editing, start the filebeat service and check the process:
[root@nginx filebeat]# service filebeat start
Starting filebeat: [ OK ]
[root@nginx filebeat]# chkconfig filebeat on
[root@nginx filebeat]# netstat -aulpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp *:ssh *:* LISTEN /sshd
tcp localhost:smtp *:* LISTEN /master
tcp *:http *:* LISTEN /nginx
tcp nginx.test.com:ssh 192.168.30.65: ESTABLISHED /sshd
tcp nginx.test.com: elk.test.com:commplex-main ESTABLISHED /filebeat
tcp nginx.test.com:ssh 192.168.30.65: ESTABLISHED /sshd
tcp nginx.test.com:ssh 192.168.30.65: ESTABLISHED /sshd
tcp *:ssh *:* LISTEN /sshd
The output above shows that the client filebeat processes are connected to the ELK server. Next, verify the data.
5. Verification: open Kibana at http://192.168.30.67
5.1 Initial setup (screenshot omitted)
System logs from the two machines: node1's (screenshot omitted)
node2's nginx access log (screenshot omitted)
6. Impressions
I had previously been learning rsyslog + LogAnalyzer; after working through ELK, I find it better both as an overall system and in day-to-day experience, and it is updated quickly. I will keep studying it and follow up with posts on log monitoring and filtering, log analysis, and an architecture that uses Kafka for storage.
This article is original; if you found it valuable, please credit the source when reposting. Thanks.
References: https://www.elastic.co/products/elasticsearch
https://www.elastic.co/downloads