2  Submitting and Running a Storm Topology

Date: 2024-12-23 11:36:44

The main contents of this post:

  . Single-machine Storm: packaging the jar and deploying it to the Storm cluster

  . Storm's concurrency model (diagram)

  . Storm concepts

  . Appendix: PPT

Package the jar and deploy it to the Storm cluster. Here, Storm runs in single-machine mode.

weekend110-storm  ->   Export   ->   JAR file   ->


Of course, the prerequisites are assumed to be done: ZooKeeper and the Storm cluster have already been started.


Upload the exported jar


sftp> cd /home/hadoop/

sftp> put c:/d

demotop.jar           Documents and Settings/

sftp> put c:/demotop.jar

Uploading demotop.jar to /home/hadoop/demotop.jar

100% 8KB      8KB/s 00:00:00

c:/demotop.jar: 9199 bytes transferred in 0 seconds (8 KB/s)

sftp>

Create the output directory

/home/hadoop/stormoutput/


[hadoop@weekend110 ~]$ cd /home/hadoop/app/apache-storm-0.9.2-incubating/

[hadoop@weekend110 apache-storm-0.9.2-incubating]$ cd bin

[hadoop@weekend110 bin]$ ls

storm  storm.cmd  storm-config.cmd

[hadoop@weekend110 bin]$ mkdir -p /home/hadoop/stormoutput/

[hadoop@weekend110 bin]$ ./storm jar ~/demotop.jar cn.itcast.stormdemo.TopoMain


Running: /home/hadoop/app/jdk1.7.0_65/bin/java -client -Dstorm.options= -Dstorm.home=/home/hadoop/app/apache-storm-0.9.2-incubating -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /home/hadoop/app/apache-storm-0.9.2-incubating/lib/hiccup-0.3.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/log4j-over-slf4j-1.6.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/chill-java-0.3.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/httpcore-4.3.2.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/zookeeper-3.4.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-core-1.1.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clj-time-0.4.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/storm-core-0.9.2-incubating.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/httpclient-4.3.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/curator-framework-2.4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/minlog-1.2.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-codec-1.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/core.incubator-0.1.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/tools.macro-0.1.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-servlet-0.3.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/netty-3.6.3.Final.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-lang-2.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/logback-core-1.0.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jgrapht-core-0.9.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/slf4j-api-1.6.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/kryo-2.21.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clout-1.0.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/objenesis-1.2.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/tools.cli-0.2.4.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/je
tty-6.1.26.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/joda-time-2.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/servlet-api-2.5-20081211.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/carbonite-1.4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/curator-client-2.4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-jetty-adapter-0.3.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-exec-1.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/reflectasm-1.07-shaded.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-logging-1.1.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clojure-1.5.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/guava-13.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/disruptor-2.10.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/compojure-1.1.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/netty-3.2.2.Final.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/servlet-api-2.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clj-stacktrace-0.2.4.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-io-2.4.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/logback-classic-1.0.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-devel-0.3.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/snakeyaml-1.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jline-2.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/tools.logging-0.2.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/asm-4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/json-simple-1.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jetty-util-6.1.26.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/math.numeric-tower-0.0.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-fileupload-1.2.1.jar:/home/hadoop/demotop.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/conf:/home/ha
doop/app/apache-storm-0.9.2-incubating/bin -Dstorm.jar=/home/hadoop/demotop.jar cn.itcast.stormdemo.TopoMain

2495 [main] INFO  backtype.storm.StormSubmitter - Jar not uploaded to master yet. Submitting jar...

2566 [main] INFO  backtype.storm.StormSubmitter - Uploading topology jar /home/hadoop/demotop.jar to assigned location: /home/hadoop/data/apache-storm-0.9.2-incubating/tmp/storm/nimbus/inbox/stormjar-67666aeb-2578-43c5-a328-e91d30b25a36.jar

2664 [main] INFO  backtype.storm.StormSubmitter - Successfully uploaded topology jar to assigned location: /home/hadoop/data/apache-storm-0.9.2-incubating/tmp/storm/nimbus/inbox/stormjar-67666aeb-2578-43c5-a328-e91d30b25a36.jar

2665 [main] INFO  backtype.storm.StormSubmitter - Submitting topology demotopo in distributed mode with conf {"topology.workers":4,"topology.acker.executors":0,"topology.debug":true}

4171 [main] INFO  backtype.storm.StormSubmitter - Finished submitting topology: demotopo

[hadoop@weekend110 bin]$
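The submit log above shows the conf that TopoMain (whose source is not shown in this post) must have set before calling StormSubmitter: in Storm's Java API these correspond to Config.setNumWorkers(4), setNumAckers(0) and setDebug(true). As a language-neutral sketch of the same settings and of the JSON line Storm logs, in Python (illustrative only):

```python
import json

# The topology configuration that TopoMain (not shown in this post)
# must have set before submitting. In the Java API:
#   conf.setNumWorkers(4); conf.setNumAckers(0); conf.setDebug(true);
conf = {
    "topology.workers": 4,          # 4 worker processes (JVMs)
    "topology.acker.executors": 0,  # acking disabled, no at-least-once tracking
    "topology.debug": True,         # every emitted tuple is logged
}

# Storm prints the conf as compact JSON in the submit log, as seen above.
print(json.dumps(conf, separators=(",", ":")))
# → {"topology.workers":4,"topology.acker.executors":0,"topology.debug":true}
```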

http://weekend110:8080/


Storm UI

Cluster Summary

Version            Nimbus uptime   Supervisors   Used slots   Free slots   Total slots   Executors   Tasks
0.9.2-incubating   5h 4m 2s        1             4            0            4             12          16

Topology summary

Name       Id                      Status   Uptime   Num workers   Num executors   Num tasks
demotopo   demotopo-1-1476517821   ACTIVE   55s      4             12              16

Supervisor summary

Id                                     Host         Uptime      Slots   Used slots
3a41e7dd-0160-4ad0-bad5-096cdba4647e   weekend110   5h 2m 51s   4       4

Nimbus Configuration (key: value)

dev.zookeeper.path: /tmp/dev-storm-zookeeper
topology.tick.tuple.freq.secs:
topology.builtin.metrics.bucket.size.secs: 60
topology.fall.back.on.java.serialization: true
topology.max.error.report.per.interval: 5
zmq.linger.millis: 5000
topology.skip.missing.kryo.registrations: false
storm.messaging.netty.client_worker_threads: 1
ui.childopts: -Xmx768m
storm.zookeeper.session.timeout: 20000
nimbus.reassign: true
topology.trident.batch.emit.interval.millis: 500
storm.messaging.netty.flush.check.interval.ms: 10
nimbus.monitor.freq.secs: 10
logviewer.childopts: -Xmx128m
java.library.path: /usr/local/lib:/opt/local/lib:/usr/lib
topology.executor.send.buffer.size: 1024
storm.local.dir: /home/hadoop/data/apache-storm-0.9.2-incubating/tmp/storm
storm.messaging.netty.buffer_size: 5242880
supervisor.worker.start.timeout.secs: 120
topology.enable.message.timeouts: true
nimbus.cleanup.inbox.freq.secs: 600
nimbus.inbox.jar.expiration.secs: 3600
drpc.worker.threads: 64
topology.worker.shared.thread.pool.size: 4
nimbus.host: weekend110
storm.messaging.netty.min_wait_ms: 100
storm.zookeeper.port: 2181
transactional.zookeeper.port:
topology.executor.receive.buffer.size: 1024
transactional.zookeeper.servers:
storm.zookeeper.root: /storm
storm.zookeeper.retry.intervalceiling.millis: 30000
supervisor.enable: true
storm.messaging.netty.server_worker_threads: 1
storm.zookeeper.servers: weekend110
transactional.zookeeper.root: /transactional
topology.acker.executors:
topology.transfer.buffer.size: 1024
topology.worker.childopts:
drpc.queue.size: 128
worker.childopts: -Xmx768m
supervisor.heartbeat.frequency.secs: 5
topology.error.throttle.interval.secs: 10
zmq.hwm: 0
drpc.port: 3772
supervisor.monitor.frequency.secs: 3
drpc.childopts: -Xmx768m
topology.receiver.buffer.size: 8
task.heartbeat.frequency.secs: 3
topology.tasks:
storm.messaging.netty.max_retries: 30
topology.spout.wait.strategy: backtype.storm.spout.SleepSpoutWaitStrategy
nimbus.thrift.max_buffer_size: 1048576
topology.max.spout.pending:
storm.zookeeper.retry.interval: 1000
topology.sleep.spout.wait.strategy.time.ms: 1
nimbus.topology.validator: backtype.storm.nimbus.DefaultTopologyValidator
supervisor.slots.ports: 6700,6701,6702,6703
topology.debug: false
nimbus.task.launch.secs: 120
nimbus.supervisor.timeout.secs: 60
topology.message.timeout.secs: 30
task.refresh.poll.secs: 10
topology.workers: 1
supervisor.childopts: -Xmx256m
nimbus.thrift.port: 6627
topology.stats.sample.rate: 0.05
worker.heartbeat.frequency.secs: 1
topology.tuple.serializer: backtype.storm.serialization.types.ListDelegateSerializer
topology.disruptor.wait.strategy: com.lmax.disruptor.BlockingWaitStrategy
topology.multilang.serializer: backtype.storm.multilang.JsonSerializer
nimbus.task.timeout.secs: 30
storm.zookeeper.connection.timeout: 15000
topology.kryo.factory: backtype.storm.serialization.DefaultKryoFactory
drpc.invocations.port: 3773
logviewer.port: 8000
zmq.threads: 1
storm.zookeeper.retry.times: 5
topology.worker.receiver.thread.count: 1
storm.thrift.transport: backtype.storm.security.auth.SimpleTransportPlugin
topology.state.synchronization.timeout.secs: 60
supervisor.worker.timeout.secs: 30
nimbus.file.copy.expiration.secs: 600
storm.messaging.transport: backtype.storm.messaging.netty.Context
logviewer.appender.name: A1
storm.messaging.netty.max_wait_ms: 1000
drpc.request.timeout.secs: 600
storm.local.mode.zmq: false
ui.port: 8080
nimbus.childopts: -Xmx1024m
storm.cluster.mode: distributed
topology.max.task.parallelism:
storm.messaging.netty.transfer.batch.size: 262144


[hadoop@weekend110 apache-storm-0.9.2-incubating]$ jps

4065 worker

2116 QuorumPeerMain

4067 worker

4236 Jps

3220 supervisor

3160 nimbus

4059 worker

3210 core

4061 worker

[hadoop@weekend110 apache-storm-0.9.2-incubating]$


If this were a 3-node distributed Storm cluster, the same processes would be spread across the three nodes.


[hadoop@weekend110 apache-storm-0.9.2-incubating]$ cd /home/hadoop/stormoutput/

[hadoop@weekend110 stormoutput]$ ll

total 32

-rw-rw-r--. 1 hadoop hadoop 7741 Oct 15 15:57 148996a9-4c34-498b-8199-5c887cd4a7f0

-rw-rw-r--. 1 hadoop hadoop 7683 Oct 15 15:57 4a71fb82-1562-45dd-886c-b5610a202fd0

-rw-rw-r--. 1 hadoop hadoop 7681 Oct 15 15:57 71b93a13-4b79-460f-a1c9-b454d24e925d

-rw-rw-r--. 1 hadoop hadoop 7744 Oct 15 15:57 b20451ec-9bdd-4f92-a295-814a69b1a6e8

[hadoop@weekend110 stormoutput]$

These are the 4 output files produced.


[hadoop@weekend110 stormoutput]$ tail -f 148996a9-4c34-498b-8199-5c887cd4a7f0

XIAOMI_itisok

MOTO_itisok

MOTO_itisok

MOTO_itisok

MOTO_itisok

MATE_itisok

MEIZU_itisok

XIAOMI_itisok

SONY_itisok

MATE_itisok

MEIZU_itisok

IPHONE_itisok

MEIZU_itisok

XIAOMI_itisok

MATE_itisok

MOTO_itisok

MOTO_itisok

SONY_itisok

MEIZU_itisok

MOTO_itisok

MATE_itisok

MEIZU_itisok

MATE_itisok

SUMSUNG_itisok

MATE_itisok

MATE_itisok

MEIZU_itisok

SONY_itisok

MEIZU_itisok

MATE_itisok

MOTO_itisok

SONY_itisok

XIAOMI_itisok

SONY_itisok

MOTO_itisok

MATE_itisok

IPHONE_itisok

SONY_itisok

XIAOMI_itisok

SUMSUNG_itisok

SUMSUNG_itisok

SONY_itisok

MEIZU_itisok

IPHONE_itisok

MATE_itisok

MATE_itisok

MOTO_itisok

XIAOMI_itisok

SUMSUNG_itisok

MATE_itisok

MOTO_itisok

MATE_itisok

SUMSUNG_itisok

SONY_itisok

XIAOMI_itisok

IPHONE_itisok

SUMSUNG_itisok

MEIZU_itisok

MOTO_itisok

SUMSUNG_itisok

MOTO_itisok

MATE_itisok

XIAOMI_itisok

MOTO_itisok

IPHONE_itisok

MATE_itisok

SONY_itisok

XIAOMI_itisok

IPHONE_itisok

IPHONE_itisok

XIAOMI_itisok

SONY_itisok

MATE_itisok

MOTO_itisok

SUMSUNG_itisok

SONY_itisok

MATE_itisok

XIAOMI_itisok

SONY_itisok

XIAOMI_itisok


[hadoop@weekend110 apache-storm-0.9.2-incubating]$ cd bin/

[hadoop@weekend110 bin]$ clear

[hadoop@weekend110 bin]$ cd /home/hadoop/stormoutput/

[hadoop@weekend110 stormoutput]$ clear

[hadoop@weekend110 stormoutput]$ pwd

/home/hadoop/stormoutput

[hadoop@weekend110 stormoutput]$ ll

total 64

-rw-rw-r--. 1 hadoop hadoop 12868 Oct 15 16:00 148996a9-4c34-498b-8199-5c887cd4a7f0

-rw-rw-r--. 1 hadoop hadoop 12885 Oct 15 16:00 4a71fb82-1562-45dd-886c-b5610a202fd0

-rw-rw-r--. 1 hadoop hadoop 12863 Oct 15 16:00 71b93a13-4b79-460f-a1c9-b454d24e925d

-rw-rw-r--. 1 hadoop hadoop 12903 Oct 15 16:00 b20451ec-9bdd-4f92-a295-814a69b1a6e8

[hadoop@weekend110 stormoutput]$ tail -f 4a71fb82-1562-45dd-886c-b5610a202fd0

MEIZU_itisok

SONY_itisok

MOTO_itisok

MOTO_itisok

MOTO_itisok

SONY_itisok

MEIZU_itisok

SUMSUNG_itisok

XIAOMI_itisok

XIAOMI_itisok

MEIZU_itisok

SONY_itisok

SUMSUNG_itisok

XIAOMI_itisok

SONY_itisok

MEIZU_itisok

SUMSUNG_itisok

MEIZU_itisok

SUMSUNG_itisok

IPHONE_itisok

SUMSUNG_itisok

SONY_itisok

MOTO_itisok

XIAOMI_itisok

SONY_itisok

MOTO_itisok

SONY_itisok

MOTO_itisok

MATE_itisok

MOTO_itisok

MATE_itisok

MEIZU_itisok

MATE_itisok

SONY_itisok

SUMSUNG_itisok

MATE_itisok

XIAOMI_itisok

SUMSUNG_itisok

SUMSUNG_itisok

SUMSUNG_itisok

MATE_itisok

SONY_itisok

MEIZU_itisok


[hadoop@weekend110 stormoutput]$ pwd

/home/hadoop/stormoutput

[hadoop@weekend110 stormoutput]$ ll

total 64

-rw-rw-r--. 1 hadoop hadoop 14265 Oct 15 16:01 148996a9-4c34-498b-8199-5c887cd4a7f0

-rw-rw-r--. 1 hadoop hadoop 14282 Oct 15 16:01 4a71fb82-1562-45dd-886c-b5610a202fd0

-rw-rw-r--. 1 hadoop hadoop 14263 Oct 15 16:01 71b93a13-4b79-460f-a1c9-b454d24e925d

-rw-rw-r--. 1 hadoop hadoop 14334 Oct 15 16:01 b20451ec-9bdd-4f92-a295-814a69b1a6e8

[hadoop@weekend110 stormoutput]$ tail -f 71b93a13-4b79-460f-a1c9-b454d24e925d

MEIZU_itisok

SUMSUNG_itisok

SUMSUNG_itisok

SUMSUNG_itisok

MOTO_itisok

SUMSUNG_itisok

MOTO_itisok

SONY_itisok

SUMSUNG_itisok

IPHONE_itisok

MOTO_itisok

SUMSUNG_itisok

MATE_itisok

MATE_itisok

MOTO_itisok

MOTO_itisok

IPHONE_itisok

XIAOMI_itisok

XIAOMI_itisok

SUMSUNG_itisok

XIAOMI_itisok

MOTO_itisok

SONY_itisok

SUMSUNG_itisok

IPHONE_itisok

IPHONE_itisok

MEIZU_itisok

SONY_itisok

MOTO_itisok

SUMSUNG_itisok

IPHONE_itisok

XIAOMI_itisok

MEIZU_itisok

MOTO_itisok

MEIZU_itisok

XIAOMI_itisok

IPHONE_itisok

SONY_itisok

MATE_itisok


[hadoop@weekend110 stormoutput]$ pwd

/home/hadoop/stormoutput

[hadoop@weekend110 stormoutput]$ ll

total 64

-rw-rw-r--. 1 hadoop hadoop 15994 Oct 15 16:02 148996a9-4c34-498b-8199-5c887cd4a7f0

-rw-rw-r--. 1 hadoop hadoop 15985 Oct 15 16:02 4a71fb82-1562-45dd-886c-b5610a202fd0

-rw-rw-r--. 1 hadoop hadoop 15989 Oct 15 16:02 71b93a13-4b79-460f-a1c9-b454d24e925d

-rw-rw-r--. 1 hadoop hadoop 16051 Oct 15 16:02 b20451ec-9bdd-4f92-a295-814a69b1a6e8

[hadoop@weekend110 stormoutput]$ tail -f b20451ec-9bdd-4f92-a295-814a69b1a6e8

XIAOMI_itisok

XIAOMI_itisok

MEIZU_itisok

SUMSUNG_itisok

XIAOMI_itisok

MOTO_itisok

MATE_itisok

SUMSUNG_itisok

SUMSUNG_itisok

MATE_itisok

IPHONE_itisok

XIAOMI_itisok

MEIZU_itisok

IPHONE_itisok

SUMSUNG_itisok

XIAOMI_itisok

SUMSUNG_itisok

MEIZU_itisok

SONY_itisok

MEIZU_itisok

MOTO_itisok

SONY_itisok

MOTO_itisok

MOTO_itisok

MOTO_itisok

MEIZU_itisok

MOTO_itisok

XIAOMI_itisok

SUMSUNG_itisok

MATE_itisok

MOTO_itisok

MATE_itisok

XIAOMI_itisok

MEIZU_itisok

SONY_itisok

MOTO_itisok

MOTO_itisok

SUMSUNG_itisok

SONY_itisok

XIAOMI_itisok

XIAOMI_itisok

SUMSUNG_itisok

MOTO_itisok

SUMSUNG_itisok

MOTO_itisok


As the tail outputs show, the data is distributed randomly across the four output files.
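That random distribution can be illustrated with a small stand-alone simulation (Python, no Storm involved; the writer count and tuple values simply mirror the demo): shuffle grouping sends each tuple to a randomly chosen downstream executor, so each of the 4 writers receives a roughly equal share.

```python
import random
from collections import Counter

random.seed(42)  # deterministic for the example

brands = ["XIAOMI", "MOTO", "MATE", "MEIZU", "SONY", "IPHONE", "SUMSUNG"]
writers = 4  # one writer-bolt executor per output file, as in the demo

# Simulate shuffle grouping: each emitted tuple goes to a random executor.
received = Counter()
for _ in range(10000):
    tuple_value = random.choice(brands) + "_itisok"  # suffix the bolt appends
    target = random.randrange(writers)               # shuffle grouping choice
    received[target] += 1

# Every writer ends up with close to 10000/4 = 2500 tuples.
for writer, count in sorted(received.items()):
    print(f"writer-{writer}: {count} tuples")
assert all(abs(c - 2500) < 250 for c in received.values())
```

This is why the four files in /home/hadoop/stormoutput/ grow at nearly the same rate.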

Storm's concurrency-model diagram follows (screenshot not reproduced here).

Storm concepts (screenshot not reproduced here).

Deeper Storm topics:

Implementing distributed shared locks

Transactional topologies: their mechanism and development model

Integrating with other frameworks in concrete scenarios (ingress: flume/activeMQ/kafka (distributed message queue systems); egress: redis/hbase/mysql cluster)

Note that Storm rarely runs standalone: in real business pipelines, data flows in and data flows out.

Ingress: distributed message queue systems such as flume/activeMQ/kafka.

Currently, Storm + Kafka is the classic combination.


Egress: e.g. redis/hbase/mysql cluster.

Appendix: PPT (the slides were embedded as images).


conf.setNumWorkers(4) requests 4 worker processes to run all components of the topology.

builder.setBolt("boltA", new BoltA(), 4)  ----> bolt boltA gets 4 executor threads

builder.setBolt("boltB", new BoltB(), 4)  ----> bolt boltB gets 4 executor threads

builder.setSpout("randomSpout", new RandomSpout(), 2)  ----> spout randomSpout gets 2 executor threads

----- So the topology runs 4 + 4 + 2 = 10 executor threads in total.

---- With 4 workers, the load may be spread like this: worker-1 gets 2 threads, worker-2 gets 2 threads, worker-3 gets 3 threads, worker-4 gets 3 threads.

To pin the number of task instances for a component:

builder.setSpout("randomspout", new RandomWordSpout(), 4).setNumTasks(8);

---- meaning each of this component's 4 executor threads runs 8/4 = 2 tasks.
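The arithmetic above can be checked with a small stand-alone sketch (Python for illustration; the component names mirror the Java example, and the worker layout shown is just one possible even spread, since Storm's scheduler decides the real assignment):

```python
# Executor counts as declared in the example topology
executors = {"boltA": 4, "boltB": 4, "randomSpout": 2}
workers = 4  # conf.setNumWorkers(4)

total_executors = sum(executors.values())
print(total_executors)  # 4 + 4 + 2 = 10 executor threads in total

# One possible even spread of 10 executors over 4 workers (round robin):
per_worker = [total_executors // workers + (1 if i < total_executors % workers else 0)
              for i in range(workers)]
print(per_worker)  # [3, 3, 2, 2] -- the 2/2/3/3 split above, reordered

# setNumTasks(8) with parallelism hint 4 -> each executor runs 8/4 = 2 tasks
tasks, parallelism = 8, 4
print(tasks // parallelism)  # → 2
```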

