// In the open() method
int spoutsSize = context.getComponentTasks(context.getThisComponentId()).size();
int myIdx = context.getThisTaskIndex();
String[] tracks = ((String) conf.get("track")).split(",");
A complete spout follows. The input parameter track carries multiple streams. In open(), modulo (%) arithmetic selects this instance's tracks; in nextTuple(), the spout reads the track data and emits it. Because each spout instance has a different myIdx, each instance is assigned its own subset of the tracks, so a single spout component can read multiple streams.
// ApiStreamingSpout.java
package twitter.streaming;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.http.HttpResponse;
import org.apache.http.StatusLine;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.log4j.Logger;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class ApiStreamingSpout extends BaseRichSpout {
    static String STREAMING_API_URL = "https://stream.twitter.com/1/statuses/filter.json?track=";
    private String track;
    private String user;
    private String password;
    private DefaultHttpClient client;
    private SpoutOutputCollector collector;
    private UsernamePasswordCredentials credentials;
    private BasicCredentialsProvider credentialProvider;

    LinkedBlockingQueue<String> tweets = new LinkedBlockingQueue<String>();
    static Logger LOG = Logger.getLogger(ApiStreamingSpout.class);
    static JSONParser jsonParser = new JSONParser();

    @Override
    public void nextTuple() {
        /*
         * Create the client call
         */
        client = new DefaultHttpClient();
        client.setCredentialsProvider(credentialProvider);
        HttpGet get = new HttpGet(STREAMING_API_URL + track); // track is unique to each spout instance
        HttpResponse response;
        try {
            // Execute
            response = client.execute(get);
            StatusLine status = response.getStatusLine();
            if (status.getStatusCode() == 200) {
                InputStream inputStream = response.getEntity().getContent();
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(inputStream));
                String in;
                // Read line by line
                while ((in = reader.readLine()) != null) {
                    try {
                        // Parse and emit
                        Object json = jsonParser.parse(in);
                        collector.emit(new Values(track, json));
                    } catch (ParseException e) {
                        LOG.error("Error parsing message from twitter", e);
                    }
                }
            }
        } catch (IOException e) {
            LOG.error("Error in communication with twitter api ["
                    + get.getURI().toString() + "]");
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e1) {
            }
        }
    }

    /**
     * spoutsSize and myIdx let one spout component split the tracks across
     * its instances.
     */
    @Override
    public void open(Map conf, TopologyContext context,
            SpoutOutputCollector collector) {
        int spoutsSize = context
                .getComponentTasks(context.getThisComponentId()).size();
        int myIdx = context.getThisTaskIndex();
        String[] tracks = ((String) conf.get("track")).split(",");
        StringBuffer tracksBuffer = new StringBuffer();
        for (int i = 0; i < tracks.length; i++) {
            if (i % spoutsSize == myIdx) {
                tracksBuffer.append(",");
                tracksBuffer.append(tracks[i]);
            }
        }
        if (tracksBuffer.length() == 0)
            throw new RuntimeException("No track found for spout"
                    + " [spoutsSize:" + spoutsSize + ", tracks:"
                    + tracks.length + "] the amount"
                    + " of tracks must be more than the spout parallelism");
        this.track = tracksBuffer.substring(1).toString();
        user = (String) conf.get("user");
        password = (String) conf.get("password");
        credentials = new UsernamePasswordCredentials(user, password);
        credentialProvider = new BasicCredentialsProvider();
        credentialProvider.setCredentials(AuthScope.ANY, credentials);
        this.collector = collector;
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("criteria", "tweet"));
    }
}
With this technique, the collectors are distributed across the data sources. The same technique can be applied in other scenarios, for example collecting log files from web servers. (PS: I have not tried this myself.)
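For context, here is a minimal sketch of how such a spout might be wired into a topology. The component name, track list, and credentials below are made up for illustration; only ApiStreamingSpout and the "track"/"user"/"password" config keys come from the code above.

// Hypothetical topology wiring: the "track" config entry holds all streams,
// and each of the two spout tasks picks its share in open() via the modulo logic.
Config conf = new Config();
conf.put("track", "storm,hadoop,kafka"); // illustrative track list
conf.put("user", "someUser");            // illustrative credentials
conf.put("password", "somePassword");

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("twitter-spout", new ApiStreamingSpout(), 2); // 2 tasks share 3 tracks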
2. A bolt can emit tuples to multiple streams with emit(streamId, tuple), where each stream is identified by a string streamId. Then, in the TopologyBuilder, you decide which stream to subscribe to.
I have not tried this. Two questions: how are the streams declared? And do spouts have this capability too?
Answer: 1. Declaring multiple streams in declareOutputFields() should be all it takes:
public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("line"));                  // default stream
    declarer.declareStream("second", new Fields("line2")); // extra stream named "second"
}
2. Judging from the bolt and spout implementations, both should support it; see the sketch below.
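A minimal sketch of both sides, assuming a source component named "source" and two hypothetical reader bolts (MainBolt, SecondBolt); only the stream name "second" comes from the declaration above:

// In the component's execute()/nextTuple(): emit to the default and the named stream.
collector.emit(new Values(line));            // default stream
collector.emit("second", new Values(line2)); // stream declared as "second"

// In the topology: each downstream bolt chooses which stream to subscribe to.
TopologyBuilder builder = new TopologyBuilder();
builder.setBolt("mainReader", new MainBolt())
       .shuffleGrouping("source");           // default stream of "source"
builder.setBolt("secondReader", new SecondBolt())
       .shuffleGrouping("source", "second"); // named stream "second"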
3. Does BaseRichSpout call ack automatically? Does implementing IBasicBolt give automatic acking?
"BaseBasicBolt is used to do the acking automatically" — that is, it acks for you. (Test result: with BaseBasicBolt, without an explicit input.ack() call, no ack count showed up in the Storm UI.) So it is safest to call input.ack() explicitly. PS: the current project uses the following pattern:
collector.emit(new Values(/* fields */));
input.ack();
Automatic acking via IBasicBolt works as follows; the Storm UI then shows the bolt's ack count.
// (imports from backtype.storm.topology.*, backtype.storm.tuple.* and java.util.Map omitted for brevity)
public class TotalBolt implements IBasicBolt {
    private static final long serialVersionUID = 1L;
    static Integer Total = new Integer(0);

    // Must be implemented; BasicOutputCollector anchors and acks automatically.
    public void execute(Tuple input, BasicOutputCollector collector) {
        try {
            // Throws IllegalArgumentException if the tuple has no PROVINCE_ID field.
            String clear = input.getStringByField("PROVINCE_ID");
            Total++;
            collector.emit(new Values(Total));
        } catch (IllegalArgumentException e) {
            // Tuples on the "signals24Hour" stream carry no PROVINCE_ID field:
            // treat them as a signal to reset the counter.
            if (input.getSourceStreamId().equals("signals24Hour")) {
                Total = 0;
            }
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("total")); // field name added so the emit above is valid
    }

    // Remaining IBasicBolt methods, required by the interface:
    public void prepare(Map stormConf, TopologyContext context) {}
    public void cleanup() {}
    public Map<String, Object> getComponentConfiguration() { return null; }
}
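A hedged wiring sketch for this bolt: the component names recordSpout and clockSpout are hypothetical, while "signals24Hour" is the stream name checked in the catch block above.

// TotalBolt counts tuples from a record stream and is reset by a
// "signals24Hour" stream coming from some clock-like component.
TopologyBuilder builder = new TopologyBuilder();
builder.setBolt("total", new TotalBolt())
       .shuffleGrouping("recordSpout")              // tuples carrying PROVINCE_ID
       .allGrouping("clockSpout", "signals24Hour"); // reset-signal stream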
4. Anchoring in bolts
The best way to keep track of the originating spout tuple is to include a reference to it in every derived tuple. This technique is called anchoring.
collector.emit(tuple, new Values(word));
Each message is anchored this way: pass the input tuple as the first argument to emit. Because the word tuple is anchored to the input tuple, which is the root of the tuple tree sent by the spout, if any word tuple fails to be processed, the spout tuple at the root of that tuple tree will be replayed. (That replay-on-failure guarantee is the benefit of anchoring.)
But could this cause tuples to be re-emitted and counts to be duplicated? Yes, it can.
The system uses a hash of the spout tuple's messageId to decide which acker task tracks the tuple tree derived from that message.
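A minimal sketch of the anchored-emit pattern in a BaseRichBolt; the class and field names are illustrative, not from the project:

// Explicit anchoring plus explicit ack/fail in a rich bolt.
public class SplitBolt extends BaseRichBolt {
    private OutputCollector collector;

    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    public void execute(Tuple input) {
        for (String word : input.getString(0).split(" ")) {
            collector.emit(input, new Values(word)); // anchored: input is the parent tuple
        }
        collector.ack(input);  // report success; on error call collector.fail(input) instead
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}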
The figure here showed the emit method overloads of OutputCollector (the bolt-side collector). [figure not included]
Test result: after adding anchoring, no visible difference was observed.
A lot of bolts follow a common pattern of reading an input tuple, emitting tuples based on it, and then acking the tuple at the end of the execute method. These bolts fall into the categories of filters and simple functions. Storm has an interface called IBasicBolt that encapsulates this pattern for you. The SplitSentence example can be written as an IBasicBolt like follows:
public class SplitSentence implements IBasicBolt {
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String sentence = tuple.getString(0);
        for (String word : sentence.split(" ")) {
            collector.emit(new Values(word));
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
This implementation is simpler than the implementation from before and is semantically identical. Tuples emitted to BasicOutputCollector are automatically anchored to the input tuple, and the input tuple is acked for you automatically when the execute method completes.
In contrast, bolts that do aggregations or joins may delay acking a tuple until after it has computed a result based on a bunch of tuples. Aggregations and joins will commonly multi-anchor their output tuples as well. These things fall outside the simpler pattern of IBasicBolt.
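A sketch of what multi-anchoring might look like inside a rich bolt's execute method; tupleA, tupleB, and aggregatedResult are illustrative, and the buffering logic that produced them is omitted:

// An output tuple anchored to several input tuples, so a failure anywhere
// in the downstream tree replays all of them.
List<Tuple> anchors = new ArrayList<Tuple>();
anchors.add(tupleA);
anchors.add(tupleB);
collector.emit(anchors, new Values(aggregatedResult));
for (Tuple t : anchors) {
    collector.ack(t); // ack each input once the aggregate has been emitted
}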
5. Fault tolerance at every level of the cluster
Question: input.ack() is called explicitly, yet the logs show no call to the spout's ack method. Why?
Reason: the SpoutOutputCollector.emit() call in the spout did not include a messageId; that is, the untracked form emit(values) was used instead of emit(values, messageId). With the second form the callback fires, and the acks show up in the Storm UI.
SpoutOutputCollector's emit method (the messageId can be a Long or an Integer. "If the messageId was null, Storm will not track the tuple and no callback will be received. The emitted values must be immutable."):
collector.emit(new Values(str), messageID++); // messageID is of type long
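A sketch of the tracked form together with the matching callbacks in the spout; fetchNext() is a hypothetical data source, and the rest is standard BaseRichSpout overriding:

private long messageID = 0; // per-instance counter used as the messageId

public void nextTuple() {
    String str = fetchNext();                     // hypothetical source of data
    collector.emit(new Values(str), messageID++); // non-null messageId => tuple is tracked
}

@Override
public void ack(Object msgId) {
    LOG.info("tuple fully processed, messageId=" + msgId);
}

@Override
public void fail(Object msgId) {
    LOG.warn("tuple failed or timed out, messageId=" + msgId); // candidate for replay
}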
By default a message is considered successfully processed only if it completes within a 30-second timeout; past 30s, the spout's fail method is triggered. This value can be tuned to the actual cluster by calling conf.setMessageTimeoutSecs(int secs) on the topology's Config.
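For example, assuming a 60-second budget is appropriate for the cluster:

Config conf = new Config();
conf.setMessageTimeoutSecs(60); // fail() fires if a tuple is not fully acked within 60s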
Storm UI: [screenshot not included]