I was reading about AWS Kinesis. In the following program, I write data into the stream named TestStream. I ran this piece of code 10 times, inserting 10 records into the stream.
// Assumes the AWS SDK for JavaScript (v2) is installed and
// credentials/region are configured for the account.
var AWS = require('aws-sdk');
var kinesis = new AWS.Kinesis();

var params = {
  Data: 'More Sample data into the test stream ...', // the record payload
  PartitionKey: 'TestKey_1',                         // used to pick the target shard
  StreamName: 'TestStream'
};

kinesis.putRecord(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
All the records were inserted successfully. What does the partition key really mean here? What is it doing in the background? I read its documentation but did not understand what it meant.
1 Answer
#1
Partition keys only matter when you have multiple shards in a stream (but they're always required). Kinesis computes the MD5 hash of the partition key to decide which shard to store the record on (if you describe the stream, you'll see each shard's hash key range as part of the shard description).
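You can see that mapping yourself by recomputing the hash and comparing it against the hash key ranges that describeStream returns. A minimal sketch, assuming the kinesis client from the question and a Node version with BigInt support (on a resharded stream, closed shards can have overlapping ranges, which this ignores):

var crypto = require('crypto');

// Kinesis hashes the partition key with MD5 and treats the digest as a
// 128-bit integer; the record goes to the open shard whose HashKeyRange
// contains that integer.
function partitionKeyToHash(partitionKey) {
  var hex = crypto.createHash('md5').update(partitionKey).digest('hex');
  return BigInt('0x' + hex);
}

kinesis.describeStream({ StreamName: 'TestStream' }, function(err, data) {
  if (err) return console.log(err, err.stack);
  var hash = partitionKeyToHash('TestKey_1');
  data.StreamDescription.Shards.forEach(function(shard) {
    var start = BigInt(shard.HashKeyRange.StartingHashKey);
    var end = BigInt(shard.HashKeyRange.EndingHashKey);
    if (hash >= start && hash <= end) {
      console.log('TestKey_1 maps to', shard.ShardId);
    }
  });
});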
So why does this matter?
Each shard can only accept 1,000 records and/or 1 MB per second (see the PutRecord docs). If you write to a single shard faster than that, you'll get a ProvisionedThroughputExceededException.
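When that happens, the usual response is to back off and retry. A rough sketch wrapping the putRecord call from the question (the retry count and delays are arbitrary, and the SDK also does some retrying of its own):

// Retry with exponential backoff when a shard is over its
// 1,000 records/sec or 1 MB/sec write limit.
function putWithBackoff(params, attempt) {
  kinesis.putRecord(params, function(err, data) {
    if (err && err.code === 'ProvisionedThroughputExceededException' && attempt < 5) {
      var delayMs = 100 * Math.pow(2, attempt); // 100, 200, 400, ... ms
      setTimeout(function() { putWithBackoff(params, attempt + 1); }, delayMs);
    } else if (err) {
      console.log(err, err.stack); // some other error, or out of retries
    } else {
      console.log(data); // successful response
    }
  });
}

putWithBackoff(params, 0);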
With multiple shards, you scale this limit: 4 shards give you 4,000 records and/or 4 MB per second. Of course, there are caveats.
The biggest caveat is that you must use different partition keys. If all of your records use the same partition key, then you're still writing to a single shard, because they'll all have the same hash value. How you solve this depends on your application: if you're writing from multiple processes, it might be sufficient to use the process ID, the server's IP address, or the hostname. If you're writing from a single process, you can either use information that's in the record (for example, a unique record ID) or generate a random string.
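For the single-process case, a random key is a one-liner. A sketch, reusing the payload and stream name from the question:

var crypto = require('crypto');

// A short random partition key per record spreads writes across shards.
kinesis.putRecord({
  Data: 'More Sample data into the test stream ...',
  PartitionKey: crypto.randomBytes(8).toString('hex'), // e.g. 'a1b2c3d4e5f60718'
  StreamName: 'TestStream'
}, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data);
});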
The second caveat is that the partition key counts against the total write size and is stored in the stream. So while you could probably get good randomness by using some textual component of the record, you'd be wasting space. On the other hand, if you have some random textual component, you could calculate your own hash from it and then stringify that for the partition key.
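For example, rather than using a long record field directly as the partition key, you could hash it down to a fixed-length string first. A sketch (compactKey is an illustrative helper, not part of the SDK):

var crypto = require('crypto');

// Hash a longer record field down to a fixed 32-character key, so the
// key still varies per record but doesn't bloat the write size.
function compactKey(text) {
  return crypto.createHash('md5').update(text).digest('hex');
}

var key = compactKey('some long-ish textual field from the record');
console.log(key, key.length); // always 32 characters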
Lastly, if you're using PutRecords (which you should be, if you're writing a lot of data), individual records in a request may be rejected while others are accepted. This happens because those records went to a shard that was already at its write limit, and you have to re-send them (after a delay).
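A sketch of that re-send loop (the batch contents here are made up, and a real PutRecords request is also capped at 500 records, which this ignores):

// putRecords returns per-record results: rejected entries carry an
// ErrorCode, and FailedRecordCount totals them. Re-send only the
// failed entries after a delay.
function putBatch(records) {
  kinesis.putRecords({ StreamName: 'TestStream', Records: records }, function(err, data) {
    if (err) return console.log(err, err.stack); // whole request failed
    if (data.FailedRecordCount > 0) {
      var failed = records.filter(function(record, i) {
        return data.Records[i].ErrorCode; // set only on rejected entries
      });
      setTimeout(function() { putBatch(failed); }, 200); // naive fixed delay
    }
  });
}

putBatch([
  { Data: 'record 1', PartitionKey: 'key-1' },
  { Data: 'record 2', PartitionKey: 'key-2' }
]);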