A Region Split request is triggered right after a Region MemStore flush:
boolean shouldCompact = region.flushcache();
// We just want to check the size
boolean shouldSplit = region.checkSplit() != null;
if (shouldSplit) {
  this.server.compactSplitThread.requestSplit(region);
} else if (shouldCompact) {
  server.compactSplitThread.requestCompaction(region, getName());
}
server.getMetrics().addFlush(region.getRecentFlushInfo());
Once the region flush has completed, checkSplit is invoked. A non-null return value (the region's SplitPoint) means the region has met the conditions for a split, and a split request is issued.
The checkSplit method is defined as follows:
/**
* Return the splitpoint. null indicates the region isn't splittable. If the
* splitpoint isn't explicitly specified, it will go over the stores to find
* the best splitpoint. Currently the criteria of best splitpoint is based
* on the size of the store.
*/
public byte[] checkSplit() {
  // Can't split ROOT/META
  if (this.regionInfo.isMetaTable()) {
    if (shouldForceSplit()) {
      LOG.warn("Cannot split root/meta regions in HBase 0.20 and above");
    }
    return null;
  }
  if (!splitPolicy.shouldSplit()) {
    return null;
  }
  byte[] ret = splitPolicy.getSplitPoint();
  if (ret != null) {
    try {
      checkRow(ret, "calculated split");
    } catch (IOException e) {
      LOG.error("Ignoring invalid split", e);
      return null;
    }
  }
  return ret;
}
As the code shows, a region belonging to a catalog table (ROOT/META) is never allowed to split. Otherwise the decision is delegated to the region's RegionSplitPolicy instance, in two steps (the sketch after this list shows the contract these steps define):
(1) decide whether the region is allowed to split;
(2) if it is, compute the region's SplitPoint.
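These two steps are precisely the contract a RegionSplitPolicy subclass implements. As a purely illustrative sketch (not part of the HBase source; it assumes the 0.94-era API, and the class name is made up), a policy that vetoes step (1) while inheriting step (2) could look like this:

// Illustrative only: a policy that never allows automatic splits.
public class NeverSplitPolicy extends ConstantSizeRegionSplitPolicy {
  // Step (1): this region is never allowed to split on its own.
  @Override
  protected boolean shouldSplit() {
    return false;
  }
  // Step (2), getSplitPoint(), is inherited from RegionSplitPolicy: it honors
  // an explicit force-split point and otherwise asks the largest store.
}

Such a class would be wired in per table via the SPLIT_POLICY table attribute shown below.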
RegionSplitPolicy shouldSplit
Unless a policy is explicitly specified in the table schema, the RegionSplitPolicy defaults to an instance of org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy; the corresponding configuration key is hbase.regionserver.region.split.policy.
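As an aside (an illustrative sketch, not quoted from the source; the table and family names are made up, and it assumes the 0.94-era client API), the policy can be swapped per table through the SPLIT_POLICY attribute of the table descriptor:

Configuration conf = HBaseConfiguration.create();
HTableDescriptor htd = new HTableDescriptor("t");   // hypothetical table name
htd.addFamily(new HColumnDescriptor("cf"));         // hypothetical family
// Store the policy class name in the descriptor; the RegionServer reads it
// back when it instantiates the RegionSplitPolicy for each region.
htd.setValue(HTableDescriptor.SPLIT_POLICY,
    ConstantSizeRegionSplitPolicy.class.getName());
new HBaseAdmin(conf).createTable(htd);

A cluster-wide default can instead be set through hbase.regionserver.region.split.policy in hbase-site.xml.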
The shouldSplit method of IncreasingToUpperBoundRegionSplitPolicy reads:
@Override
protected boolean shouldSplit() {
  if (region.shouldForceSplit()) {
    return true;
  }
  boolean foundABigStore = false;
  // Get count of regions that have the same common table as this.region
  int tableRegionsCount = getCountOfCommonTableRegions();
  // Get size to check
  long sizeToCheck = getSizeToCheck(tableRegionsCount);
  for (Store store : region.getStores().values()) {
    // If any of the stores is unable to split (eg they contain
    // reference files) then don't split
    if ((!store.canSplit())) {
      return false;
    }
    // Mark if any store is big enough
    long size = store.getSize();
    if (size > sizeToCheck) {
      LOG.debug("ShouldSplit because " + store.getColumnFamilyName()
          + " size=" + size + ", sizeToCheck=" + sizeToCheck
          + ", regionsWithCommonTable=" + tableRegionsCount);
      foundABigStore = true;
      break;
    }
  }
  return foundABigStore;
}
Execution flow:
(1) if a force split has been requested for the region, return true immediately;
(2) compute the upper size limit applied to each store in the region;
(3) check the region's stores in turn; as soon as one store exceeds the limit, stop and return true.
A store only takes part in the size check if it can be split, i.e. it contains no reference files. A reference file in a store means the region is itself the daughter of an earlier split (its references have not yet been compacted away) and cannot be split again, so shouldSplit returns false immediately.
The interesting part is how the per-store size limit is computed:
(1) Let t be the table the current region belongs to; count the online regions of t hosted on the same RegionServer as this region, and keep the result in tableRegionsCount.
// Get count of regions that have the same common table as this.region
int tableRegionsCount = getCountOfCommonTableRegions();
getCountOfCommonTableRegions is defined as follows:
/**
* @return Count of regions on this server that share the table this.region
* belongs to
*/
private int getCountOfCommonTableRegions() {
  RegionServerServices rss = this.region.getRegionServerServices();
  // Can be null in tests
  if (rss == null) {
    return 0;
  }
  byte[] tablename = this.region.getTableDesc().getName();
  int tableRegionsCount = 0;
  try {
    List<HRegion> hri = rss.getOnlineRegions(tablename);
    tableRegionsCount = hri == null || hri.isEmpty() ? 0 : hri.size();
  } catch (IOException e) {
    LOG.debug("Failed getOnlineRegions " + Bytes.toString(tablename), e);
  }
  return tableRegionsCount;
}
It first obtains the RegionServerServices instance of the RegionServer hosting the region:
RegionServerServices rss = this.region.getRegionServerServices();
then the name of the table the region belongs to:
byte[] tablename = this.region.getTableDesc().getName();
and finally the online regions of table tablename on this RegionServer, whose count is returned:
List<HRegion> hri = rss.getOnlineRegions(tablename);
(2) Derive the size limit from tableRegionsCount:
// Get size to check
long sizeToCheck = getSizeToCheck(tableRegionsCount);
getSizeToCheck is defined as follows:
/**
* @return Region max size or
* <code>count of regions squared * flushsize, which ever is
* smaller; guard against there being zero regions on this server.
*/
long getSizeToCheck(final int tableRegionsCount) {
  return tableRegionsCount == 0
      ? getDesiredMaxFileSize()
      : Math.min(getDesiredMaxFileSize(),
          this.flushSize * (tableRegionsCount * tableRegionsCount));
}
The computation depends on the value of tableRegionsCount (a worked example follows this list):
(1) when tableRegionsCount is 0 (can this happen? yes: as the code above shows, the count falls back to 0 when RegionServerServices is null, e.g. in tests), the result is simply getDesiredMaxFileSize(); this value can be set per table at creation time and otherwise comes from the configuration key hbase.hregion.max.filesize, whose default is 10737418240, i.e. 10 GB;
(2) when tableRegionsCount is non-zero, the result is the minimum of getDesiredMaxFileSize() and this.flushSize * (tableRegionsCount * tableRegionsCount); flushSize can likewise be set per table at creation time and otherwise comes from hbase.hregion.memstore.flush.size, whose default is 134217728, i.e. 128 MB.
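With the default values, the threshold therefore grows quadratically with the number of the table's regions on the server until the 10 GB cap takes over. The following throwaway snippet (plain Java, not HBase code) reproduces the schedule:

// Illustrative only: reproduces getSizeToCheck() with the default values.
public class SplitThreshold {
  public static void main(String[] args) {
    long flushSize = 134217728L;             // hbase.hregion.memstore.flush.size default, 128 MB
    long desiredMaxFileSize = 10737418240L;  // hbase.hregion.max.filesize default, 10 GB
    for (int count = 1; count <= 10; count++) {
      long sizeToCheck = Math.min(desiredMaxFileSize, flushSize * count * count);
      System.out.println(count + " region(s): split when a store exceeds "
          + sizeToCheck + " bytes");
    }
  }
}

So with one region of the table on the server a store splits at 128 MB, with two regions at 512 MB, with three at 1152 MB, and from nine regions onward (9 * 9 * 128 MB > 10 GB) the threshold stays pinned at the 10 GB cap.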
RegionSplitPolicy getSplitPoint
Reaching this step means the region is allowed to split; the next task is to compute its SplitPoint.
The getSplitPoint method of RegionSplitPolicy is defined as follows:
/**
* @return the key at which the region should be split, or null if it cannot
* be split. This will only be called if shouldSplit previously
* returned true.
*/
protected byte[] getSplitPoint() {
  byte[] explicitSplitPoint = this.region.getExplicitSplitPoint();
  if (explicitSplitPoint != null) {
    return explicitSplitPoint;
  }
  Map<byte[], Store> stores = region.getStores();
  byte[] splitPointFromLargestStore = null;
  long largestStoreSize = 0;
  for (Store s : stores.values()) {
    byte[] splitPoint = s.getSplitPoint();
    long storeSize = s.getSize();
    if (splitPoint != null && largestStoreSize < storeSize) {
      splitPointFromLargestStore = splitPoint;
      largestStoreSize = storeSize;
    }
  }
  return splitPointFromLargestStore;
}
The execution flow is:
(1) if a SplitPoint was explicitly specified when the force split was requested, return it directly (see the sketch after this list);
(2) otherwise iterate over the region's stores, fetching each store's size and SplitPoint; the region's SplitPoint is that of the largest store.
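For completeness, the explicit SplitPoint of case (1) is what a client supplies when it force-splits a region. A hedged sketch against the 0.94-era HBaseAdmin API (the table name and row are made up):

Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
// Force-split table "t" at row "row5000": the region containing that row
// takes it as the explicit split point instead of computing a midkey.
admin.split("t", "row5000");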
The remaining question is how a store computes its own SplitPoint.
Store getSplitPoint
/**
* Determines if Store should be split
*
* @return byte[] if store should be split, null otherwise.
*/
public byte[] getSplitPoint() {
  this.lock.readLock().lock();
  try {
    // sanity checks
    if (this.storefiles.isEmpty()) {
      return null;
    }
    // Should already be enforced by the split policy!
    assert !this.region.getRegionInfo().isMetaRegion();
    // Not splitable if we find a reference store file present in the store.
    long maxSize = 0L;
    StoreFile largestSf = null;
    for (StoreFile sf : storefiles) {
      if (sf.isReference()) {
        // Should already be enforced since we return false in this case
        assert false : "getSplitPoint() called on a region that can't split!";
        return null;
      }
      StoreFile.Reader r = sf.getReader();
      if (r == null) {
        LOG.warn("Storefile " + sf + " Reader is null");
        continue;
      }
      long size = r.length();
      if (size > maxSize) {
        // This is the largest one so far
        maxSize = size;
        largestSf = sf;
      }
    }
    StoreFile.Reader r = largestSf.getReader();
    if (r == null) {
      LOG.warn("Storefile " + largestSf + " Reader is null");
      return null;
    }
    // Get first, last, and mid keys. Midkey is the key that starts block
    // in middle of hfile. Has column and timestamp. Need to return just
    // the row we want to split on as midkey.
    byte[] midkey = r.midkey();
    if (midkey != null) {
      KeyValue mk = KeyValue.createKeyValueFromKey(midkey, 0, midkey.length);
      byte[] fk = r.getFirstKey();
      KeyValue firstKey = KeyValue.createKeyValueFromKey(fk, 0, fk.length);
      byte[] lk = r.getLastKey();
      KeyValue lastKey = KeyValue.createKeyValueFromKey(lk, 0, lk.length);
      // if the midkey is the same as the first or last keys, then we
      // cannot (ever) split this region.
      if (this.comparator.compareRows(mk, firstKey) == 0
          || this.comparator.compareRows(mk, lastKey) == 0) {
        if (LOG.isDebugEnabled()) {
          LOG.debug("cannot split because midkey is the same as first or "
              + "last row");
        }
        return null;
      }
      return mk.getRow();
    }
  } catch (IOException e) {
    LOG.warn("Failed getting store size for " + this, e);
  } finally {
    this.lock.readLock().unlock();
  }
  return null;
}
Execution flow:
(1) pick the largest StoreFile, largestSf, among the store's StoreFiles;
long maxSize = 0L;
StoreFile largestSf = null;
for (StoreFile sf : storefiles) {
  if (sf.isReference()) {
    // Should already be enforced since we return false in this case
    assert false : "getSplitPoint() called on a region that can't split!";
    return null;
  }
  StoreFile.Reader r = sf.getReader();
  if (r == null) {
    LOG.warn("Storefile " + sf + " Reader is null");
    continue;
  }
  long size = r.length();
  if (size > maxSize) {
    // This is the largest one so far
    maxSize = size;
    largestSf = sf;
  }
}
(2) fetch largestSf's MidKey, FirstKey, and LastKey; if the MidKey equals the FirstKey or the LastKey, return null (why? see below), otherwise return the row of the MidKey.
// Get first, last, and mid keys. Midkey is the key that starts block
// in middle of hfile. Has column and timestamp. Need to return just
// the row we want to split on as midkey.
byte[] midkey = r.midkey();
if (midkey != null) {
  KeyValue mk = KeyValue.createKeyValueFromKey(midkey, 0, midkey.length);
  byte[] fk = r.getFirstKey();
  KeyValue firstKey = KeyValue.createKeyValueFromKey(fk, 0, fk.length);
  byte[] lk = r.getLastKey();
  KeyValue lastKey = KeyValue.createKeyValueFromKey(lk, 0, lk.length);
  // if the midkey is the same as the first or last keys, then we
  // cannot (ever) split this region.
  if (this.comparator.compareRows(mk, firstKey) == 0
      || this.comparator.compareRows(mk, lastKey) == 0) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("cannot split because midkey is the same as first or "
          + "last row");
    }
    return null;
  }
  return mk.getRow();
}
A StoreFile consists of multiple blocks (not to be confused with HDFS blocks), and the first key of each block is recorded in a dedicated block index inside the file. The MidKey here is the first key of the block sitting at the middle of the file, looked up in that index, while the FirstKey and LastKey are the first and last keys of the file as a whole.
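These three keys can be inspected directly from a store file. Below is an illustrative sketch (assuming the 0.94-era HFile reader API; the file path is hypothetical):

Configuration conf = HBaseConfiguration.create();
FileSystem fs = FileSystem.get(conf);
Path path = new Path("/hbase/t/<region>/cf/<storefile>");  // hypothetical path
HFile.Reader reader = HFile.createReader(fs, path, new CacheConfig(conf));
reader.loadFileInfo();  // populates first/last key metadata
System.out.println("first = " + Bytes.toStringBinary(reader.getFirstKey()));
System.out.println("mid   = " + Bytes.toStringBinary(reader.midkey()));
System.out.println("last  = " + Bytes.toStringBinary(reader.getLastKey()));
reader.close();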
A region split always happens on a row boundary: all cells of a given row stay together in one region. If the MidKey's row equals the FirstKey's or the LastKey's row, splitting at the MidKey would produce a daughter region whose rowkeys are all identical, i.e. a region holding just a single row (for example, one enormous row whose cells fill half the file). HBase considers such a region unreasonable, since it could never be split further, so the split is refused whenever the MidKey equals the FirstKey or the LastKey.
In summary, once a region satisfies the split conditions and a SplitPoint can be computed, the split request is issued:
this.server.compactSplitThread.requestSplit(region);