In C++, both char* and std::string are byte-oriented: sizeof(char) == 1. In other words, C++ itself knows nothing about character encodings. A valid UTF-8 character, however, may be 1 to 4 bytes long. Now suppose the input is UTF-8 encoded: how can we accurately locate the start of each UTF-8 code point without splitting a character in the middle?
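As a quick illustration of the problem (a minimal sketch; it assumes the source file and the string literal are UTF-8 encoded), std::string::size() reports bytes, not characters:

#include <iostream>
#include <string>

int main()
{
    // "中" occupies 3 bytes in UTF-8, so size() is 3
    // even though the string holds a single character.
    std::string s = "中";
    std::cout << s.size() << std::endl; // prints 3
    return 0;
}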
Referring to this page: http://www.nubaria.com/en/blog/?p=289
we can adapt the following function:
const unsigned char kFirstBitMask  = 128; // 10000000
const unsigned char kSecondBitMask = 64;  // 01000000
const unsigned char kThirdBitMask  = 32;  // 00100000
const unsigned char kFourthBitMask = 16;  // 00010000
const unsigned char kFifthBitMask  = 8;   // 00001000
// (kSecondBitMask and kFifthBitMask are unused below; they are kept from the referenced page.)

// Returns the length in bytes of the UTF-8 code point that begins
// with firstByte. Assumes firstByte is a valid UTF-8 lead byte.
int utf8_char_len(char firstByte)
{
    int offset = 1;
    if (firstByte & kFirstBitMask) // 1xxxxxxx: beyond the ASCII range, a multi-byte sequence.
    {
        if (firstByte & kThirdBitMask) // 111xxxxx: at least a three-octet code point.
        {
            if (firstByte & kFourthBitMask) // 1111xxxx: a four-octet code point.
                offset = 4;
            else // 1110xxxx: a three-octet code point.
                offset = 3;
        }
        else // 110xxxxx: a two-octet code point.
        {
            offset = 2;
        }
    }
    return offset;
}
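As a usage sketch (assuming the input is well-formed UTF-8 and the source file is saved as UTF-8), the function lets us walk a string one code point at a time:

#include <iostream>
#include <string>

int main()
{
    std::string text = "aé中𐍈"; // 1-, 2-, 3- and 4-byte code points
    for (std::size_t i = 0; i < text.size(); )
    {
        int len = utf8_char_len(text[i]);
        std::cout << text.substr(i, len) << std::endl; // one code point per line
        i += len;
    }
    return 0;
}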