Having ignored it all this time, I am now forcing myself to learn more about Unicode in Java. There is an exercise I need to do on converting a UTF-16 string to 8-bit ASCII. Can someone please enlighten me on how to do this in Java? I understand that you can't represent all possible Unicode values in ASCII, so in this case I want any code that exceeds 0xFF to be added anyway (bad data should also just be added silently).
Thanks!
4 Answers
#1
5
How about this:
String input = ... // my UTF-16 string
StringBuilder sb = new StringBuilder(input.length());
for (int i = 0; i < input.length(); i++) {
    char ch = input.charAt(i);
    if (ch <= 0xFF) {
        sb.append(ch);
    }
}
// StandardCharsets.ISO_8859_1 (aka LATIN-1) avoids the checked
// UnsupportedEncodingException thrown by getBytes(String)
byte[] ascii = sb.toString().getBytes(StandardCharsets.ISO_8859_1);
This is probably not the most efficient way to do this conversion for large strings since we copy the characters twice. However, it has the advantage of being straightforward.
BTW, strictly speaking there is no such character set as 8-bit ASCII. ASCII is a 7-bit character set. LATIN-1 is the nearest thing there is to an "8-bit ASCII" character set (and block 0 of Unicode is equivalent to LATIN-1) so I'll assume that's what you mean.
EDIT: in the light of the update to the question, the solution is even simpler:
String input = ... // my UTF-16 string
byte[] ascii = new byte[input.length()];
for (int i = 0; i < input.length(); i++) {
    ascii[i] = (byte) input.charAt(i); // the cast silently keeps only the low 8 bits
}
This solution is more efficient. Since we now know how many bytes to expect, we can preallocate the byte array and copy the (truncated) characters without using a StringBuilder as an intermediate buffer.
However, I'm not convinced that dealing with bad data in this way is sensible.
EDIT 2: there is one more obscure "gotcha" with this. Unicode actually defines code points (characters) to be "roughly 21 bit" values ... 0x000000 to 0x10FFFF ... and uses surrogates to represent codes > 0x00FFFF. In other words, a Unicode codepoint > 0x00FFFF is actually represented in UTF-16 as two "characters". Neither my answer nor any of the others takes account of this (admittedly esoteric) point. In fact, dealing with codepoints > 0x00FFFF in Java is rather tricky in general. This stems from the fact that 'char' is a 16-bit type and String is defined in terms of 'char'.
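If codepoint-aware handling is wanted, one possible sketch (the helper name `toLatin1Bytes` is mine, not from the original answer) uses `String.codePoints()`, available since Java 8, so that a surrogate pair is seen as a single logical character:

```java
import java.io.ByteArrayOutputStream;

public class CodepointAscii {
    // Walk codepoints rather than chars, so a surrogate pair counts as
    // one character; anything above 0xFF becomes '?'.
    static byte[] toLatin1Bytes(String input) {
        ByteArrayOutputStream out = new ByteArrayOutputStream(input.length());
        input.codePoints().forEach(cp -> out.write(cp <= 0xFF ? cp : '?'));
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // "A" + a supplementary character (U+1F600, a surrogate pair) + "B"
        String s = "A" + new String(Character.toChars(0x1F600)) + "B";
        byte[] bytes = toLatin1Bytes(s);
        System.out.println(bytes.length); // 3, not 4: the pair counts once
    }
}
```

Here the supplementary character produces one `'?'` byte instead of two garbage bytes, which may or may not be what the exercise wants.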
EDIT 3: maybe a more sensible solution for dealing with unexpected characters that don't convert to ASCII is to replace them with the standard replacement character:
String input = ... // my UTF-16 string
byte[] ascii = new byte[input.length()];
for (int i = 0; i < input.length(); i++) {
    char ch = input.charAt(i);
    ascii[i] = (ch <= 0xFF) ? (byte) ch : (byte) '?';
}
#2
11
You can use java.nio for an easy solution:
// first encode the utf-16 string as a ByteBuffer
ByteBuffer bb = StandardCharsets.UTF_16.encode(CharBuffer.wrap(utf16str));
// then decode those bytes as US-ASCII; unmappable bytes
// become the replacement character '\uFFFD'
CharBuffer ascii = StandardCharsets.US_ASCII.decode(bb);
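If the goal is ASCII bytes rather than a CharBuffer, the same java.nio machinery can encode directly. This is a sketch (the helper name `encodeAscii` is mine) using a `CharsetEncoder` configured to substitute `'?'` for unmappable characters instead of throwing:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class AsciiEncode {
    // Encode to US-ASCII, replacing anything unmappable with '?'
    static byte[] encodeAscii(String s) {
        try {
            CharsetEncoder enc = StandardCharsets.US_ASCII.newEncoder()
                    .onMalformedInput(CodingErrorAction.REPLACE)
                    .onUnmappableCharacter(CodingErrorAction.REPLACE)
                    .replaceWith(new byte[] { (byte) '?' });
            ByteBuffer bb = enc.encode(CharBuffer.wrap(s));
            byte[] out = new byte[bb.remaining()];
            bb.get(out);
            return out;
        } catch (CharacterCodingException e) {
            throw new IllegalStateException(e); // REPLACE mode should not throw
        }
    }

    public static void main(String[] args) {
        byte[] ascii = encodeAscii("h\u00E9llo"); // 'é' is not ASCII
        System.out.println(new String(ascii, StandardCharsets.US_ASCII)); // h?llo
    }
}
```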
#3
2
Java internally represents strings in UTF-16. If a String object is what you are starting with, you can encode using String.getBytes(Charset c), where you might specify US-ASCII (which can map code points 0x00-0x7f) or ISO-8859-1 (which can map code points 0x00-0xff, and may be what you mean by "8-bit ASCII").
As for adding "bad data" ... ASCII or ISO-8859-1 strings simply can't represent values outside of a certain range. I believe getBytes will substitute the charset's default replacement byte (typically '?') for characters it's not able to represent in the destination character set.
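A quick check of that behavior is below; note the replacement byte is charset-specific, and for US-ASCII it is `'?'`:

```java
import java.nio.charset.StandardCharsets;

public class GetBytesDemo {
    public static void main(String[] args) {
        // 'é' cannot be represented in US-ASCII
        byte[] b = "h\u00E9llo".getBytes(StandardCharsets.US_ASCII);
        // The character is substituted, not dropped: length is unchanged
        System.out.println(b.length); // 5
        System.out.println(new String(b, StandardCharsets.US_ASCII)); // h?llo
    }
}
```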
#4
2
Since this is an exercise, it sounds like you need to implement this manually. You can think of an encoding (e.g. UTF-16 or ASCII) as a lookup table that matches a sequence of bytes to a logical character (a codepoint).
Java uses UTF-16 strings, which means that any given codepoint can be represented in one or two char variables. Whether you want to handle the two-char surrogate pairs depends on how likely you think your application is to encounter them (see the Character class for methods that detect them). ASCII only uses the first 7 bits of an octet (byte), so the valid range of values is 0 to 127. UTF-16 uses identical values for this range (they're just wider). This can be confirmed with this code:
Charset ascii = Charset.forName("US-ASCII");
byte[] buffer = new byte[1];
char[] cbuf = new char[1];
for (int i = 0; i <= 127; i++) {
    buffer[0] = (byte) i;
    cbuf[0] = (char) i;
    String decoded = new String(buffer, ascii);
    String utf16String = new String(cbuf);
    if (!utf16String.equals(decoded)) {
        throw new IllegalStateException();
    }
    System.out.print(utf16String);
}
System.out.println("\nOK");
Therefore, you can convert UTF-16 to ASCII by casting a char to a byte.
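For the surrogate pairs mentioned earlier, the Character class provides the detection methods; a small sketch:

```java
public class SurrogateCheck {
    public static void main(String[] args) {
        // U+1D11E (musical G clef) needs a surrogate pair in UTF-16
        String s = new String(Character.toChars(0x1D11E));
        System.out.println(s.length());                      // 2 chars...
        System.out.println(s.codePointCount(0, s.length())); // ...1 codepoint
        System.out.println(Character.isHighSurrogate(s.charAt(0))); // true
        System.out.println(Character.isLowSurrogate(s.charAt(1)));  // true
    }
}
```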
You can read more about Java character encoding here.