Legacy software frequently assumes that every character in a string occupies 8 bits (a Java byte). The Java language assumes that every character in a string occupies 16 bits (a Java char). Unfortunately, neither the Java byte nor the Java char data type can represent all possible Unicode characters. Many strings are stored or communicated using an encoding such as UTF-8 that allows characters to have varying sizes.
While Java strings are stored as an array of characters and can be represented as an array of bytes, a single character in the string might be represented by two or more consecutive elements of type byte or of type char. Splitting a char or byte array risks splitting a multibyte character.
Ignoring the possibility of supplementary characters, multibyte characters, or combining characters (characters that modify other characters) may allow an attacker to bypass input validation checks. Consequently, programs must not split the series of bytes or char values that represents a single character between two data structures.
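For example, the following minimal sketch (the character and split point are chosen only for illustration) decodes the two bytes of a single UTF-8 character separately and shows that the character is destroyed:

public class SplitCharacterDemo {
  public static void main(String[] args) throws java.io.UnsupportedEncodingException {
    // U+00E9 (e with acute accent) occupies two bytes in UTF-8: 0xC3 0xA9
    byte[] utf8 = "\u00e9".getBytes("UTF-8");

    // Decoding the complete sequence preserves the character
    System.out.println(new String(utf8, "UTF-8"));

    // Decoding the two bytes separately splits the character; each half is
    // malformed on its own and is replaced with U+FFFD by the decoder
    String firstHalf = new String(utf8, 0, 1, "UTF-8");
    String secondHalf = new String(utf8, 1, 1, "UTF-8");
    System.out.println(firstHalf + secondHalf); // two replacement characters
  }
}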
Multibyte Characters
Multibyte encodings such as UTF-8 are used for character sets that require more than one byte to uniquely identify each constituent character. For example, Shift-JIS (shown below), one of the Japanese encodings, is a multibyte encoding in which a character is at most two bytes long (one leading byte and one trailing byte).
...
The trailing byte ranges overlap the ranges of both the single-byte and lead-byte characters. When a multibyte character is separated across a buffer boundary, it can be interpreted differently than if it were not separated across the buffer boundary; this difference arises because of the ambiguity of its composing bytes [Phillips 2005].
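The following sketch illustrates this with Shift-JIS (the Japanese string and the split point are illustrative assumptions). Decoding the two halves of a buffer separately turns the lead byte of a split character into a malformed sequence and lets its trailing byte be read as an unrelated single-byte character:

public class ShiftJisSplitDemo {
  public static void main(String[] args) throws java.io.UnsupportedEncodingException {
    // Two kanji; each occupies two bytes in Shift-JIS, so the encoded form is four bytes
    String original = "\u6f22\u5b57";
    byte[] sjis = original.getBytes("Shift_JIS");

    // Decoding the complete sequence round-trips correctly
    System.out.println(new String(sjis, "Shift_JIS").equals(original)); // true

    // Splitting the buffer after the first byte separates the lead byte of the
    // first character from its trailing byte. Decoded on its own, the lead byte
    // is malformed, while the trailing byte falls in the single-byte range and
    // is read as an unrelated character.
    String left = new String(sjis, 0, 1, "Shift_JIS");
    String right = new String(sjis, 1, sjis.length - 1, "Shift_JIS");
    System.out.println((left + right).equals(original)); // false
  }
}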
Supplementary Characters
According to the Java API [API 2006] class Character documentation (Unicode Character Representations):
The char data type (and consequently the value that a Character object encapsulates) are [sic] based on the original Unicode specification, which defined characters as fixed-width 16-bit entities. The Unicode standard has since been changed to allow for characters whose representation requires more than 16 bits. The range of legal code points is now U+0000 to U+10FFFF, known as Unicode scalar value. The Java 2 platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes. In this representation, supplementary characters are represented as a pair of char values, the first from the high-surrogates range (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).

An int value represents all Unicode code points, including supplementary code points. The lower (least significant) 21 bits of int are used to represent Unicode code points, and the upper (most significant) 11 bits must be zero. Unless otherwise specified, the behavior with respect to supplementary characters and surrogate char values is as follows:

- The methods that only accept a char value cannot support supplementary characters. They treat char values from the surrogate ranges as undefined characters. For example, Character.isLetter('\uD840') returns false, even though this specific value, if followed by any low-surrogate value in a string, would represent a letter.
- The methods that accept an int value support all Unicode characters, including supplementary characters. For example, Character.isLetter(0x2F81A) returns true because the code point value represents a letter (a CJK ideograph).
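This behavior can be observed directly with the supplementary code point U+2F81A mentioned above; the following minimal sketch contrasts the char-based and int-based methods:

public class SupplementaryDemo {
  public static void main(String[] args) {
    // U+2F81A is a supplementary code point: it requires two char values (a surrogate pair)
    String s = new String(Character.toChars(0x2F81A));
    System.out.println(s.length());                           // 2 char units
    System.out.println(s.codePointCount(0, s.length()));      // 1 code point

    // The char-based method sees only the high surrogate and reports "not a letter"
    System.out.println(Character.isLetter(s.charAt(0)));      // false

    // The int-based method sees the full code point and reports a letter
    System.out.println(Character.isLetter(s.codePointAt(0))); // true
  }
}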
Noncompliant Code Example (Read)
This noncompliant code example tries to read up to 1024 bytes from a socket and build a String from this data. It does this by reading the bytes in a while loop, as recommended by rule FIO10-J, Ensure the array is filled when using read() to fill an array. If the socket provides more than 1024 bytes, the method throws an exception, which prevents untrusted input from exhausting the program's memory.
public final int MAX_SIZE = 1024;

public String readBytes(Socket socket) throws IOException {
  InputStream in = socket.getInputStream();
  byte[] data = new byte[MAX_SIZE + 1];
  int offset = 0;
  int bytesRead = 0;
  String str = new String();
  while ((bytesRead = in.read(data, offset, data.length - offset)) != -1) {
    // Decodes the bytes read so far on each iteration; a multibyte character
    // that straddles two read() calls is decoded incorrectly
    str += new String(data, offset, bytesRead, "UTF-8");
    offset += bytesRead;
    if (offset >= data.length) {
      throw new IOException("Too much input");
    }
  }
  in.close();
  return str;
}
This code fails to account for the interaction between characters represented with a multibyte encoding and the boundaries between loop iterations. If the last byte read from the data stream in one read() operation is the leading byte of a multibyte character, the trailing bytes are not encountered until the next iteration of the while loop. However, the multibyte encoding is resolved during construction of the new String within each iteration. Consequently, a character that spans two iterations is decoded incorrectly.
Compliant Solution (Read)
This compliant solution defers creation of the string until all of the data is available.
...
This code avoids splitting multibyte encoded characters across buffers by deferring construction of the result string until the data has been read in full.
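The original code listing is elided above; the following sketch is consistent with that description, mirrors the structure of the noncompliant example, and is not necessarily the original code. It accumulates raw bytes and performs a single UTF-8 decode only after the stream is exhausted:

public final int MAX_SIZE = 1024;

public String readBytes(Socket socket) throws IOException {
  InputStream in = socket.getInputStream();
  byte[] data = new byte[MAX_SIZE + 1];
  int offset = 0;
  int bytesRead = 0;
  while ((bytesRead = in.read(data, offset, data.length - offset)) != -1) {
    offset += bytesRead;
    if (offset >= data.length) {
      throw new IOException("Too much input");
    }
  }
  in.close();
  // Decode only after all bytes have arrived, so no multibyte sequence
  // can be split across decode operations
  return new String(data, 0, offset, "UTF-8");
}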
Compliant Solution (Reader)
This compliant solution uses a Reader rather than an InputStream. The Reader class converts bytes into characters on the fly, so it avoids the hazard of splitting multibyte characters. This routine aborts if the socket provides more than 1024 characters rather than 1024 bytes.
public final int MAX_SIZE = 1024;

public String readBytes(Socket socket) throws IOException {
  InputStream in = socket.getInputStream();
  Reader r = new InputStreamReader(in, "UTF-8");
  char[] data = new char[MAX_SIZE + 1];
  int offset = 0;
  int charsRead = 0;
  String str = new String();
  while ((charsRead = r.read(data, offset, data.length - offset)) != -1) {
    // The Reader converts bytes to chars on the fly, so no multibyte
    // character can be split across read() calls
    str += new String(data, offset, charsRead);
    offset += charsRead;
    if (offset >= data.length) {
      throw new IOException("Too much input");
    }
  }
  in.close();
  return str;
}
Noncompliant Code Example (Substring)
This noncompliant code example attempts to trim leading letters from the string. It fails to accomplish this task because Character.isLetter() lacks support for supplementary and combining characters [Hornig 2007].
// Fails for supplementary or combining characters
public static String trim_bad1(String string) {
  char ch;
  int i;
  for (i = 0; i < string.length(); i += 1) {
    ch = string.charAt(i);
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  return string.substring(i);
}
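To make the failure concrete, the following sketch (the test string is an illustrative assumption) calls trim_bad1() with a string whose first letter is the supplementary code point U+2F81A. Because charAt() returns only the high surrogate, the method trims nothing:

public class TrimBad1Demo {
  public static void main(String[] args) {
    // Leading supplementary letter, then ordinary letters, a space, and digits
    String s = new String(Character.toChars(0x2F81A)) + "abc 123";

    // charAt(0) yields a lone high surrogate, Character.isLetter() reports
    // false, the loop breaks at i == 0, and the leading letters survive
    System.out.println(trim_bad1(s).equals(s)); // true: nothing was trimmed
    // Trimming the leading letters correctly would have produced " 123"
  }

  // Copy of trim_bad1() from the listing above
  public static String trim_bad1(String string) {
    char ch;
    int i;
    for (i = 0; i < string.length(); i += 1) {
      ch = string.charAt(i);
      if (!Character.isLetter(ch)) {
        break;
      }
    }
    return string.substring(i);
  }
}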
Noncompliant Code Example (Substring)
This noncompliant code example attempts to correct the problem by using the String.codePointAt() method, which accepts an int argument. This works for supplementary characters but fails for combining characters [Hornig 2007].
// Fails for combining characters
public static String trim_bad2(String string) {
  int ch;
  int i;
  for (i = 0; i < string.length(); i += Character.charCount(ch)) {
    ch = string.codePointAt(i);
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  return string.substring(i);
}
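Similarly, the following sketch (again with an illustrative test string) calls trim_bad2() on a string that begins with a base letter followed by a combining accent. The base letter is consumed, but the result begins with an orphaned combining mark:

public class TrimBad2Demo {
  public static void main(String[] args) {
    // 'e' followed by U+0301 COMBINING ACUTE ACCENT, then "xyz"
    String s = "e\u0301xyz";

    // The base letter 'e' is trimmed, but U+0301 is a combining mark rather
    // than a letter, so the loop stops and the result starts with an orphaned
    // accent instead of "xyz"
    String trimmed = trim_bad2(s);
    System.out.println(trimmed.startsWith("\u0301")); // true
  }

  // Copy of trim_bad2() from the listing above
  public static String trim_bad2(String string) {
    int ch;
    int i;
    for (i = 0; i < string.length(); i += Character.charCount(ch)) {
      ch = string.codePointAt(i);
      if (!Character.isLetter(ch)) {
        break;
      }
    }
    return string.substring(i);
  }
}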
Compliant Solution (Substring)
This compliant solution works for both supplementary and combining characters [Hornig 2007]. According to the Java API [API 2006] class java.text.BreakIterator documentation:
...
To perform locale-sensitive String comparisons for searching and sorting, use the java.text.Collator class.
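The compliant code listing itself is not shown above. The following sketch (the method name and exact structure are illustrative, not necessarily the original) uses BreakIterator to advance one user-perceived character at a time, so supplementary and combining sequences are never split:

import java.text.BreakIterator;

public class TrimLeadingLetters {
  // Trims leading letters one user-perceived character (grapheme) at a time
  public static String trim(String string) {
    BreakIterator iter = BreakIterator.getCharacterInstance();
    iter.setText(string);
    int i = iter.first();
    while (i != BreakIterator.DONE && i < string.length()) {
      int ch = string.codePointAt(i);
      if (!Character.isLetter(ch)) {
        break;            // First boundary that does not start with a letter
      }
      i = iter.next();    // Skip the whole grapheme, including any trailing
                          // surrogate or combining marks
    }
    if (i == BreakIterator.DONE || i >= string.length()) {
      return "";          // The entire string consists of letters
    }
    return string.substring(i);
  }

  public static void main(String[] args) {
    // Prints " 123"; the base letter and its combining accent are skipped as one unit
    System.out.println(trim("e\u0301xyz 123"));
  }
}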
Risk Assessment
Failure to account for multibyte, supplementary, and combining characters can corrupt string data and may allow an attacker to bypass input validation checks.
| Rule | Severity | Likelihood | Remediation Cost | Priority | Level |
|---|---|---|---|---|---|
| IDS10-J | low | unlikely | medium | P2 | L3 |
Bibliography
[API 2006] Classes
[Hornig 2007] Problem areas: Characters
...