
The char data type is based on the original Unicode specification, which defined characters as fixed-width 16-bit entities. The Unicode Standard has since been changed to allow for characters whose representation requires more than 16 bits. The range of Unicode code points is now U+0000 to U+10FFFF. The set of characters from U+0000 to U+FFFF is called the basic multilingual plane (BMP), and characters whose code points are greater than U+FFFF are called supplementary characters. Such characters are generally rare, but some are used, for example, as part of Chinese and Japanese personal names. To support supplementary characters without changing the char primitive data type and causing incompatibility with previous Java programs, supplementary characters are defined by a pair of Unicode code units called surrogates. According to the Java API [API 2014] class Character documentation (Unicode Character Representations):

The Java platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes. In this representation, supplementary characters are represented as a pair of char values, the first from the high-surrogates range, (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).

A char value, therefore, represents BMP code points, including the surrogate code points, or code units of the UTF-16 encoding. An int value represents all Unicode code points, including supplementary code points. The lower (least significant) 21 bits of int are used to represent Unicode code points, and the upper (most significant) 11 bits must be zero. Similar to UTF-8 (see STR00-J. Don't form strings containing partial characters from variable-width encodings), UTF-16 is a variable-width encoding. Because the UTF-16 representation is also used in char arrays and in the String and StringBuffer classes, care must be taken when manipulating string data in Java. In particular, do not write code that assumes that a value of the primitive type char (or a Character object) fully represents a Unicode code point. Conformance with this requirement typically requires using methods that accept a Unicode code point as an int value and avoiding methods that accept a Unicode code unit as a char value because these latter methods cannot support supplementary characters.
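To make the distinction concrete, the following sketch (illustrative, not part of the rule) contrasts the char-based and code-point-based views of a supplementary character. U+10400 (DESERET CAPITAL LETTER LONG I) is a letter encoded in UTF-16 as the surrogate pair \uD801\uDC00:

```java
public class CodePointDemo {
  public static void main(String[] args) {
    // U+10400 (DESERET CAPITAL LETTER LONG I) is a supplementary character;
    // in UTF-16 it is stored as the surrogate pair \uD801\uDC00.
    String s = "\uD801\uDC00";

    System.out.println(s.length());                      // 2: char (code unit) count
    System.out.println(s.codePointCount(0, s.length())); // 1: code point count

    // The char-based overload sees only a lone high surrogate:
    System.out.println(Character.isLetter(s.charAt(0)));      // false
    // The int-based overload sees the full code point:
    System.out.println(Character.isLetter(s.codePointAt(0))); // true
  }
}
```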

Noncompliant Code Example

This noncompliant code example attempts to trim leading letters from the string:

public static String trim(String string) {
  char ch;
  int i;
  for (i = 0; i < string.length(); i += 1) {
    ch = string.charAt(i);  // Returns a single UTF-16 code unit, not a code point
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  return string.substring(i);
}
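The failure can be demonstrated with a hypothetical driver (not part of the rule) on a string that begins with a supplementary letter, U+10400 (DESERET CAPITAL LETTER LONG I):

```java
public class NoncompliantTrimDemo {
  // The noncompliant trim() reproduced so this sketch is self-contained
  public static String trim(String string) {
    char ch;
    int i;
    for (i = 0; i < string.length(); i += 1) {
      ch = string.charAt(i);
      if (!Character.isLetter(ch)) {
        break;
      }
    }
    return string.substring(i);
  }

  public static void main(String[] args) {
    // Starts with U+10400 (a letter) followed by BMP letters:
    String s = "\uD801\uDC00abc 123";
    // isLetter('\uD801') is false, so the loop breaks at i == 0 and
    // no leading letters are removed:
    System.out.println(trim(s).equals(s)); // true: nothing was trimmed
  }
}
```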

Unfortunately, the trim() method may fail because it is using the character form of the Character.isLetter() method. Methods that accept only a char value cannot support supplementary characters. According to the Java API [API 2014] class Character documentation:

They treat char values from the surrogate ranges as undefined characters. For example, Character.isLetter('\uD840') returns false, even though this specific value if followed by any low-surrogate value in a string would represent a letter.

Compliant Solution

This compliant solution corrects the problem with supplementary characters by using the integer form of the Character.isLetter() method that accepts a Unicode code point as an int argument. Java library methods that accept an int value support all Unicode characters, including supplementary characters.  

public static String trim(String string) {
  int ch;
  int i;
  for (i = 0; i < string.length(); i += Character.charCount(ch)) {
    ch = string.codePointAt(i);  // Full code point, even across a surrogate pair
    if (!Character.isLetter(ch)) {
      break;
    }
  }
  return string.substring(i);
}
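A brief driver (hypothetical, for illustration only) shows that the code-point-based loop steps over the full surrogate pair and trims a supplementary letter along with the BMP letters that follow it:

```java
public class CompliantTrimDemo {
  // The compliant trim() reproduced so this sketch is self-contained
  public static String trim(String string) {
    int ch;
    int i;
    for (i = 0; i < string.length(); i += Character.charCount(ch)) {
      ch = string.codePointAt(i);
      if (!Character.isLetter(ch)) {
        break;
      }
    }
    return string.substring(i);
  }

  public static void main(String[] args) {
    // Starts with U+10400 (DESERET CAPITAL LETTER LONG I), a supplementary letter:
    String s = "\uD801\uDC00abc 123";
    // charCount(0x10400) is 2, so the loop advances past both surrogate
    // char values; all leading letters are removed:
    System.out.println(trim(s)); // " 123"
  }
}
```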

Risk Assessment

Forming strings consisting of partial characters can result in unexpected behavior.

Rule     Severity  Likelihood  Remediation Cost  Priority  Level
STR01-J  Low       Unlikely    Medium            P2        L3

Automated Detection

Tool                   Version  Checker           Description
The Checker Framework  2.1.3    Tainting Checker  Trust and security errors (see Chapter 8)

Bibliography
[API 2014] Java Platform, Standard Edition 8 API Specification. Oracle, 2014.

 


3 Comments

  1. The substring example is a specific case of measuring the position of a character in a string.  The correct way to do this depends on what the position is used for. 

    From http://www.unicode.org/faq/char_combmark.html

    Q: How are characters counted when measuring the length or position of a character in a string?

    A: Computing the length or position of a "character" in a Unicode string can be a little complicated, as there are four different approaches to doing so, plus the potential confusion caused by combining characters. The correct choice of which counting method to use depends on what is being counted and what the count or position is used for.

    Each of the four approaches is illustrated below with an example string <U+0061, U+0928, U+093F, U+4E9C, U+10083>. The example string consists of the Latin small letter a, followed by the Devanagari syllable "ni" (which is represented by the syllable "na" and the combining vowel character "i"), followed by a common Han ideograph, and finally a Linear B ideogram for an "equid" (horse):

    1. Bytes: how many bytes (what the C or C++ programming languages call a char) are used by the in-memory representation of the string; this is relevant for memory or storage allocation and low-level processing.

    Here is how the sample appears in bytes for the encodings UTF-8, UTF-16BE, and UTF-32BE:

     

    Encoding  Byte Count  Byte Sequence
    UTF-8     14          61 E0 A4 A8 E0 A4 BF E4 BA 9C F0 90 82 83
    UTF-16BE  12          00 61 09 28 09 3F 4E 9C D8 00 DC 83
    UTF-32BE  20          00 00 00 61 00 00 09 28 00 00 09 3F 00 00 4E 9C 00 01 00 83

     

    2. Code units: how many of the code units used by the character encoding form are in the string; this may be relevant, for example, when declaring the size of a character array or locating the character position in a string. It often represents the "length" of the string in APIs.

    Here is how the sample appears in code units for the encodings UTF-8, UTF-16, and UTF-32:

     

    Encoding  Code Unit Count  Code Unit Sequence
    UTF-8     14               61 E0 A4 A8 E0 A4 BF E4 BA 9C F0 90 82 83
    UTF-16    6                0061 0928 093F 4E9C D800 DC83
    UTF-32    5                00000061 00000928 0000093F 00004E9C 00010083

     

    3. Code points: how many Unicode code points—the number of encoded characters—that are in the string. The sample consists of 5 code points (U+0061, U+0928, U+093F, U+4E9C, U+10083), regardless of character encoding form. Note that this is equivalent to the UTF-32 code unit count.

    4. Grapheme clusters: how many of what end users might consider "characters". In this example, the Devanagari syllable "ni" must be composed using a base character "na" (न) followed by a combining vowel for the "i" sound ( ि), although end users see and think of the combination of the two "नि" as a single unit of text. In this sense, the example string can be thought of as containing 4 “characters” as end users see them. A default grapheme cluster is specified in UAX #29, Unicode Text Segmentation, as well as in UTS #18, Unicode Regular Expressions.

    The choice of which count to use and when depends on the use of the value, as well as the tradeoffs between efficiency and comprehension. For example, Java, Windows, and ICU use UTF-16 code unit counts for low-level string operations, but also supply higher level APIs for counting bytes, characters, or denoting boundaries between grapheme clusters, when circumstances require them. An application might use these to, say, limit user input based on a number of "screen positions" using the user-perceived "character" (grapheme cluster) count. Or the application might have an internal limit based on storage allocation in a database field counted in bytes. This approach allows for efficient low-level processing, with allowance for higher-level usage. However, for a very high-level application, such as word-processing macros, grapheme clusters alone may be sufficient.
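The four counts in the FAQ answer can be reproduced in Java for the sample string <U+0061, U+0928, U+093F, U+4E9C, U+10083>. This is an illustrative sketch; the grapheme-cluster count reported by java.text.BreakIterator can vary across JDK versions as their UAX #29 data is updated, so it is printed rather than assumed:

```java
import java.nio.charset.StandardCharsets;
import java.text.BreakIterator;

public class CountingDemo {
  public static void main(String[] args) {
    // <U+0061, U+0928, U+093F, U+4E9C, U+10083>; U+10083 is the
    // surrogate pair \uD800\uDC83 in UTF-16.
    String s = "a\u0928\u093F\u4E9C\uD800\uDC83";

    // 1. Bytes (here in UTF-8):
    System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 14

    // 2. UTF-16 code units:
    System.out.println(s.length()); // 6

    // 3. Code points:
    System.out.println(s.codePointCount(0, s.length())); // 5

    // 4. Grapheme clusters (count depends on the JDK's UAX #29 data):
    BreakIterator it = BreakIterator.getCharacterInstance();
    it.setText(s);
    int clusters = 0;
    while (it.next() != BreakIterator.DONE) {
      clusters++;
    }
    System.out.println(clusters);
  }
}
```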

  2. "This noncompliant code example corrects the problem"