UTF-8 is a variable-width encoding for Unicode. It uses 1 to 4 bytes per character, depending on the code point being encoded. UTF-8 has the following properties:
- The classical US-ASCII characters (0 to 0x7f) encode as themselves, so files and strings that are encoded with ASCII values have the same encoding under both ASCII and UTF-8.
- It is easy to convert between UTF-8 and UCS-2 and UCS-4 fixed-width representations of characters.
- The lexicographic sorting order of UCS-4 strings is preserved.
- All 2^21 UCS code points representable in four-byte sequences can be encoded using UTF-8, although Unicode assigns characters only up to U+10FFFF.
Generally, programs should validate UTF-8 data before performing other checks. The following table lists the well-formed UTF-8 byte sequences.
| Bits of code point | First code point | Last code point | Bytes in sequence |
|---|---|---|---|
| 7 | U+0000 | U+007F | 1 |
| 11 | U+0080 | U+07FF | 2 |
| 16 | U+0800 | U+FFFF | 3 |
| 21 | U+10000 | U+10FFFF | 4 |
Although UTF-8 originated from the Plan 9 developers [Pike 1993], Plan 9's own support covers only the low 16-bit range. In general, many "Unicode" systems support only the low 16-bit range, not the full 21-bit ISO 10646 code space [ISO/IEC 10646:2012].
According to RFC 2279, UTF-8, a transformation format of ISO 10646 [Yergeau 1998]:

> Implementors of UTF-8 need to consider the security aspects of how they handle invalid UTF-8 sequences. It is conceivable that, in some circumstances, an attacker would be able to exploit an incautious UTF-8 parser by sending it an octet sequence that is not permitted by the UTF-8 syntax.
>
> A particularly subtle form of this attack can be carried out against a parser that performs security-critical validity checks against the UTF-8 encoded form of its input, but interprets certain invalid octet sequences as characters. For example, a parser might prohibit the null character when encoded as the single-octet sequence 00 but allow the invalid two-octet sequence C0 80 and interpret it as a null character. Another example might be a parser that prohibits the octet sequence 2F 2E 2E 2F ("/../") yet permits the invalid octet sequence 2F C0 AE 2E 2F.
Following are more specific recommendations.
Accept Only the Shortest Form
Only the "shortest" form of UTF-8 should be permitted. Naive decoders might accept encodings that are longer than necessary, allowing potentially dangerous input to have multiple representations. For example:
- Process A performs security checks but does not check for nonshortest UTF-8 forms.
- Process B accepts the byte sequence from process A and transforms it into UTF-16 while interpreting possible nonshortest forms.
- The UTF-16 text may contain characters that should have been filtered out by process A and can potentially be dangerous. These "nonshortest" UTF-8 attacks have been used to bypass security validations in high-profile products, such as Microsoft's IIS Web server.
Handling Invalid Inputs
UTF-8 decoders have no uniformly defined behavior upon encountering an invalid input. Following are several ways a UTF-8 decoder might behave in the event of an invalid byte sequence. Note that implementing these behaviors requires careful security considerations.
- Substitute the replacement character U+FFFD, or a wildcard character such as "?" when U+FFFD is not available.
- Ignore the bytes (for example, delete the invalid byte before the validation process; see "Unicode Technical Report #36, 3.5 Deletion of Code Points" for more information).
- Interpret the bytes according to a different character encoding (often the ISO-8859-1 character map; other encodings, such as Shift_JIS, are known to trigger self-XSS and are therefore potentially dangerous).
- Fail to notice the error and decode the bytes as if they were some similar, valid bit of UTF-8.
- Stop decoding and report an error.
A function given in John Viega's "Protecting Sensitive Data in Memory" [Viega 2003] detects invalid character sequences in a string but does not reject nonminimal forms; it returns 1 if the string is composed only of legitimate sequences and 0 otherwise.
Broken Surrogates
Encoding of individual or out-of-order surrogate halves should not be permitted. Broken surrogates are invalid in Unicode and introduce ambiguity when they appear in Unicode data. Broken surrogates are often signs of bad data transmission. They can also indicate internal bugs in an application or intentional efforts to find security vulnerabilities.
Failing to properly handle UTF-8-encoded data can result in a data integrity violation or denial-of-service attack.
| Tool | Checkers |
|---|---|
| LDRA tool suite | 176 S, 376 S |
| Taxonomy | Taxonomy item |
|---|---|
| SEI CERT C++ Coding Standard | VOID MSC10-CPP. Character encoding: UTF8-related issues |
| CWE | CWE-176, Failure to handle Unicode encoding |
| CWE | CWE-116, Improper encoding or escaping of output |
- UTF-8 and Unicode FAQ for Unix/Linux
- [Viega 2003] Section 3.12, "Detecting Illegal UTF-8 Characters"
- Secure Programmer: Call Components Safely