
UTF : Unicode Transformation Format

In a previous post, we covered the internal organization of the Unicode Standard character set. It’s now time to get real and see how Unicode is actually used by taking a look at encodings. The Unicode Standard supports three distinct encodings : UTF-8, UTF-16 and UTF-32 (UCS-4). UTF stands for Unicode (or UCS) Transformation Format, and the three forms use 8-, 16- and 32-bit code units respectively (hence their names). All three encodings can represent the full range of Unicode characters, but in very different ways.

In what follows, we’ll cover UTF-32, UTF-16 and UTF-8 in turn. But it should be noted that those are not the only encodings for Unicode. There are at least seven more that were defined by the Unicode Consortium or other parties :

  • UTF-1 : a variable-length character encoding backwards compatible with ISO/IEC 646-US (aka ASCII), first designed for ISO/IEC 10646.
  • UTF-7 : a variable-length character encoding for e-mail messages, defined by RFC 2152.
  • CESU-8 : Compatibility Encoding Scheme for UTF-16, defined by TR-26. CESU-8 is an 8-bit encoding intended for internal use within systems processing Unicode, in order to provide an ISO/IEC 646-US-compatible 8-bit encoding that is similar to UTF-8 but preserves UTF-16 binary collation.
  • UTF-EBCDIC : a UCS transformation format compatible with IBM’s Extended Binary Coded Decimal Interchange Code, defined by TR-16.
  • SCSU : Standard Compression Scheme for Unicode, defined in TR-6. SCSU encodes a sequence of Unicode characters as a compressed stream of bytes.
  • Punycode : defined by RFC 3492 (see also https://unicode.org/reports/tr46/). Punycode uses generalized variable-length integers (mixed radix) to encode Internationalized Domain Names, i.e. domain names that use non-ISO/IEC 646-US characters.
  • GB 18030 : the official character set of the People’s Republic of China, compatible with GB 2312-1980 and GBK, which defines a round-trip mapping to Unicode. GB 18030 also defines a variable-length character encoding.

Character Codes and Encodings

Let us start with a short reminder. In order to understand the relation between Unicode code points and /UTF-\d/, we need to distinguish two separate but related things : character sets and character encodings. Unicode is a character set : it maps the definition of a character to a code. UTF-8, UTF-16 and UTF-32 are character encodings in that they give the machine byte representation of that code, that is, the way the numerical value corresponding to the character is encoded.

As we saw earlier, a character can be encoded directly using its code point value. And until variable-length encodings showed up back in the 1970’s, that was pretty much the way every encoding worked. For instance, the ASCII code point of the letter “a” is 97₁₀ and it is simply encoded with the 8-bit integer representation of this value (01100001₂). But with variable-length encodings like UTF-8 or UTF-16, the integer representation is derived from the Unicode code point and may yield a quite different number. For instance, the code point of the letter “é” is 233₁₀ but its UTF-8 counterpart is the two-byte word {0xc3, 0xa9}. This would be something like 43459₁₀ in decimal, though this doesn’t really make sense since UTF-8 is a byte-oriented encoding of Unicode and the bit sequence is really interpreted as a two-byte word, as we will soon see.
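The difference is easy to see in R (a minimal check, assuming the string literal and the session are UTF-8 encoded) :

utf8ToInt("é")            ## the code point (character set level)
[1] 233
charToRaw(enc2utf8("é"))  ## the encoded bytes (character encoding level, here UTF-8)
[1] c3 a9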

UTF-32

UTF-32 (aka UCS-4 in ISO terms) is the simplest of the three encodings. As stated by the standard (v13.0.0, p.63), each Unicode code point is represented directly by a single 32-bit code unit. Because of this, UTF-32 has a one-to-one relationship between encoded characters and code units : it is a fixed-width character encoding form.

An obvious shortcoming of UTF-32 is that it is not space-efficient. First, Unicode code points range from 0x0 to 0x10FFFF and need at most 21 bits of storage, so the leading 11 bits will never get used. In addition, Unicode was designed so that the most commonly used characters fall within the Basic Multilingual Plane, which needs at most two bytes. This is even worse if you’re using Latin scripts because most characters will only require one byte, due to backward compatibility with ISO/IEC 646-US and Latin-1.

Modern computers process data by bytes and their architectures differ in the way they order them. Depending on the machine’s architecture, either the most significant byte or the least significant byte of a multi-byte word may come first. These orderings are known as “big endian” and “little endian” respectively. Therefore, decoding UTF-32 requires knowing the byte order (“endianness”) of the encoded data. For this purpose, UTF-32 allows a Byte Order Mark (BOM) to precede the first actual coded value. Another solution is to state the endianness explicitly by specifying either “UTF-32LE” or “UTF-32BE” as the encoding when applicable (for example in the xml encoding tag at the head of the document). In that case, the BOM is not prepended.
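We can make the byte order visible with R’s iconv and its toRaw argument (a minimal check, assuming a UTF-8 session ; the BOM-prefixed “UTF-32” output is left out since its byte order depends on the iconv implementation) :

iconv("é", from="UTF-8", to="UTF-32LE", toRaw=TRUE)[[1]]
[1] e9 00 00 00
iconv("é", from="UTF-8", to="UTF-32BE", toRaw=TRUE)[[1]]
[1] 00 00 00 e9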

On the plus side, UTF-32 allows for random access, contrary to variable-length encodings.

Variable-length encodings

Variable-length encodings are encodings that use a varying number of bytes depending on the value of the code point. For instance, with UTF-8, encoding the Basic Latin block (ISO/IEC 646) only takes one byte. But encoding the next block (Latin-1 Supplement) takes two. Actually, every block from the 2nd up to the 17th (NKo, which ends at U+07FF) takes two bytes, then it takes three bytes up to the 163rd block (U+FFF0–U+FFFF Specials), which lies at the end of the BMP, and four bytes for the remaining code points.

We can find the boundary blocks using the Blocks.txt file from the Unicode Character Database (UCD) :

##
## bitmap index
##
hi <- matrix(
  rep(
    2^c(7, 11, 16, 21) ## UTF-8 boundaries – please refer to the text
    , each=nrow(ucd.blk)
  )
  , ncol=4 
)
##
nbits <- rowSums(!ucd.blk$cp_hi <= hi)
## 
cbind(
  ucd.blk[which(
    ## lag
    Lidx <- nbits!=c(nbits[length(nbits)], nbits[-length(nbits)]) ## 
  ), c("cp_lo", "Block") 
  ]
  , ucd.blk[which(
    ## lag
    Hidx <- nbits!=c(nbits[-1], nbits[1])
  ), c("cp_hi", "Block")]
)

which results in the last block of each UTF-8 length class :

“Basic Latin” “NKo” “Specials” “Supplementary Private Use Area-B”

ucd.blk is the UCD Blocks.txt file converted to an R data.frame and is available from here. This works because the number of code points in each Unicode block is always a multiple of 16, and so are the UTF-8 boundaries.

Multi-byte encodings are more space-efficient, especially if the encoding uses fewer bytes for the more common characters. One early use was the encoding of Chinese, Japanese and Korean languages. Those languages have very large character sets but many characters are rarely used. Thus, encoding the more frequent characters with fewer bytes helps reduce the overall size of documents.

Another reason for using multi-byte encodings is backward compatibility. For example, UTF-8 was designed to be backward compatible with ASCII, and several CJK multi-byte encodings were designed to be compatible with the 7- and 8-bit encodings defined by ISO/IEC 2022 (and, hence, ISO/IEC 646).

But compatibility was also an issue for UTF-16. In its early stages, Unicode was designed as a 2^16 code point repertoire. But in the process, it eventually became clear that 65,536 code points would not be enough. Handling the entire set of CJK characters alone required going well beyond the BMP. But Unicode was already in use through the UCS-2 encoding, which is a fixed-length encoding like UTF-32 but uses only two bytes. Therefore, the Unicode Consortium designed UTF-16 in the mid 1990’s in order to encode characters beyond the BMP while retaining backward compatibility with earlier 16-bit encodings. UTF-16 was first specified in version 2 of the standard (1996).

By the way, this is the reason why Windows OSes use UTF-16. Microsoft was an early adopter of Unicode, starting with Windows NT in 1993. Microsoft products first used UCS-2, then moved to UTF-16 with Windows 2000.

UTF-16

Now, let’s see how UTF-16 works. As we mentioned earlier, things happen differently depending on whether code points fall inside or outside the BMP. Code points in the ranges U+0000 to U+D7FF and U+E000 to U+FFFF (BMP) are encoded directly with a single 16-bit code unit (UCS-2 backward compatibility). For code points ranging from U+010000 to U+10FFFF, things are more contrived, and this is how it goes :

  1. first, 0x10000 is subtracted from the code point u
  2. then,
    1. 0xD800 is added to the leading ten bits of the result
    2. 0xDC00 is added to the trailing ten bits

Note that 0x10000 = 2^16 is the first code point outside the BMP, so step one yields a number u′ in the 0x00000–0xFFFFF range, which fits in at most twenty bits. Then u′ is split into two ten-bit halves which, once the offsets above are added, give a 2 × 16-bit word. The first (high) part starts with 0xD800 while the second (low) part starts with 0xDC00.

Let’s take the Mathematical Script Capital G “𝒢” example from the post on encodings. “𝒢” has code point U+1D4A2. So, 0x1D4A2 − 0x10000 = 0xD4A2, which gives the {0x0035, 0x00A2} pair because 0x35 × 2^10 = 0xD400. Then, add the appropriate quantities to the two parts :

First (high) code unit : u′ high bits + 0xD800 → 0x0035 + 0xD800 = 0xD835
Second (low) code unit : u′ low bits + 0xDC00 → 0x00A2 + 0xDC00 = 0xDCA2

Hence, the encoded value of “𝒢” is the 32-bit word {0xD835, 0xDCA2}.
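This arithmetic can be checked directly in R, using the same bitwise operators as the function below :

u <- 0x1D4A2 - 0x10000                      ## u′ = 0xD4A2
sprintf("%X", bitwShiftR(u, 10) + 0xD800)   ## high surrogate
[1] "D835"
sprintf("%X", bitwAnd(u, 0x3FF) + 0xDC00)   ## low surrogate
[1] "DCA2"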

Pictorially, the process looks like this for big endian machines,

Mappings of Unicode code points to UTF-16 encoding for big endian machines

And the following R function converts either a character or a Unicode code point to UTF-16 :

#' Encodes either a character or a Unicode code point to UTF-16.
#' 
#' @param x A character or a Unicode code point.
#' @param LE optionally set the endianness of the result (default is the endianness of the machine).
#' @return The raw UTF-16 encoded value of x.
#' @examples
#' ucToByte_Utf16("é")
#' ucToByte_Utf16(0xE9)
ucToByte_Utf16 <- function(x, LE=(.Platform$endian=="little") ){
  ##
  cp <- if( is.character(x) ) utf8ToInt(x) else x
  ##
  if( is.na(cp) ){
    ##
    return( raw(0) )
  }
  ##
  if( cp > 0x10FFFF ){
    warning("code point is out of range")
    return( raw(0) )
  }
  ##
  mbytes <- FALSE
  ## BMP : UCS-2
  if( cp<2^16){
    ##
    b <- intToRaw(
      cp
    )[1:2]
  }else{ ## other planes
    ##
    mbytes <- TRUE
    ##
    m0 <- 0x3FF ## 2^10 -1 : 10 bits mask
    ##
    cp0 <- cp - 0x10000
    ##
    b <- c(
      ## hi : ( u′ >> 10 ) + 0xD800, note : no need for a mask here since u′ >> 10 is at most ten bits
      intToRaw(
        bitwShiftR( cp0, 10 ) + 0xD800
      )[1:2]
      ## lo : ( u′ & m0 ) + 0xDC00
      , intToRaw(
        bitwAnd(cp0, m0) + 0xDC00
      )[1:2]
    )
  }
  ##
  ##
  ## intToRaw() always returns the least significant byte first, so b is in
  ## little endian order whatever the platform : swap the bytes of each code
  ## unit only when big endian output is requested
  if( !LE ){
    b <- b[
      if( mbytes ){ c(2,1,4,3) }else{ c(2,1) }
      ]
  }
  ##
  b
}

This function calls the following helper function (which, oddly enough, does not seem to be provided by base R) :

#' A utility function to convert from integer to raw bytes
#' 
#' @param x An integer.
#' @return The byte representation of x.
#' @examples
#' intToRaw(0xD800)
#' all(rawToBits(intToRaw(0xD800))==intToBits((0xD800)))
intToRaw <- function(x){
  ## extract each of the four bytes, least significant byte first
  as.raw(
    bitwAnd(
      bitwShiftR( x, c(0L, 8L, 16L, 24L) )
      , 0xFF ## 2^8 - 1
    )
  )
}
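With both functions defined, we can check the “𝒢” example above and cross-check the result with iconv (assuming a UTF-8 session) :

ucToByte_Utf16("𝒢", LE=FALSE)                             ## big endian output
[1] d8 35 dc a2
iconv("𝒢", from="UTF-8", to="UTF-16BE", toRaw=TRUE)[[1]]  ## same bytes from iconv
[1] d8 35 dc a2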

The ranges 0xD800–0xDBFF and 0xDC00–0xDFFF are called surrogates (high and low respectively). In an earlier post, we saw that those two ranges of the BMP were reserved by the Unicode standard and could not be assigned to a character. And now we can see why. Since the encoded values necessarily fall within those two ranges, neither of the two 16-bit code units can correspond to an assigned BMP code point. They are called surrogates since they do not represent characters directly, but only as a pair. Thus the lead units cannot overlap the trail units, and UTF-16 unambiguously encodes each code point, contrary to some other multi-byte encodings.

Since each code unit carries the 0xD800 or 0xDC00 marker, the leading 6 bits of each of the two 16-bit units cannot be used. This leaves (2^10)^2 = 2^20 values available for encoding code points with surrogate pairs. Therefore, because of surrogates, UTF-16 can only encode 2^16 + 2^20 code points, and this is why the size of the Unicode codespace is limited to 1,114,112.
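As a quick sanity check of this arithmetic in R :

2^16 + 2^10 * 2^10   ## BMP + surrogate pairs
[1] 1114112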

UTF-16 allows for the encoding of the whole Unicode codespace without breaking compatibility with 16-bit fixed-length encodings, and it is more space-efficient than UTF-32. But there are a few caveats.

Since UTF-16 uses 16-bit code units and machines usually work on bytes, the resulting encoding also depends on the endianness (byte order) of the machine. Same thing as with UTF-32 here : either use a BOM or specify “UTF-16LE” or “UTF-16BE” as the encoding.

But, contrary to UTF-32, since each character can be represented either as one or two 16-bit code units, UTF-16 does not allow for random access. In many applications this isn’t really an issue, as we will see later. Another consequence is that binary order for data represented in the UTF-16 encoding form is not the same as code point order. But note that in some cases, you cannot sort Unicode based solely on code point order anyway. For instance, when you sort French words, you expect the letter “é” to come before the letter “f”, but “f” has a lower code point than “é”. See the Unicode Technical Standard #10, “Unicode Collation Algorithm”, for more information.
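A minimal illustration in R (the result depends on the collation locale ; a UTF-8 locale with French-like collation rules is assumed here) :

x <- c("f", "é")
sort(x)                                       ## locale-aware collation
[1] "é" "f"
old <- Sys.getlocale("LC_COLLATE")
invisible(Sys.setlocale("LC_COLLATE", "C"))   ## fall back to byte/code point order
sort(x)
[1] "f" "é"
invisible(Sys.setlocale("LC_COLLATE", old))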

Another shortcoming of UTF-16 is that the 0x00 byte may validly occur anywhere within a string, and this causes trouble with languages like C which use the NUL character (\0) as a string delimiter. This can be illustrated by the following call to R’s iconv function :

> iconv("é", to="UTF-16")
Error in iconv("é", to = "UTF-16") :
  embedded nul in string: '\xff\xfe\xe9\0'

As mentioned before, UTF-16 is the character encoding used by Windows in its API, as well as by other OSes, applications and libraries (like Qt). It is also used by languages like Java, C# and the other .NET framework languages (obviously), or JavaScript.

UTF-8

UTF-8 was designed during the evening of September 2, 1992 by Rob Pike and Ken Thompson (yes, The Ken Thompson) for the Plan 9 operating system. It improved on FSS-UTF (File System Safe UCS Transformation Format), a byte-stream encoding of multi-byte character sets for Unicode proposed by an X/Open committee. FSS-UTF itself was designed to overcome the limitations of UTF-1, the first byte-oriented transformation format for Unicode [1].

UTF-8 was quickly implemented for Plan 9, presented at the USENIX Winter 1993 Technical Conference (see Pike and Thompson’s paper for more details) and eventually incorporated into the ISO/IEC 10646 standard.

UTF-8 encoding is more elaborate than UTF-16 encoding but it follows the same principles : split the original code point according to its range and add fixed marker bits. As with UTF-16, the length of the UTF-8 encoded value depends on the range the code point falls into. But UTF-8 divides the Unicode codespace into four ranges :

Number of bytes | Bits for code point | First code point | Last code point | Byte 1   | Byte 2   | Byte 3   | Byte 4
1               | 7                   | U+0000           | U+007F          | 0xxxxxxx |          |          |
2               | 11                  | U+0080           | U+07FF          | 110xxxxx | 10xxxxxx |          |
3               | 16                  | U+0800           | U+FFFF          | 1110xxxx | 10xxxxxx | 10xxxxxx |
4               | 21                  | U+10000          | U+10FFFF        | 11110xxx | 10xxxxxx | 10xxxxxx | 10xxxxxx

UTF-8 byte sequences
source : the Unicode Standard (see also Wikipedia)

The first range is ASCII and takes only one byte. As we saw earlier, the second range goes from Latin-1 Supplement to the NKo script and takes two bytes. The rest of the BMP takes three bytes and the other planes four. Every trailing (continuation) byte is prefixed with 10₂, while the leading byte starts with a prefix whose value depends on the number of bytes of the encoded value.

So, UTF-8 encoding goes along those lines :

  1. check if the code point is < 0x80 (ASCII case). If so, great, nothing to do. Move on to the next character
  2. otherwise, iterate over the 6-bit chunks of the code point, from the most significant chunk down to the least significant one
    1. prefix each chunk with 10₂
    2. set the nth continuation byte with the nth prefixed chunk
  3. finally
    1. prefix the remaining high bits with a code indicating the total number of bytes (110₂ for two bytes, 1110₂ for three bytes,…)
    2. set the first byte with the result

Of course, the boundaries are chosen so that the length of this prefix code and the number of remaining bits of the leading byte add up to eight.

Here is an illustration of the process (omitting the 7-bit ASCII case) :

Mappings of Unicode code points to UTF-8 encoding for big endian machines

To make things easier to visualize, code points are represented using a big-endian layout (high bits come first). Bytes would be in reverse order on a little-endian machine (but not their UTF-8 counterparts). That is to say, a code point would be represented as zzzzzz yyyyy (or zzzzzz yyyyyy xxxx) but its UTF-8 counterpart would still be 110yyyyy 10zzzzzz. UTF-8 remains the same whatever the endianness of the machine.

Now, let’s take an example. The Latin Small Letter E with Acute “é” (code point U+00E9, in the two-byte range U+0080–U+07FF) has one 6-bit continuation chunk :

findInterval(utf8ToInt("é"), c(0x80, 0x800, 0x10000)) ## number of continuation chunks
[1] 1

so the encoded value needs two bytes (first line of the previous figure). This chunk goes in the second byte with prefix 10₂. The remaining high bits (11₂, padded to five bits) are prefixed with 110₂ and go in the first byte, which also indicates the number of bytes of the encoded value.

The Euro Sign “€” has two 6-bit chunks and needs three bytes. The first 6-bit chunk goes in the second byte and the second chunk goes into the third byte, both with prefix 10₂. The remaining high bits go in the first byte with prefix 1110₂.
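Both walkthroughs can be checked with base R (assuming a UTF-8 session) :

charToRaw(enc2utf8("é"))   ## 110 00011, 10 101001
[1] c3 a9
charToRaw(enc2utf8("€"))   ## 1110 0010, 10 000010, 10 101100
[1] e2 82 ac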

The following R function encodes either a character or a Unicode code point to UTF-8

#' Encodes either a character or a Unicode code point to UTF-8.
#' 
#' @param x A character or a Unicode code point.
#' @return The raw UTF-8 encoded value of x.
#' @examples
#' ucToByte_Utf8("é")
#' ucToByte_Utf8(0xE9)
##
ucToByte_Utf8 <- function(x){
  ##
  ## Helper function that extracts the n-th 6-bit chunk of the code point
  ## and sets the high (marker) bits with suff
  ##
  getByte <- function(
     cp, n
    , suff=0x80 ## default suffix : 10bbbbbb (2^7)
  ){
    ## 6-bits mask
    m <- 0x3F ##  2^6 - 1
    ## #shift
    nsh <- n*6
    ## get the n-th chunk : ( cp ^ ( m << n*6 ) )
    b <- bitwAnd(
      cp
      , bitwShiftL( m, nsh )
    )
    ## shift right and set suffix : 
    b <- bitwOr( bitwShiftR(b, nsh ), suff ) 
    ##
    b
  }
  ##
  cp <- if( is.character(x) ) utf8ToInt(x) else x
  ##
  if( is.na(cp) ){
    ##
    return( raw(0) )
  }
  ##
  if( cp > 0x10FFFF ){
    warning("code point is out of range")
    return( raw(0) )
  }
  ## 1 byte (ASCII)
  if( cp <0x80 ) return( 
    intToRaw( cp )[1]
  )
  ## allocate result
  r <- raw(4)
  ## #bytes
  n <- 1L
  ##
  ## 4 bytes
  ##
  if( cp >= 0x10000 ){
    ##
    n <- n+1L
    ##
    b <- getByte(cp, 2 )
    ##
    r[n] <- as.raw(b)
  }
  ## 3 bytes
  if( cp >= 0x0800 ){
    ##
    n <- n+1L
    ##
    b <- getByte(cp, 1 )
    ##
    r[n] <- as.raw(b)
  }
  ## 2 bytes
  if( cp >= 0x0080 ){
    ##
    n <- n+1L
    ##
    b <- getByte(cp, 0 )
    ##
    r[n] <- as.raw(b)
  }
  ## suffix for the first byte (-ie: code for #bytes)
  suff <- bitwShiftL( 2^n -1 , 8-n)
  ## get leading bits
  b    <- getByte(cp, n-1, suff)
  ##
  r[1] <- as.raw(b)
  ##
  r[1:n]
}
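And a quick check of the function against base R (assuming a UTF-8 session), including a four-byte example :

ucToByte_Utf8("é")
[1] c3 a9
ucToByte_Utf8("𝒢")            ## U+1D4A2, four bytes
[1] f0 9d 92 a2
charToRaw(enc2utf8("𝒢"))      ## same bytes from base R
[1] f0 9d 92 a2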

UTF-8 does not need surrogates and can encode the entire 21-bit codespace. Actually, it could go beyond and encode a 31-bit space (using up to six bytes) but was restricted to four bytes by RFC 3629 in 2003.

To sum up, UTF-8 uses 8-bit code units, so

  • UTF-8 is backward compatible with ASCII
  • and byte-oriented
  • and has no issue with endianness : the byte order is the same whatever the machine’s architecture
  • nor with embedded NUL characters in C strings : apart from U+0000 itself, no encoded byte can be 0x00

Points one and two may sound like something from the past, but it’s not only that. Indeed, retaining 8-bit code units was a big issue when UTF-8 was designed back in 1992. But many protocols and applications are still byte-oriented, still rely heavily on ASCII, or both. For instance, TCP/IP is a byte-oriented protocol. HTTP 1.1 uses ASCII as the basic character set for the request line in requests and the status line in responses, and URIs are limited to a subset of ASCII.

Thanks to its specific bit patterns,

  • UTF-8 is self-synchronising, meaning that you can easily find the start of a character when beginning from an arbitrary location in a UTF-8 byte stream
  • there is little room left for confusion between encodings : we can easily check whether a string is valid UTF-8 or not
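Base R (since version 3.3.0) provides such a check with validUTF8 ; a small illustration :

validUTF8(rawToChar(as.raw(c(0xc3, 0xa9))))  ## a well-formed two-byte sequence
[1] TRUE
validUTF8(rawToChar(as.raw(0xe9)))           ## a lone Latin-1 "é" byte
[1] FALSE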

Other features include,

  • binary order is the same as code point order
  • UTF-8 acts like a compression algorithm : UTF-8 documents tend to be smaller in size, but this really depends on the writing system

The second point is a controversial issue. Many scripts have code points below U+0800. So, they’ll need at most two bytes and thus use as much or even less space than UTF-16, especially Western languages (and even more so English) because many if not most (if not all) of the required characters are ASCII and take only one byte to encode.

But writing systems at or above U+0800, like Hindi, Thai, Chinese, Japanese or Korean, use at least three bytes (and sometimes more, because lesser-used characters are located in the SIP or the TIP, but this doesn’t mean they’re never used, quite the contrary). That’s one byte more than UTF-16, which uses 16 bits for the entire BMP. The following figure shows the difference between the three encodings :

Comparison of the number of bytes used by each UTF encodings
Panel (a) uses equal-size (logarithmic) UTF ranges for readability while panel (b) uses proportional ranges

As shown by the figure, most of the BMP requires three bytes with UTF-8 and only two when using UTF-16.
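A quick way to see this in R (a minimal check with iconv, assuming a UTF-8 session) : five hiragana characters, all in the BMP at or above U+0800, take 15 bytes in UTF-8 but only 10 in UTF-16.

x <- "こんにちは"
length(iconv(x, from="UTF-8", to="UTF-8",    toRaw=TRUE)[[1]])
[1] 15
length(iconv(x, from="UTF-8", to="UTF-16LE", toRaw=TRUE)[[1]])
[1] 10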

This led some to wonder whether UTF-8 was a “Racist Kludge or a Stroke of Genius”. This is both a sensitive and a technical issue, so I won’t go much further. But I think it should be noted that the culprit is not only UTF-8 per se but also the order the Unicode Consortium chose for code point blocks. And some argue that the difference in size of the encoded documents is mostly significant for plain text because, whatever the language, UTF-8 encoded formatted documents or streams (like HTML or XML) are likely to be smaller since the formatting is done in ASCII. Unfortunately, I’m not aware of any systematic study on the matter to support or disprove this claim.

Another shortcoming of UTF-8 (and multi-byte encodings in general) is that it does not allow for random access. Searching for the nth character in a string is an O(n) operation. Now, the question is, is this really a problem ? Of course, the answer depends on the application. But in many cases, string processing like pattern matching or parsing operates on the whole string. Therefore, random access is usually not required. And, as we saw in another post, a user’s notion of a character does not always match the Unicode definition. Many user-perceived characters might be made of several Unicode characters. So, depending on the linguistic settings, you might have to scan the whole string to index a character even in UTF-32.

Of course, if you’re using a language that gives you direct access to memory, like C, this means that the days when plain text implied 8-bit characters and iterating over a string simply meant incrementing a pointer are now gone. You’ll need specialized code to iterate, capitalize, compare or extract substrings. But this seems a fair price to pay to deal with multilingual environments.

UTF-8 v. UTF-16

Over the years, UTF-8 has gained a lot of momentum. UTF-8 is now by far the most commonly used encoding on the Internet (the W3C requires UTF-8 encoding for XML and HTML documents, even though not everybody abides, like the Bretons who still hold out, now and forever). It is also the default encoding of many GNU/Linux distributions as well as OS X. And most if not all languages or computing environments designed in the last 15 years, like Rust or Julia, use it natively. On the other hand, UTF-16 is often described as a legacy encoding used by software that adopted Unicode early. And, in the eyes of many, there does not seem to be any reason to start using it now. Or is there ?

It should be noted that UTF-8 is also a legacy from the days of ASCII, byte-oriented data streams and NUL-terminated strings. The ASCII legacy point is very US-centric (I have to deal with Windows codepage legacy on an almost daily basis, not ASCII, and UTF-8 is of no help here). And C’s use of NUL-terminated strings may have sounded like a good idea in the early 70’s but has been the cause of countless bugs and major security flaws since then. In addition, much modern software, many libraries and many file formats are now encoding-aware. So, there might be cases where many features of UTF-8 are not so relevant anymore.

Indeed, an easy (and quite unsatisfactory) answer would be that opting for UTF-8 or UTF-16 depends-on-the-application. Problem is, I’ve found very little information about this (besides ASCII-heavy streams like HTTP+HTML). For instance, I’d be very interested in less network- and I/O-centric comparisons related to data analysis and NLP applications. Actually, I’ve read some claims that UTF-16 was more suitable for text processing but I could not find any evidence of this. Of course, this kind of benchmarking shootout would be difficult because both encodings and Unicode rules require complex algorithms (as illustrated by the R functions). So you’ll also be measuring the efficiency of the implementations. And I’m not sure you’ll get a definitive answer. But even that would still be a result.

  1. As stated before, UTF-1 was also a variable-width encoding that was backward compatible with ISO/IEC 646. But this encoding had a number of issues and was quickly discarded. For instance, one of its annoying features was that the code corresponding to the forward slash “/” could pop up pretty much anywhere, rendering it useless for Unix-like operating systems. *NIX platforms often treat character strings as opaque byte arrays but require, among other things, a consistent use of “/” as the path directory separator and nothing else (hence the name of its proposed replacement, File System Safe UTF). In addition, UTF-1 relies on modulo 190 arithmetic and thus needs integer division, which is much slower than the bitwise operators used by UTF-8 and UTF-16.
