Handling (pre|de)composed UTF-8 character strings in R
Some time ago, someone posted a message on a mailing list with the following intriguing problem. When filtering rows on a string condition in a column called OCCUPATION whose values looked like this:
c("abatteur", "abatteuse", "abbé", "abbesse", …)
the following R code worked
occupations %>% filter(OCCUPATION == "abbesse")
but this didn’t
occupations %>% filter(OCCUPATION == "abbé")
despite the word “abbé” being the third one in the list.
And just in case you were wondering, this was not a piping-related issue, since this didn’t work either:
occupations$OCCUPATION == "abbé"
This kind of thing is likely to drive anybody crazy and, as we will soon see, it is a really nasty one.
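As a teaser of what the rest of this post explains, the symptom is easy to reproduce with made-up data once the column is forced into the decomposed Unicode form with stringi::stri_trans_nfd() (the data below are invented for illustration; they are not the original file):

library(dplyr)

## Invented data: the column is stored in decomposed (NFD) form,
## while the literal typed at the prompt is precomposed (NFC).
occupations <- tibble::tibble(
  OCCUPATION = stringi::stri_trans_nfd(c("abatteur", "abatteuse", "abbé", "abbesse"))
)

occupations %>% filter(OCCUPATION == "abbé")    ## 0 rows: NFD and NFC strings never compare equal
occupations %>% filter(OCCUPATION == "abbesse") ## 1 row: no accent, so both forms coincide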
(pre|de)composed Unicode code points
When you have encoding issues in R, it’s likely that the input file does not match the session locale encoding. Therefore, feeding the correct encoding to the input functions (e.g. encoding="WINDOWS-1252" when using a UTF-8 locale) usually solves the problem. Usually. But in this case, there was no mismatch between the input file and the session locale encoding (both were UTF-8). So, what was wrong with the file? Well, nothing.
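For instance, the usual fix looks something like this (the file name is hypothetical, and fileEncoding is just one of several ways to declare the on-disk encoding):

## Hypothetical example of the usual fix: declare the on-disk encoding
## so that R re-encodes the text to the session encoding while reading.
occupations <- read.csv("occupations.csv", fileEncoding = "WINDOWS-1252")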
UTF-8 is based on the Unicode standard which, among other things, maps characters to integer numbers called code points. But this mapping is not one-to-one, meaning that some characters may have several representations. Those representations are said to be canonically equivalent in that they are displayed the same way. For instance, letters with diacritics like “é” can be represented either as a single code point or as a sequence of code points. Unicode calls “é” “Latin small letter e with acute accent”, and it can be represented as:
- a single character (U+00E9, that’s 233 in decimal)
- or decomposed into an equivalent sequence of the base letter e (U+0065, that’s 101 in decimal) and the combining acute accent “◌́” (U+0301, that’s 769 in decimal).
The latter is called a composite character since it is made of several characters. Both representations therefore stand for the same thing, just encoded differently. Just to give a rough idea of what’s going on, comparing a precomposed “é” to a decomposed “é” is somewhat like asking whether the single number 233 equals the sequence (101, 769), which obviously returns false (as a side note, this is usually not the way characters are actually compared, but I’ll leave the gory details out for the moment).
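You can see this directly at the R prompt by spelling the two forms out as escape sequences, so that no editor normalization gets in the way:

"\u00e9" == "\u0065\u0301"  ## FALSE: same glyph on screen, different code point sequences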
Since a single symbol may have several representations, we need to be able to tell which representations are equivalent so that we can map one representation onto another. That’s where Unicode normalization comes into play. The standard defines four normalization forms:
| Form | Operation |
|---|---|
| Normalization Form D (NFD) | Canonical Decomposition |
| Normalization Form C (NFC) | Canonical Decomposition, followed by Canonical Composition |
| Normalization Form KD (NFKD) | Compatibility Decomposition |
| Normalization Form KC (NFKC) | Compatibility Decomposition, followed by Canonical Composition |
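The canonical forms only merge or split representations of the same abstract character, while the compatibility forms also replace characters that merely look or behave alike. A quick illustration with the “ﬁ” ligature (U+FB01), using the stringi package introduced further down:

stringi::stri_trans_nfc("\ufb01")   ## "ﬁ" : canonical normalization leaves the ligature alone
stringi::stri_trans_nfkc("\ufb01")  ## "fi": compatibility normalization replaces it with plain f + i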
Decomposed characters use NFD normalization and precomposed characters use NFC. So what’s happening here is that the source file was encoded using NFD while the R interface uses the more common NFC.
| Form | Character | Unicode |
|---|---|---|
| NFC | é | U+00E9 |
| NFD | e◌́ | {U+0065, U+0301} |
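The stringi package also provides predicates to check which normalization form a string is already in, which is handy for diagnosing this kind of problem (again written with escape sequences to avoid any editor interference):

x.nfc <- "\u00e9"        ## precomposed é
x.nfd <- "\u0065\u0301"  ## decomposed é
stringi::stri_trans_isnfc(x.nfc)  ## TRUE
stringi::stri_trans_isnfc(x.nfd)  ## FALSE
stringi::stri_trans_isnfd(x.nfd)  ## TRUE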
Now, one can readily check (at the R prompt) that:
## Try this at the R prompt
## ¡¡¡ This won't work in RStudio !!!
eaa.nfc = "é"  ## NFC encoded é
eaa.nfd = "é"  ## NFD encoded é
sprintf("U+%04x", utf8ToInt(eaa.nfc))
[1] "U+00e9"
sprintf("U+%04x", utf8ToInt(eaa.nfd))
[1] "U+0065" "U+0301"
The utf8ToInt() function returns the Unicode code point (233), not the actual UTF-8 encoding of “é” ({0xC3, 0xA9}), which is how the character is actually represented in memory. The name is indeed a bit misleading: utf8ToUnicode() would have been closer to what the function actually does.
Note that the above code won’t work in RStudio. It looks like input strings are automatically normalized (NFD strings are converted to NFC strings transparently).
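A way around this is to build the strings from code points with the inverse function intToUtf8(), which behaves the same everywhere since no literal “é” ever goes through the editor:

intToUtf8(0x00e9)             ## precomposed é (NFC)
intToUtf8(c(0x0065, 0x0301))  ## decomposed é (NFD), returned as a single string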
And, for the sake of completeness, note also that
nchar(eaa.nfc)
[1] 1
nchar(eaa.nfd)
[1] 2
Again, nothing wrong here. By default, nchar() counts the number of Unicode characters. Because of combining sequences such as the decomposed “é”, this may differ from the user’s notion of a character. Passing “width” to the type argument gives a result closer to what you might expect,
nchar(eaa.nfd, type="width")
[1] 1
but this actually gives “the number of columns cat will use to print the string in a monospaced font”. In this case, the function only returns 1 or 2, a distinction related to East Asian character sets, which classify characters as narrow or wide.
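For the sake of completeness, type="bytes" gives yet another count, namely the length of the underlying UTF-8 byte sequence (assuming a UTF-8 locale):

nchar(eaa.nfc, type="bytes")  ## 2: U+00E9 is encoded on two bytes in UTF-8
nchar(eaa.nfd, type="bytes")  ## 3: one byte for U+0065 plus two for U+0301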
Now back to our problem. We know that we need to convert the normalization form of our strings from NFD to NFC. R provides bindings to the iconv API through the iconv() function to convert between encodings but, unfortunately, the iconv library does not handle normalization. Nonetheless, NFD strings can easily be converted to NFC with the stri_trans_nfc() function from the stringi package:
stringi::stri_trans_nfc(eaa.nfd) == eaa.nfc
[1] TRUE
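Applied to the made-up occupations data from the beginning of this post, normalizing the column once is enough to make the comparison behave as expected:

occupations <- occupations %>%
  mutate(OCCUPATION = stringi::stri_trans_nfc(OCCUPATION))
occupations %>% filter(OCCUPATION == "abbé")  ## now returns the "abbé" row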
Thanks to the stringi package, we can also run the earlier code-point example in RStudio:
## Use this in RStudio
eaa.nfc = "é"                           ## NFC encoded é
eaa.nfd = stringi::stri_trans_nfd("é")  ## NFD encoded é
## alternatively: eaa.nfd = "\u0065\u0301"
sprintf("U+%04x", utf8ToInt(eaa.nfc))
[1] "U+00e9"
sprintf("U+%04x", utf8ToInt(eaa.nfd))
[1] "U+0065" "U+0301"
NFD UTF-8 is the native encoding of OS X (and of pretty much nothing else, as far as I know). This may well have caused the problem if the file came from an application that uses the OS X native encoding.
What you see is not necessarily what you get
This example highlights one of the many annoying features of working with text data: sometimes, you simply cannot see the problem. So you have to go low-level and think about the way machines represent character strings. And this happens more often than you might think. On several occasions, I have had to resort to the hexdump command to find out that, for instance, a dangling non-printable character caused read.table() to fail.
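From within R, charToRaw() gives a similar low-level view of the bytes behind a string (the values below assume a UTF-8 locale):

charToRaw(eaa.nfc)  ## c3 a9   : the two UTF-8 bytes of the precomposed é
charToRaw(eaa.nfd)  ## 65 cc 81: "e" followed by the UTF-8 bytes of the combining acute accent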
As many people have pointed out, when dealing with text you need to know how character encoding works. You might not need to know everything about it, but the more text processing you do, the more you’ll have to learn. And this is true even if you’re not working with multilingual data. Many programming environments such as R, Perl or Julia use UTF-8 natively or at least support it (Python supports Unicode too, but represents strings differently). But since the world insists on using Windows, many files are still encoded using MS code pages (or UTF-16 if you’re lucky). UTF-8 support in Windows has improved over the years, but switching to code page 65001 is still likely to break things. And Apple is not helping either by using UTF-8 with an uncommon normalization form.
The reason many programming environments use Unicode-based encodings like UTF-8 is that it makes working with text easier. Thanks to Unicode, you can deal seamlessly with pretty much every known written symbol. This includes writing systems (Latin alphabet, Ελληνικό αλφάβητο, 平仮名, 汉字, …) and other kinds of symbols such as 絵文字 (😋). And you only need a single character set for that, so you don’t have to worry about code pages and the like. Unicode also greatly improves the pattern matching capabilities of regular expression engines thanks to Unicode properties.
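For instance, here is a small sketch of what Unicode property classes look like in an R regular expression (the patterns and test strings are just an illustration):

## \p{L} matches any Unicode letter, whatever the script
grepl("^\\p{L}+$", c("abbé", "Ελληνικό", "平仮名"), perl = TRUE)  ## all TRUE
## ICU regular expressions (via stringi) also know about script properties
stringi::stri_detect_regex("Ελληνικό", "^\\p{Greek}+$")           ## TRUE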
But we will cover that in other posts.