Character Encodings and the Unicode
As shown in a previous post, processing character data may hold its fair share of unpleasant surprises. Again, what you see is not necessarily what you get, and a working knowledge of how character encoding works is the bare minimum needed to survive the encoding green hell.
Character encoding has long been a problem and this is why Unicode was created. Unicode is a standard whose purpose is to provide a consistent encoding of text in order to enable text data exchange and software interoperability. It is maintained by the Unicode Consortium, which started operations at the beginning of the 1990s and, since then, thousands of characters of all kinds have been added to the standard. As of writing, there are over 140,000 Unicode characters.
In what follows, we will see what motivated the design of the standard by looking at how character encodings work and how they have evolved since the 1950s.
But before looking further into the matter, it is important to emphasize that, despite ongoing standardization efforts, many ways of representing characters are still in use. For instance, different operating systems use different encodings. On Windows you have code pages and UTF-16, with only partial support for UTF-8. GNU/Linux and most of the web now use UTF-8. OS X also uses UTF-8, but with yet another normalization form called NFD (you know, think different). In addition, some applications use the OS default encoding while others handle encoding themselves, and this alone may cause additional problems. The funny thing is that the UTF encodings just mentioned are all based on Unicode. So having a standardized character set has not solved the encoding mess yet, and this is not likely to change in the near future. That said, Unicode is still a huge improvement over the sorry state of affairs of the 1980s, as we will soon see.
Encoding issues may arise whenever you have to deal with text. A fairly common case is data importation. I guess anybody working with data has to face encoding issues at some point or another. And this is not simply a matter of correctly displaying characters. Being wrong about the encoding may simply cause string matching to fail, or raise ungodly errors you would never have dreamed of.
Those issues are often presented from the point of view of Web applications, but they can be a problem in any application, data analysis included. Beyond text encoding, the issue here is actually data exchange, the many ways to do it, and the fact that many common formats are not much help when it comes to metadata like the encoding. Not that there is a lack of decent data storage or exchange formats (there are many of them now); it's just that they are still rarely used. And most often, you don't have much choice. You may have to use data collected by somebody else, or pre-processed by you in another environment that gives you little choice regarding the output format. This is how you end up dealing, on a daily basis, with a great variety of file formats which are more or less encoding friendly.
Actually, *.csv is something of a de facto standard for data exchange, but this does not make things better, au contraire. CSV files are “text files” that are easy to parse, but this comes at a price: there is no metadata regarding, for instance, the encoding (except sometimes a BOM). You might guess it by reading the first lines, until the 43,619th line eventually proves you wrong. Truth is, you can have the same kind of problem with some binary formats (designed in the 70s-80s but still in use), which only makes guessing even harder. And, incredibly, spreadsheets are widely used for data management, manipulation and storage (urgh), which often only makes things worse when it comes to data exchange.
Contrary to what some people think, data analysis is hard, and there is an obvious conspiracy to make data analysts' lives even more miserable. Having been in this racket for a long time, I have spent countless hours fixing this kind of problem.
Character sets and character encodings
But the truth is that things regarding encoding have greatly improved over the years, partly thanks to the Unicode standard, which basically aims at providing a consistent encoding of every written symbol ever used. That said, file encoding will likely remain an issue for a long time, one reason being legacy (you know, CSV files and all) and many more bad reasons. This is why understanding how text encoding works is important. And unfortunately, this entails reading things like that gibberish previous post of mine. Or what follows.
Speaking of which, let us start with a somewhat obscure but important fact. Whenever you have to deal with anything Unicode related, you first need to understand that Unicode is NOT an encoding. It is a character set. This is a fundamental distinction. Simply put, a character set is a map that assigns a unique number to a symbol. This number is often called a code or code point. On the other hand, a character encoding tells us how this number should actually be represented in memory by the computer and, therefore, how the computer should operate on it. From the first encodings devised in the 19th century until recently, there was no such distinction (the code point was the encoding). But making this distinction proved necessary when designing a universal encoding scheme.
UCS-2, UTF-8, UTF-16 and UTF-32 are all Unicode-based (well, kind of) encodings and they all map Unicode code points to different machine representations. For instance, the code point U+00E9 of, say, “é” yields:
| Encoding | Value |
|----------|-------|
| UCS-2    | 0x00E9 |
| UTF-8    | 0xC3 0xA9 |
| UTF-16   | 0x00E9 |
| UTF-32   | 0x000000E9 |
Values in the right column are the representations computers actually work with. In this case, all but UTF-8 use the Unicode code point directly, the only difference between UCS-2, UTF-16 and UTF-32 being word size: UCS-2 and UTF-16 use two bytes (16 bits) while UTF-32 uses four. UTF-8 works very differently, but we'll cover that in another post.
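If you want to check these values yourself, here is a quick sketch in R (assuming a UTF-8 locale; the explicit big-endian “BE” variants are used so that no byte order mark gets in the way):

```r
x <- "\u00e9"                                            # "é", code point U+00E9
utf8ToInt(x)                                             # 233, i.e. 0xE9
charToRaw(x)                                             # c3 a9        (UTF-8 bytes)
iconv(x, from = "UTF-8", to = "UTF-16BE", toRaw = TRUE)  # 00 e9        (UTF-16)
iconv(x, from = "UTF-8", to = "UTF-32BE", toRaw = TRUE)  # 00 00 00 e9  (UTF-32)
```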
On the other hand, the Mathematical Script Capital G “𝒢” (code point U+1D4A2) maps to:
| Encoding | Value |
|----------|-------|
| UCS-2    | NA |
| UTF-8    | 0xF0 0x9D 0x92 0xA2 |
| UTF-16   | 0xD835 0xDCA2 |
| UTF-32   | 0x0001D4A2 |
“𝒢” cannot be represented with UCS-2 since its code point is above 2¹⁶ − 1 (that is, above U+FFFF) and UCS-2 is limited to 16 bits. Again, the UTF-8 value is different from the Unicode code point, but this time so is the UTF-16 value. As advertised by its name, UTF-16 uses a 16-bit code unit like UCS-2 and has to resort to an intricate (though simpler than UTF-8) scheme to store values above U+FFFF, which turns it into a variable-length encoding using up to four bytes. Only the UTF-32 encoded value is identical to the Unicode code point.
As one can see, the resulting encoded value of a Unicode code point may greatly vary.
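The same check for “𝒢”, again as an R sketch with the same assumptions as above; note how UCS-2 simply cannot hold the value:

```r
g <- intToUtf8(0x1D4A2)                                  # "𝒢"
charToRaw(g)                                             # f0 9d 92 a2 (UTF-8)
iconv(g, from = "UTF-8", to = "UTF-16BE", toRaw = TRUE)  # d8 35 dc a2 (surrogate pair)
iconv(g, from = "UTF-8", to = "UTF-32BE", toRaw = TRUE)  # 00 01 d4 a2
iconv(g, from = "UTF-8", to = "UCS-2BE",  toRaw = TRUE)  # cannot be converted: "𝒢" does not fit in 16 bits
```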
Why do we need to do anything like this in the first place?
The reason we need to map characters to codes is that current computers are only good at one thing: integer operations within a certain range. Even floating-point arithmetic relies on integer operations. And this is also true of characters. The only things CPUs care about are ints and floats. CPUs have simply no notion of UTF-8, let alone Unicode. Some executable formats like Linux ELF can store strings, but those are really just arrays of 8-bit values. There is no such thing as a “plain text” file: plain text files are binary files consisting of sequences of character codes and nothing else.
But there are additional benefits to using codes for characters. Representing characters with numbers enables us to sort or hash strings and to encrypt messages, just to name a few.
The machine representation of things is therefore usually different from the way humans are taught to represent them, the main reason being that computers typically rely on binary (base 2) systems. You may see “Hello world” printed on your screen, but what the computer “sees” (when using ASCII or an ASCII-compatible encoding like Latin-1 or UTF-8) is:
| Representation | Value |
|----------------|-------|
| Hexadecimal | 0x48 0x65 0x6c 0x6c 0x6f 0x20 0x77 0x6f 0x72 0x6c 0x64 |
| Binary | 01001000 01100101 01101100 01101100 01101111 00100000 01110111 01101111 01110010 01101100 01100100 |
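You can peek at those bytes directly; a minimal sketch in R, assuming an ASCII-compatible locale such as UTF-8:

```r
x <- "Hello world"
charToRaw(x)    # 48 65 6c 6c 6f 20 77 6f 72 6c 64
# The same bytes written out in binary, one 8-bit byte per character:
sapply(charToRaw(x),
       function(b) paste(rev(as.integer(rawToBits(b))), collapse = ""))
```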
Differences in representation matter maybe even more when it comes to numerical computing (I'm using a superlative here only to emphasize the fact that, sometimes, numerical accuracy is the only thing that stands between life and death; edit: turns out I was wrong, Unicode support can also literally be a matter of life and death). Depending on the base, some floating-point numbers can be stored exactly and some cannot, and the number you see on the screen might be different from the number actually stored. But that's another issue.
Since computers do not use characters directly, conventions are required so that computers (or software) consistently use the same codes to represent the same characters. Otherwise, texts would simply be unreadable and string operations would yield wrong answers (that is, we would end up in a world where “é” ≠ “é”, as in my original post). One early example of such a convention is the infamous American Standard Code for Information Interchange, aka ASCII.
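As a quick illustration of that “é” ≠ “é” world, here is a small R sketch: the two strings below are the precomposed and decomposed forms of the same character.

```r
e_nfc <- "\u00e9"    # U+00E9 LATIN SMALL LETTER E WITH ACUTE (precomposed, NFC)
e_nfd <- "e\u0301"   # U+0065 "e" + U+0301 COMBINING ACUTE ACCENT (decomposed, NFD)
e_nfc == e_nfd       # FALSE, although both usually display as "é"
nchar(e_nfc)         # 1 code point
nchar(e_nfd)         # 2 code points
```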
ASCII was devised in the 1960s and the code chart of its final version (1967) looks like this:
|     | 0   | 1   | 2   | 3   | 4   | 5   | 6   | 7   | 8   | 9   | a   | b   | c   | d   | e   | f   |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 0x0 | NUL | SOH | STX | ETX | EOT | ENQ | ACK | BEL | BS  | HT  | LF  | VT  | FF  | CR  | SO  | SI  |
| 0x1 | DLE | DC1 | DC2 | DC3 | DC4 | NAK | SYN | ETB | CAN | EM  | SUB | ESC | FS  | GS  | RS  | US  |
| 0x2 | SP  | !   | "   | #   | $   | %   | &   | '   | (   | )   | *   | +   | ,   | -   | .   | /   |
| 0x3 | 0   | 1   | 2   | 3   | 4   | 5   | 6   | 7   | 8   | 9   | :   | ;   | <   | =   | >   | ?   |
| 0x4 | @   | A   | B   | C   | D   | E   | F   | G   | H   | I   | J   | K   | L   | M   | N   | O   |
| 0x5 | P   | Q   | R   | S   | T   | U   | V   | W   | X   | Y   | Z   | [   | \   | ]   | ^   | _   |
| 0x6 | `   | a   | b   | c   | d   | e   | f   | g   | h   | i   | j   | k   | l   | m   | n   | o   |
| 0x7 | p   | q   | r   | s   | t   | u   | v   | w   | x   | y   | z   | {   | \|  | }   | ~   | DEL |

The code of a character is obtained by combining its row and column labels: “A”, at row 0x4 and column 1, has code 0x41.
ASCII is composed of 128 characters coded from 0 to 127 and is organized in two parts. The first 32 positions are reserved for control characters like CR (carriage return) and LF (line feed). ASCII was developed from telegraph codes and first targeted machines like teletypes, not computers (ever wondered why Windows uses CR+LF for newlines?). This is why there are so many of them, many being obsolete now. The second part is made of printable characters that represent letters, digits, punctuation marks and a few miscellaneous symbols (except for the last one, which corresponds to the DEL key).
As stated before, one benefit of using numbers is that you can compare characters and therefore sort them. Here, digits come first, then uppercase letters, and lowercase letters come last. In addition, to change case, you simply have to add or subtract 32.
128 = 2⁷ characters, so ASCII only needs 7 bits of storage at most (127 in decimal is 01111111 in binary). This leaves one bit to spare on computers, which usually operate on 8-bit bytes. The leftover bit was sometimes used as a parity bit to detect errors in transmission and, later, to extend the character set in a very disorderly manner, as we will soon see.
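The ordering and the ±32 case trick are easy to check; a small sketch in R:

```r
utf8ToInt("A")                    # 65 (0x41)
utf8ToInt("a")                    # 97 (0x61) = 65 + 32
intToUtf8(utf8ToInt("q") - 32)    # "Q": subtracting 32 upper-cases an ASCII letter
utf8ToInt("0") < utf8ToInt("A")   # TRUE: digits come before uppercase letters
```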
Babel
The problem Unicode tries to solve is that not only do people around the world speak very different languages, they also use quite different writing systems. There are of course alphabetic systems, but also syllabic and logographic ones. Some are written from left to right and some from right to left (or both). Some use only a few dozen characters, some use thousands. Some languages like Japanese or Korean even mix several writing systems. And some extend the basic set of characters using special graphemes, as in Latin script where diacritical marks like “◌´” (acute accent), “◌̃” (tilde) or “◌¸” (cedilla) are used to change the sound or the meaning of the glyphs to which they are added. And the set of diacritics changes from one language to another, and so does their meaning.
But not only do people use many writing systems, they also eventually started to encode them in their own ways, in a more or less independent fashion. Over the years, many schemes have been devised to store subsets of all those characters on computers (as shown by the output of the R function iconvlist()). In the early days of computing there were IBM's BCD and later EBCDIC, then ASCII and its ECMA-6 extensions for Latin scripts, and then came ISO-2022-JP, ISO-2022-CN and ISO-2022-KR for CJK characters, just to name a few. All those encodings share a common feature: they were devised for a specific region or country (English-speaking countries, Europe, China, Japan or Korea) and its corresponding writing system. In addition, they are partially or totally incompatible with one another. This basically meant that you could not have multilingual computers. And software support beyond restricted sets such as ASCII was often poor.
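To get an idea of how many of those encodings are still floating around, you can list the ones your iconv implementation knows about (the exact list is platform dependent):

```r
length(iconvlist())    # typically several hundred encoding names
head(iconvlist(), 10)  # the first few of them
```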
ASCII has long been one of the most widely used encodings. But 95 printable characters is somewhat limited, even if you're writing, for instance, your résumé (and not your resume, nor your rÃ©sumÃ©) in English. ASCII only needs 7 bits and thus leaves one bit free, which various encodings used to extend ASCII's restricted character set. But this led to a situation where the 128-255 range was interpreted differently depending on the encoding (and a character could have several codes). Code pages were an attempt to be more consistent, but the problem somehow remained the same, since every single code could have a different meaning depending on the code page. Still, you could write in Latin script or Greek script or Cyrillic script… but multilingual composing was often an exclusive OR. You could write, say, in English AND Russian (since KOI8-R is a Soviet extension of ASCII) but not in Greek AND Russian. In addition, extensions of ASCII like ISO/IEC 8859-1 only gave incomplete coverage, missing characters like “ǿ”, “œ” or “ẞ”.
For example, let's take a look at the ISO/IEC 8859-1 and Windows-1252 character sets:
Both character sets are derived from ASCII and, in turn, Windows-1252 was derived from ISO/IEC 8859-1. They are mostly consistent except for the 0x80-0x9F range, where control characters are replaced by letters and other miscellaneous symbols. Thus, an application assuming ISO/IEC 8859-1 will not display characters in this range correctly.
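A quick way to see the divergence (a sketch in R; encoding names such as "CP1252" and "latin1" are the ones commonly accepted by iconv, but they may vary across platforms):

```r
b <- rawToChar(as.raw(0x80))              # a single 0x80 byte
iconv(b, from = "CP1252", to = "UTF-8")   # "€": the euro sign in Windows-1252
iconv(b, from = "latin1", to = "UTF-8")   # U+0080, a C1 control character in ISO/IEC 8859-1
```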
Let's take another example. The following table shows the characters encoded in the 0x80-0xFF range by Windows-1252, Windows-1253 (Greek), KOI8-R, CP-852 (Central European languages) and CP-863 (Canadian French):
Again, some character sets overlap but others simply have nothing in common (and the 0x00-0x7F range of CP-852 and CP-863, not shown here, is not ASCII at all). In addition, some identical characters have different code values depending on the character set.
Since different encodings sometimes share the same codes, saving a text using one encoding and opening it assuming another results in the text being garbled with strange sequences of characters, or even totally unreadable. This is how you may end up reading someone's rÃ©sumÃ©. You can find many examples of this kind of mojibake on the Web, or see it for yourself by opening any web page containing non-ASCII characters and changing the page encoding. And, sometimes, it wasn't because someone sent you a text in a different encoding; it was just Windows getting confused by the many encodings it supports. As you may well be aware, “Bush hid the facts”.
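The classic failure mode is easy to reproduce; here is a sketch in R where UTF-8 bytes are (wrongly) decoded as ISO/IEC 8859-1:

```r
x <- "\u00e9"                             # "é", stored as the UTF-8 bytes 0xC3 0xA9
iconv(x, from = "latin1", to = "UTF-8")   # "Ã©": each byte reinterpreted as a Latin-1 character
```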
You might think that reading a text about “hÃ©tÃ©roscÃ©dasticitÃ©” and “hÃ©tÃ©rogÃ©nÃ©itÃ©” is not so bad after all: just make the appropriate transliterations while reading. But this quickly gets really annoying. And for once, this is not a matter of exception culturelle française and us French being a pain as always. Most languages that use Latin script have additional “funny” characters. Spanish has the tilde (“◌̃”) and the inverted exclamation mark (“¡”). German has the Eszett (“ß”) and the umlaut (“◌̈“). And the list goes on and on. In addition, depending on the encodings or scripts involved, the entire text may be garbled. Even the most basic form of natural language processing then becomes downright impossible.
The Confusion of Tongues thus led to the Confusion of Encodings. But as early as the 1970s, people in the computer business started to repent for their sins. Interestingly, the normalization of floating-point numbers started about the same time (and led to the IEEE 754 standard for floating-point arithmetic, adopted in 1985). I think it is no coincidence that the industry started to normalize the way machines represent things when the personal computer market started to grow. Heterogeneous machine representations of numbers and strings were simply hindering both the hardware and the software business (you need software so that people buy hardware, and you need hardware so that people buy software). So, in order to sell their products all around the world, many vendors needed to handle all the different writing systems their potential clients might use. By the way, the first draft (1988) of the Unicode Standard contains a list of writing systems ranked by the cumulative GDP of the countries using each system…
Anyhow, to make a long story short, this mess led to the development of a universal character set capable of representing any character ever written since the dawn of time.
Enter the Unicode
Work on a universal character set began in the late 1980s as a joint effort between Xerox and Apple, based on earlier work done at Xerox. This working group eventually led to the incorporation of the Unicode Consortium in 1991. The Unicode Consortium is a non-profit organization whose purpose is to maintain and publish the Unicode Standard. Its members are mainly computer software and hardware companies (like Adobe, Apple, IBM or Microsoft), along with governments, organizations and individuals.
After several drafts, the first version of the Unicode standard was published in October 1991 and consisted of 7,161 characters. About the same time (1990), the International Organization for Standardization (ISO) drafted another character set called the Universal Coded Character Set (UCS). At first, both standards were quite different, but the two organizations eventually joined forces. The UCS is defined by the international standard ISO/IEC 10646, which is code-for-code identical to Unicode, and revisions of both standards are synchronized.
I'm mentioning UCS because it is yet another source of confusion. So, whenever you read about UCS or ISO/IEC 10646, just think Unicode (even though this is not technically true). And, for the sake of simplicity, that's what I've been doing already (UCS-2 and UTF-8 are actually based on ISO/IEC 10646, contrary to what I implied earlier).
Unicode is different from most previous character encoding initiatives in that it is an attempt at creating a universal character set. But, like any other character set, the Unicode Standard first consists of a code chart that maps symbols to a name and a code (beware that the PDF file is 108 MB). The first eight characters are:
| Code point | 0x0 | 0x1 | 0x2 | 0x3 | 0x4 | 0x5 | 0x6 | 0x7 |
|------------|-----|-----|-----|-----|-----|-----|-----|-----|
| Character  | NUL | SOH | STX | ETX | EOT | ENQ | ACK | BEL |
This should look familiar, since it is exactly the head of the venerable ASCII table we saw earlier. Unicode was built on earlier standards like ISO/IEC 8859-1, which is in turn compatible with ASCII. So the first 256 characters of Unicode and ISO/IEC 8859-1 are identical.
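A direct consequence of this backward compatibility is that, for those first characters, the Unicode code point and the legacy code coincide (R sketch):

```r
utf8ToInt("A")        # 65  = 0x41, the ASCII code of "A"
utf8ToInt("\u00e9")   # 233 = 0xE9, the ISO/IEC 8859-1 code of "é"
```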
Also, like ASCII, Unicode code points are allocated in groups of related characters called blocks. Therefore, due to backward compatibility, the chart starts with “Basic Latin” (U+0000–U+007F, i.e. ASCII), followed by “Latin-1 Supplement” (U+0080–U+00FF, i.e. ISO/IEC 8859-1). Then come:
- several Latin script extensions (“Latin Extended-A”, U+0100–U+017F; “Latin Extended-B”, U+0180–U+024F)
- other Latin and non-Latin European script blocks (“IPA Extensions”, “Spacing Modifier Letters”, “Combining Diacritical Marks”)
Then, from U+0370:
- “Greek and Coptic” (U+0370–U+03FF)
- “Cyrillic” (U+0400–U+04FF) and “Cyrillic Supplement” (U+0500–U+052F)
- “Armenian” (U+0530–U+058F)
- scripts derived from Aramaic (Hebrew, Arabic, Syriac, …)
- etc.
In all, there are 308 of them now.
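Since code points are allocated by blocks, picking a character from a given block is just a matter of choosing a code point in the right range (R sketch, one character per block):

```r
intToUtf8(0x0041)   # "A"  Basic Latin
intToUtf8(0x00E9)   # "é"  Latin-1 Supplement
intToUtf8(0x03B1)   # "α"  Greek and Coptic
intToUtf8(0x0414)   # "Д"  Cyrillic
```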
But there is more to the Unicode standard than a simple chart. Unicode adds rules and algorithms for collation, normalization forms (NFC, NFD and all that) and script direction (left-to-right or right-to-left). Unicode also defines properties, a kind of character metadata that binds characters to attributes. For instance, the General Category (GC) property gives the type of the character, classified as Letter, Mark, Number, Separator, Punctuation, Symbol or Other. In addition, the standard provides rules for case folding, line breaking, ligatures, sorting, regular expressions and security.
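Those properties are directly usable in practice, for instance through the \p{…} classes of Perl-compatible regular expressions (here in R; this assumes a UTF-8 locale and a PCRE built with Unicode support, which is the common case):

```r
grepl("\\p{L}",  "\u00e9", perl = TRUE)   # TRUE : "é" is a Letter
grepl("\\p{Lu}", "\u00e9", perl = TRUE)   # FALSE: but not an uppercase Letter
grepl("\\p{Sc}", "\u20ac", perl = TRUE)   # TRUE : "€" is a currency Symbol
```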
The latest version of the standard is 1,030 pages long, and there are over 50 additional technical reports. From this alone, it should be clear that assigning numbers to characters is not as straightforward as it may sound. One reason is that languages and their corresponding scripts are highly varied and, sometimes, can be contrived or even inconsistent. In addition, Unicode did not start from scratch and therefore had to deal with the legacy of earlier encoding practices (as well as the legacy of early design decisions that did not turn out so well as time went by and new characters were added).
But we’ll cover that in more details in other posts.
Some references
Wikipedia has literally bazillions of UTF-8 encoded pages related to Unicode and character encoding in general. It goes as far as having a page dedicated to (every?) Unicode block, along with historical records of the Unicode Consortium documents leading to their adoption and updates.
The unicode.org site is the main source of information on pretty much everything regarding the standard.
There are many pages dealing with one aspect or another of Unicode on the Web. Here are a few. This page is likely to be one of the first hits of any web search regarding Unicode and is totally worth reading, as is this one. There is also this page, which is less Web-centric and more geared toward data analysis.