Some Coded Character Sets Design History
Last March, I had the pleasure of talking about the history of coded character sets at a seminar of the Gouvernance et régulation d’Internet working group of the CIS (Centre internet et société).
I’d like to thank again the organizers, Julien Rossi and Lucien Castex, for the invitation, Valerie Schafer for the discussion, as well as everyone who attended the seminar for their stimulating questions.
The aim of this talk was to show how Coded Character Sets (CCS) have gradually evolved into highly sophisticated pieces of software engineering since the 1950’s while « caring about plumbing », that is to say understanding how varying material and technical requirements, as well as changes in usage, shaped the design of CCS over the course of decades, and what their legacy is today.
Slides —in French— are available here. The talk timeline stops right before ISO/IEC 10646–Unicode because covering it would simply have been too much. Therefore, what follows highlights a few points from the talk but also extends it, especially with regard to Unicode and CCS design in general. Let’s start with an oft-overlooked fact : designing CCS is much harder than it sounds.
- Coded Character Sets design is hard
- Coded Character Sets do not exist in isolation which makes everything harder
- The reasons why Coded Character Sets design is hard have changed over time
- Coded Character Sets design is bound to hardware
- Coded Character Sets design is bound to usage
- Coded Character Sets design is bound to legacy
- Coded Character Sets history is hard
- A bird’s view on CCS development
Coded Character Sets design is hard
One of the main reasons is that character coding is a hard mix of linguistics, semantics and computer science. You simply cannot devise a character set without a deep understanding of how both computers and scripts, and therefore languages, work. A very interesting example of the interplay between linguistics and engineering is the ISCII standard, which relies on structural properties of nine Brahmi–derived writing systems used in India. An even more sophisticated example is, obviously, the Unicode Standard.
And what makes it especially hard is that, as stated elsewhere, human scripts are, to put it mildly, very varied. The truth is, we’re simply not dealing with mathematical, regular objects here but with semantics. One huge problem is that human languages and scripts carry a lot of ambiguity and fuzziness. As a matter of fact, merely defining what a character is, and therefore simply what it is we are coding, turns out to be surprisingly difficult.
Therefore, when designing a map between numbers and characters, chart entries should —but do not necessarily— have a well defined and unambiguous meaning. And building such a map alone is often much harder than it sounds. The 7-bit, 128-character ISO/IEC 646 character set took more than six years to complete. And it took more than fifteen years to design the Unicode core specification, not counting the time spent on designing its predecessor XCCS, which started circa 1978. And the standard is still a work in progress since new characters are added every year.
Also, there is more to CCS than an ~injective map because languages and scripts make it impossible to take a general axiomatic view of the matter as with, say, numbers or regular graphs1. This is why the Unicode Standard has to resort to tables and needs to define complex table–based algorithms for various tasks such as segmenting or sorting.
Indeed, sorting is a big issue for multi–script character sets. The thing is that, contrary to early 5, 6 or 7-bit codes, you cannot sort directly on the codes because they can no longer be ordered in a straightforward way. For instance, different languages may use different sorting rules for, say, the Latin script. Also, there are several ways to sort sinograms —either by radical, sound, number of strokes, shape or by combining several features such as radical and strokes—. In addition, Unicode is built incrementally and includes several pre-existing CCS that were later extended in other blocks, so characters belonging to the same script or character category may be far apart2. Thus, the UC provides a multilevel algorithm as well as weights to sort strings, breaking arithmetic as we know it along the way. For instance, in an Orwell–ish moment —you know, War is Peace and 2 + 2 = 5 if the Party sayz so—, TR10 states that « the fact that x is less than y does not mean that x + z is less than y + z ». Dude.
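To make the difference concrete, here is a small Python sketch (the fr_FR.UTF-8 locale name is an assumption about the host system) : sorting on raw code points scatters accented letters, while a collation-based sort restores the expected order.

```python
import locale

words = ["zèbre", "zone", "Zurich", "écran", "eau"]

# Sorting on raw code points puts "é" (U+00E9) after "z" (U+007A)
# and all upper-case letters before the lower-case ones.
print(sorted(words))

# A locale-aware sort replaces code point values with collation weights.
# This assumes a French UTF-8 locale is installed on the system.
locale.setlocale(locale.LC_COLLATE, "fr_FR.UTF-8")
print(sorted(words, key=locale.strxfrm))
```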
Coded Character Sets do not exist in isolation which makes everything harder
When designing a CCS, you somehow need to take into account a few other things such as :
- fonts
- user input and interaction
- display
- computational cost of processing character codes and hardware requirements (CPU clock rate, memory,…)
The Unicode Standard is just a useless piece of paper without fonts. And even the « smartest » fonts are just useless static binaries without the hardware and software suites (X11, Gnome, Qt, Gecko,…) that handle user input, rendering and display and that, in turn, need to conform to the Unicode Standard.
Screens are and have always been bitmaps of varying resolutions. But, back in the days of monochrome VDU monitors and text-mode terminals, displaying 94 printable characters meant positioning 8 × 8 pixel monospaced glyphs on a 24 × 80 grid. Nowadays, display is akin to a book page. And, in a multi–script environment, this means among other things being able to arbitrarily switch direction —write from left to right then right to left then vertically and so on—. Also, rendering must be able to handle an unlimited number of diacritics, complex writing systems such as Arabic, regional indicators3 or tags, just to name a few. In order to do this, rendering engines in turn use rules, algorithms and data provided by the Unicode Standard.
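As a small Python illustration of what a renderer has to cope with, here is a base letter carrying stacked combining marks and a pair of regional indicators (how they actually display depends entirely on the font and rendering engine) :

```python
import unicodedata

# A base letter followed by two combining marks: the renderer has to stack
# an arbitrary number of diacritics on a single base character.
s = "e\u0301\u0308"
print(s, [unicodedata.name(c) for c in s])

# Two regional indicator symbols; a capable renderer shows the French flag,
# a simpler one just the letters F and R.
flag = "\U0001F1EB\U0001F1F7"
print(flag, [unicodedata.name(c) for c in flag])
```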
And this is why you also need to take CCS usage into account in order to understand CCS design, as we’re about to see.
The reasons why Coded Character Sets design is hard have changed over time
Computing has changed a lot since 1945. And so have CCS. But, despite an ever growing process of hardware abstraction, the two are tightly knit together. Also, changes in computer usage have had a tremendous impact on CCS design in the 1970–90’s.
But the one thing that did not change is that computers are only good at one thing : discrete mathematics in a certain range. Floating-point operations or character and string processing are still implemented by representing data as sequences of integers of varying length.
Now, it should be noted that letters and numbers are not only different in nature but also with respect to storage requirements. Of course, it’s hard to compare letters and digits, but here are a couple of examples.
Just as the word is the basic unit of a sentence, the basic unit processed by a CPU is called a word in computer science. As Wikipedia puts it, it is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The maximum width often characterises an architecture. In the early 1980’s, the word size on most PCs was 8 bits but it has steadily grown to 64 bits since then4.
Of course, a processor can handle data larger than its word size, but this is much slower as it needs to be done programmatically and requires more cycles and memory loads. And, somehow, this is what happens when processing text : natural-language words are usually wider than the machine word.
Now, with a 32-bit word (4 bytes), we have 2^{32} = 4,294,967,296 different unsigned integers (ℕ). And we also have 2 \times (2^8 - 2) \times 2^{23} = 4,261,412,864 (normalised) unevenly spaced single precision IEEE 754 floats5.
Let’s say we want to store Euler’s number e whose value —shown up to 50 decimal places— is 2.71828182845904523536028747135266249775724709369995… This number is irrational and transcendental, so its expansion never ends. Therefore, we can only store an approximation of it. With single precision, it looks like this (in base 2, sign bit first) : 0 10000000 01011011111100001010100. Its exact character representation is 2.71828174591064453125 and, with an 8-bit character set, this means 22 characters, hence 176 bits. That’s 5.5 times more than the 32-bit float. And the same thing goes for double precision floats : the character representation of 0 10000000000 0101101111110000101010001011000101000101011101101001, namely 2.718281828459045090795598298427648842334747314453125, needs more than six times as many bits as the 64-bit float.
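For the curious, the single-precision case can be checked with a few lines of Python (this merely restates the numbers above and is not part of the original talk) :

```python
import math
import struct
from decimal import Decimal

# Round e to an IEEE 754 single-precision float and look at its 32 bits.
packed = struct.pack(">f", math.e)
bits = struct.unpack(">I", packed)[0]
print(f"{bits:032b}")           # 1 sign bit, 8 exponent bits, 23 mantissa bits

# The exact decimal value of that approximation, spelled out as characters.
# In an 8-bit character set, each character costs one byte.
text = str(Decimal(struct.unpack(">f", packed)[0]))
print(text, f"({len(text)} characters, i.e. {len(text) * 8} bits vs 32)")
```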
Obviously, this would be the reverse for small numbers but, for the most part, the character representation of a number needs more space than its binary numeric counterpart.
Here is another example. With a 32-bit word, we can index information about more than half of the world’s population6.
Now, with characters, the number of words you can index depends on the encoding. But if each character fits in one byte, you can index only one character per byte. Hence, with 4-byte words, you only have 4 letters, so you can store and thus index only a fraction of the words of any given language. Using a very, very rough approximation based on the hunspell fr_FR.dic file, you can index ~.043 of French words with four bytes. And this assumes that every character used by the French language is available in our 8-bit encoding (Latin-1 has a few letters missing : Œ, œ and Ÿ). As a side note, this would be meaningless for German since the set of words is infinite, the number of possible compounds being theoretically unbounded, so there is simply no maximum-length word.
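The estimate above can be reproduced roughly as follows (a sketch only : the dictionary path, its encoding and the handling of the .dic format are assumptions about a typical hunspell installation, where entries carry affix flags after a “/” and the first line is an entry count) :

```python
# Count how many entries of a hunspell dictionary fit in a 4-byte machine word
# when each character takes a single byte. Path and format handling are assumptions.
with open("/usr/share/hunspell/fr_FR.dic", encoding="utf-8") as f:
    next(f)                                     # skip the entry count on the first line
    words = [line.split("/")[0].strip() for line in f if line.strip()]

short = sum(1 for w in words if len(w) <= 4)    # at most 4 letters = at most 4 bytes
print(f"{short / len(words):.3f} of {len(words)} entries fit in a 32-bit word")
```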
And the number of words indexed with a multi–byte encoding such as UTF-8 or UTF-16 is even lower since the actual size depends on the code point. With one byte, you only have 94 « graphical » UTF-8–encoded characters7, for code points below 2^7. Then the number of bytes quickly grows to 2 bytes, then 3 bytes for code points \ge 2^{11} and 4 bytes for code points \ge 2^{16}. And the same holds for UTF-16, which uses two bytes for \text{cp} \lt 2^{16} and four otherwise. That is to say, both encodings use 4 bytes for code points outside the BMP. Hence, the proportion of 4-byte UTF-8–encoded words goes down to .039 for French when using NFC composition and to .038 with NFD composition. The proportion only slightly decreases here because most characters used in French are located at the head of the UC chart and thus only take one byte in UTF-8. Things are likely to be different for any script outside the Basic Latin block, especially with code points \ge 2,048.
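Those size thresholds are easy to check in Python with a few sample code points :

```python
# Encoded size of a few code points in UTF-8 and UTF-16 (big-endian, no BOM).
for ch in ("A", "é", "€", "𝄞"):     # U+0041, U+00E9, U+20AC, U+1D11E
    print(f"U+{ord(ch):04X}  UTF-8: {len(ch.encode('utf-8'))} bytes, "
          f"UTF-16: {len(ch.encode('utf-16-be'))} bytes")
```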
Bottom line is you usually need more than 4 or 8 bytes to store a single meaningful word. And, on top of that, words usually come in sentences, paragraphs… Again, even if you can’t really compare the two, with text data you simply need many more bytes to encode the information. And this means more memory and computational power.
Coded Character Sets design is bound to hardware
Because of those memory and computational power requirements, hardware was a major factor in CCS design well into the 1990’s.
In the early days of computing, CCS were tied to machines since there was no OS. Resources were scarce, expensive and unreliable. Paper tape and punched cards were still in wide use. With only 80 characters —not bytes— per card, punched cards took a long time to process. Of course, there were also the iconic magnetic tapes : they could store more data and provide faster access, but saw widespread adoption only later on. Therefore, loading data into main memory took a ridiculous amount of time compared with today’s hardware throughput and, by the way, this is why time-sharing machines were invented : most of the time, the processor was idle, starving for data.
Granted, memory access is still the main computing bottleneck, but the latency is simply not comparable any more.
Early CCS were designed for things like telecopy or terminals and, therefore, data transfer —this is why the term information interchange pops up everywhere : ASCI̲I, ISCI̲I, VISCI̲I…—. Communications were expensive, unreliable and slow. No Gigabit Ethernet back then. Therefore, character sets had to be really tight. And this, in turn, limited each CCS to one script —and often, to a subset of it—. For instance, there is a variant of ISO/IEC 646 for French, but several more or less common characters are missing. And some scripts, such as hànzì, simply cannot be encoded with 7 or 8 bits.
16-bit architectures only became common on the PC market in the late 1980’s. And, not surprisingly, most CCS designed in the 1980’s used 8 bits at most. But there are exceptions, like Japan, where 16-bit PCs became available as early as 1983, one reason being CCS : kanji support required more computing power. And the Unicode Standard would simply not be possible without ≥ 32-bit architectures because, whatever encoding you are using, you will need 32-bit words once the data is decoded8.
And this is not only a matter of word and memory size but also of algorithmic complexity and, therefore, clock speed. Just take a look at normalization or, yet again, sorting in the Unicode Standard to convince yourself.
Nowadays, things have obviously changed with regard to CPU word size, clock speed, memory size and transfer rate alike. But the consequences are still with us through various kinds of legacy.
Coded Character Sets design is bound to usage
Using computers to write documents, emails, blog posts and all kinds of messages seems obvious today. But back in the 1960’s, it wasn’t. Early computers were, well, really computers, mainly used for —military— numerical computation and for processing data such as payrolls, censuses and official survey questionnaires9, or for making election-result predictions10. And, as hinted before, early CCS were designed for data transfer (hence the II suffix) to operate those computing machines and fetch results through terminals. But things started to change during the 1970’s, when people really began using computers for writing texts. New kinds of software appeared then, such as troff or Knuth’s TEX, along with a whole new set of problems such as text direction as well as complex writing or typographic rules.
Word processors —in a very broad sense— have also contributed to CCS design in that they have greatly helped extend the set of available characters (mathematics, typography, ornaments,…), among other things.
TEX is interesting in that it shows the state of things back in 1977. Knuth had to design pretty much everything from scratch, and this includes extended character sets and a font system (metafont, which is the name of both the font description language and its interpreter).
Also, the design of the TEX engine itself reflects how the scarcity of resources impacted software design back then. Early versions of the engine —TEX 78 and TEX 82— were written and tuned for the Stanford AI Laboratory PDP-10 computer. The PDP-10 is a 36-bit machine with 256 kilowords of memory (1152 kB), usually packing five 7-bit ASCII characters into a word11. And Knuth had to resort to many tricks to economise memory. This includes keeping the hash tables as packed as possible or composing letters from a diacritic positioned on a base letter instead of storing accented letters and the like. But this had the annoying consequence that letters with diacritics could not be hyphenated automatically (this was eventually fixed). Also, TEX fonts could only have up to 128 characters (later increased to 256), with only 16 different widths and heights allowed. And, according to this tex.stackexchange answer, the 36-bit word size of the PDP-10 is the reason why TEX does not allow numbers in macro names. But despite these efforts to make the best use of available resources, compilation was slow, so it seems12.
TEX is still widely used, and new TEX engines have kept being written since the 1980’s ; thanks to this, the venerable PDP-10 still lives with us today. And this is only one of many cases of legacy.
Coded Character Sets design is bound to legacy
Decisions made in the 1960’s or later still have consequences today, as shown by the 7–8-bit legacy in both hardware and software.
Most if not all languages designed in the 1960-70’s, such as C/C++ —(¡¿¡¿ unsigned !?!?) char—, Pascal —char or byte—, Fortran —character from F77 on, replacing Hollerith constants—, Ada —CHARACTER—,… use a single byte to encode a character. The reason is that they were designed when 7–8-bit CCS were all the rage. Also, this allowed for efficient processing of character strings : upper- or lower-casing simply meant adding or subtracting a constant and, in C, random access simply meant incrementing or decrementing pointers, as sketched below.
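A minimal illustration of the casing trick (in Python rather than C, but the arithmetic is the same) :

```python
# In ASCII and ISO/IEC 646, upper- and lower-case Latin letters differ by
# a single bit (0x20), so case conversion is an addition or a bit operation.
print(ord("a") - ord("A"))      # 32, i.e. 0x20
print(chr(ord("a") - 0x20))     # 'A'
print(chr(ord("Z") + 0x20))     # 'z'
```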
But it seems to me that this is also one of the main reasons —besides word size— why most CCS designed in the 1970-1990’s were 8-bit codes, as illustrated by the sad story of Windows wide characters.
Microsoft contributed early on to the Unicode Standard (circa 1990), and Windows was one of the first operating systems to support Unicode with the release of Windows NT in 1993. But since Windows NT was written in C, this means Windows adopted Unicode way before the C standard did. Microsoft had to design Unicode support from scratch and implemented it in the MSVC (Microsoft Visual C) compiler. Windows NT uses Unicode internally, in both its kernel and its API, so this support was relied upon in many, many lines of code13.
Now the fun part is that, when the C standard committee included Unicode support in C99, it took a somewhat different, incompatible route from Microsoft’s. And, at that point, people at Microsoft had to make a decision : either adopt the new standard or stick to their own implementation. But since their home–brewed Unicode support had been in production for years, and since abiding by the new standard would have meant rewriting the Windows NT kernel and API as well as every piece of C software in the Windows ecosystem, they chose to stick with it. Adopting C99 would have amounted to breaking pretty much every working Windows application in use at the time.
Users complain about characters but complain even more when compatibility breaks.
So this is why you sometimes need to write a Windows-specific version of some parts of your code when handling character strings. On top of that, the API comes in two versions : one for Unicode strings and one for “ANSI” —i.e. ASCII— strings, to ease the transition between the two. And let us not forget about Windows code pages14. Also, when Microsoft speaks about Unicode, they really mean UTF-16, not UTF-8. In the early 1990’s, Unicode was still a 16-bit code ; therefore, the wchar_t wide character C type means 16 bits for the Win32 API. But not very long after Microsoft shipped Windows NT, the Unicode Consortium realised its mistake and the Unicode code space outgrew 16 bits with version 2.0, published in July 1996 ; Microsoft then had to transition from the fixed-length UCS–2 encoding to the variable-length UTF-16 encoding to ensure backward compatibility.
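The difference between UCS-2 and UTF-16 shows up as soon as a code point falls outside the BMP ; a quick Python check (illustrative only) :

```python
# A code point outside the BMP needs two 16-bit code units (a surrogate pair)
# in UTF-16, which is precisely what fixed-width UCS-2 could not represent.
ch = "\U0001F600"                           # U+1F600, outside the BMP
print(ch.encode("utf-16-be").hex(" ", 2))   # d83d de00 : high + low surrogate
```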
As a matter of fact, Unicode support in C/C++ is kind of messy. For instance, wchar_t is not well defined in C/C++ : it is likely to be —but not necessarily— 32 bits long on Linux (and OS X ?) and, in this case, assumes UTF-32. Over the years, several revisions of the C/C++ standards have addressed Unicode support. So now, in addition to char and wchar_t, you have char8_t, char16_t and char32_t as well as (C++ only) std::wstring, std::u16string,… in addition to std::string, and the whole set of functions/methods attached to them.
Coded Character Sets history is hard
Before moving on to some historical accounts, it should be noted that I did not plan to write an actual history of CCS. This would obviously have been impossible in such a short amount of time. But it also seems (to me anyway) an impossible task at the moment, the main reason being lack of information.
Many parties have designed CCS and contributed to CCS development over the years : standardisation bodies and companies of course, but also developers and academics. And, indeed, it’s easier to find information on standards than on in–house CCS and, in this respect, easier to find information on IBM or MS than on Siemens, Bull, Olivetti or Fuji15, NEC or Fujitsu, just to name a few16. Thus, CCS timelines for many scripts such as CJKV, Arabic, Thai… have many gaps.
For instance, the first CCS for Japanese katakana —and katakana only— that I know of is the ISO/IEC 646–compatible JIS X 0201 —7 ビット及び 8 ビットの情報交換用符号化文字集合, 7-bit and 8-bit coded character sets for information interchange— published by JIS in 1969. But my bet is that there were Japanese CCS before 1969, one reason being that the Japanese computer industry grew out of communications technology17. And it turns out that CCS standards often come after in–house CCS.
Also, I spotted a fair amount of inaccuracies —including in peer-reviewed academic papers— and lots of printed legends —especially regarding ASCII—.
Therefore, the talk focuses on examples of CCS that highlight some features of character coding in relation to hardware and usage.
A bird’s view on CCS development
When it comes to computer manufacturing —and not simply usage— and, therefore, to CCS, things started in three areas :
- obviously North America, but in connection with Western Europe
- East Asia —first in Japan and then in Korea and PRC—
- The so-called Eastern Bloc
And since people in those three areas used different scripts, vendors started designing CCS to suit their needs in an independent but eventually somewhat coordinated way.
In the West, early CCS were 5 or 6 bits long. And each vendor first tackled the issue on its own, so that by the end of the 1950’s the situation was already an utter mess (over fifty known codes in the US, with a dozen at IBM alone). So various standardization bodies created working parties to, well, standardize the coding of Latin script. It turns out that the people who took part in those initiatives also took part in ISO, so enter ISO. And after seven long years of tedious negotiations, this led to the adoption of ISO/IEC 646.
As stated before, ISO/IEC 646 was designed for information interchange, not for writing academic essays. Thus, ISO/IEC 646 actually endorsed many existing practices in both the telecommunication and computer industries and includes an extended set of control characters. In the 1960’s, the OSI and TCP/IP models were yet to be invented. Textual telecommunications used something akin to in–band signaling, where control information is sent through the same channel as the message. And this is why ISO/IEC 646 starts with 32 non–printable control characters such as SOH (Start of Heading), ACK (Acknowledge) or EOT (End of Transmission).
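A toy frame (not an actual protocol trace) gives an idea of what in-band signaling looks like : the control characters travel on the same channel as the text they delimit.

```python
# SOH, STX, ETX and EOT are ASCII/ISO 646 control characters used to frame
# a message; here they simply delimit a made-up header and body.
SOH, STX, ETX, EOT = b"\x01", b"\x02", b"\x03", b"\x04"
frame = SOH + b"TO:PARIS" + STX + b"BONJOUR" + ETX + EOT
print(" ".join(f"{b:02X}" for b in frame))
```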
Due to limited channel capacity, ISO/IEC 646 had to be tight and, since 32 positions were reserved for control characters, letters were restricted to the ISO basic Latin alphabet. In order to extend this basic set, ISO/IEC 646 also distinguishes between variant and invariant characters. Invariant characters correspond to the basic Latin alphabet and sit in the same positions in all versions of ISO/IEC 646, while 12 positions are reserved for regional or national variants. In addition, some characters such as ^ were included for writing diacritics and others for mathematical or programming use.
ISO/IEC 646, ASCII (and ECMA-6 in Europe) were designed in relation to one another. So ASCII is a kind of variant of ISO/IEC 646 and not the other way around as implied by the Wikipedia page.
IBM took part in the design of ISO/IEC 646. But, around the same time (1964), IBM began using EBCDIC, which was based on BCDIC which, in turn, was based on the Hollerith code. Due to Big Blue’s market share at the time, EBCDIC remained widely used in many countries until the 1980’s, and a number of localized variants were designed —including for Japan or the USSR—.
Within five years of its first publication, ISO/IEC 646 was extended with the ISO/IEC 2022 standard. ISO/IEC 2022 is a set of rules for designing 7-bit–compatible 8-bit CCS that allow for multi–byte encoding. Over a thirty-year period, more than 200 CCS were registered, such as the ISO/IEC 8859 family of character sets.
ISO/IEC 2022 is an early —if not the earliest— example of a « universal » CCS since it allows arbitrary switching between a wide array of CCS within a single stream by means of escape sequences.
But the ISO/IEC 646 and ISO/IEC 2022 standards did not prevent vendors from keeping on using their own CCS. And things got worse in the PC era. Somehow, this was the 1950’s all over again : a surge of new vendors appeared and each of them designed their own —in most cases 8-bit— CCS, albeit with some partial de facto « standardization », the basic structure of ISO/IEC 646 being fairly ubiquitous for Latin scripts. At the same time, several standards were also published, such as ISO/IEC 8859, to extend ISO/IEC 646 to a large number of scripts —Western European (Latin-1), Central European (Latin-2), Southern European (Latin-3),… as well as Cyrillic, Arabic, Greek, Hebrew, Thai,…—.
By the way, the first two blocks of the Unicode code chart —Basic Latin and Latin-1 Supplement— match Latin-1 which is, itself, an extension of ISO/IEC 646. And this is why UTF-8 is compatible with ASCII.
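This compatibility is easy to see by comparing the raw bytes (a quick, illustrative Python check) :

```python
# Code points below 128 have the same single-byte form in ASCII, Latin-1 and
# UTF-8; above 127, Latin-1 and UTF-8 diverge.
for ch in ("A", "é"):
    print(f"U+{ord(ch):04X}  latin-1: {ch.encode('latin-1').hex()}  "
          f"utf-8: {ch.encode('utf-8').hex()}")
```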
As stated before, from the 1970’s on, computers became typewriters. But writing text requires many more characters than what OSes provided at the time. This means that, in order to make up for the lack of both character sets and fonts, typesetting system designers started designing their own 7–8-bit CCS as well as font formats to meet their needs. For TEX, Knuth designed the OT1 —basic Latin alphabet—, OML —TEX math italic—, OMS —TEX math symbols—,… 7-bit CCS as well as the Computer Modern typeface. About the same time, Adobe Systems designed the PostScript page description language as well as several CCS such as the PostScript Standard Encoding, the PostScript Latin 1 Encoding —which is a superset of ISO/IEC 8859 Latin-1— or the PostScript Expert Encoding, along with several vector-based fonts —T1, T2,…—.
8-bit CCS were the most common in the West but also behind the so-called Iron Curtain. And, as in the West, things started with 5-bit codes which eventually grew to 8 bits in the 1970’s with, for example, KOI-8 —КОИ-8 : Код обмена информацией, Code for Information Interchange—, an extension of ISO/IEC 646 with Cyrillic characters located in the high part of the table, or DKOI —ДКОИ : Двоичный Код Обработки Информации, Binary Code for Information Processing—, an EBCDIC encoding for Russian Cyrillic.
8 bits actually proved enough to provide at least partial support for many alphabetic and syllabic scripts. But this was simply not the case for sinograms, which comprise several thousand characters. Sinograms originated in China circa 1200 BC and were later adopted in neighbouring countries such as Korea, Japan and Vietnam.
Early CCS for CJK scripts were 8-bit codes and included only a subset of characters, enough for basic communication, such as Japanese katakana —JIS X 0201— or Korean Hangul jamo —KS C 5601-1974—. But the lack of support for hànzì–kanji–hanja characters eventually led to the development of several multi–byte encodings where each code point is encoded with one or more bytes, not unlike UTF-8 and UTF-16.
One difference with 8-bit encodings is that characters are not represented in memory directly by their code points but by sequences of bytes, possibly of varying length, derived from their code points. For instance, JIS X 0208 —7 ビット及び 8 ビットの 2 バイト情報交換用符号化漢字集合, 7-bit and 8-bit double byte coded KANJI sets for information interchange— published in 1978 represents its codespace as a matrix and uses a pair of bytes called ku —区, row— and ten —点, cell— to index its 94^2 = 8836 elements. JIS X 0208 is ISO/IEC 2022 compliant and was registered in 1979. It uses several escape sequences starting with ESC 2/4 (ESC 2/4 4/0 for G0, ESC 2/4 2/9 4/0 for G1,…).
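A small sketch of the row–cell addressing, using Python’s iso2022_jp codec (it relies on the later ESC 2/4 4/2 designation for JIS X 0208-1983 rather than the 1978 one mentioned above, and the kuten position of 亜 is given as an example) :

```python
# JIS X 0208 addresses characters by row ("ku") and cell ("ten"); in the 7-bit
# ISO 2022 representation each byte is the row or cell number plus 0x20.
ku, ten = 16, 1                             # 亜 (U+4E9C): row 16, cell 1
body = bytes([ku + 0x20, ten + 0x20])       # 0x30 0x21
stream = b"\x1b$B" + body + b"\x1b(B"       # ESC $ B designates JIS X 0208-1983
print(stream.decode("iso2022_jp"))          # -> 亜
```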
CCCII —中文資訊交換碼 ‒ Chinese Character Code for Information Interchange— published in 1980 in Taiwan, went even further by using three bytes to index a three-dimensional array. CCCII is also ISO/IEC 2022 compliant but was never registered.
Another feature of these CCS is that they include several other scripts such as Latin, Cyrillic or Greek, as well as brackets, mathematical symbols or box–drawing characters.
Now, in the early 1990’s, many scripts were supported. But, because of the 8-bit legacy and a lack of coordination among other things, OSes had to handle hundreds of sometimes partially redundant CCS18. And most of them provided only partial support for at most two different scripts —CJK CCS being a major exception—. This prompted vendors and standardisation bodies to design « universal » CCS, that is to say CCS that could encompass every known script. Hence, the JTC 1/SC 2 joint technical committee of ISO and IEC began working on ISO/IEC 10646 and, about the same time, Xerox —later joined by Apple and other companies— began working on the Unicode. The two eventually joined forces in 1991, but this is yet another troubled story.
Another universal CCS is the TRON Multilingual Environment from the TRON —The Real-time Operating system Nucleus— project started by Professor Ken Sakamura of the University of Tokyo in 1984. TRON’s CCS design is very different from ISO/IEC 10646–Unicode and is akin to ISO/IEC 2022 : it is a stateful encoding that allows switching between several CCS —JIS X 0208, JIS X 0212 (JIS Supplementary Kanji), GB 2312 (Chinese), KS X 1001 (Korean),…—.
Of course, this only gives a rough sketch of what happened. Please refer to the slides for further details and other interesting CCS.
- There are exceptions, such as Hangul, which was designed from scratch and not shaped over the course of centuries by many speakers. [↩]
- Such as punctuation or symbols, which are scattered across different UC blocks. Or, for instance, the “é” and “è” letters, which are in a different block than the letter “e”. [↩]
- The regional indicator characters are used to encode ISO 3166-1 two-letter country codes. Each character represents a letter, and they can be combined in pairs to form a country code. For instance, the pair {U+1F1EB, U+1F1F7} stands for F and R and may be rendered as the French flag 🇫🇷 in many web browsers or simply as FR. [↩]
- Things were different for mainframes though. For instance, there have been 128-bit architectures at least since the mid 1970’s. [↩]
- Not including the two zeros, the two infinities, the 2^{23} - 2 signaling NaNs, the 2^{23} quiet NaNs and the 2^{24} - 2 subnormal numbers. [↩]
- And the entire World’s population with 64 bits integers since we actually need only ⌈ \log_2(7.9e9) ⌉ = \text{33 bits} — hence, 33 bits of entropy—. [↩]
- That’s 2^7=128 minus 34 control characters including space and delete. [↩]
- By the way, the Unicode was first designed as a 16-bit set. [↩]
- The Univac was used by the US Census Bureau not only for tabulating but also to compute complex survey weights. [↩]
- The Univac was also used to predict the result of the 1952 US presidential election for CBS. [↩]
- Therefore, merely editing a file larger than 1 MB was an issue. [↩]
- I’ve read that compiling a TEX file took more than one minute per page in the early 1980’s. [↩]
- This contrasts with Linux, which is fairly easygoing when it comes to encodings, as long as “/” is 0x2F and NUL is 0x00. [↩]
- Again, Linux handles all this a little more seamlessly through locale mechanisms. [↩]
- which finished the first Japanese computer in 1956 [↩]
- Just as it’s easier to find information on TEX than on Scribe or STI —Scientific Text, Inc— by the way. [↩]
- The first ticket reservation system was set up by Hitachi c. 1959. [↩]
- For instance, there are several IBM or Microsoft code pages for French, German, Japanese, Arabic… [↩]