Unicode Properties and Regular Expressions

In the preceding entries of this series, we have mostly dealt with encoding issues, that is to say how the Unicode Standard attempts to solve the encoding problem and the problems caused by character encodings. In what follows, we’ll see that there is more to Unicode than just dealing with low–level trivia you would rather forget about in the first place. More specifically, Unicode can also help solve other kinds of issues, such as pattern matching in strings, as it greatly improves the capabilities of regular expressions. But, before that, we need to talk about character properties. Of course, you can skip this part and go directly to the Regular Expressions section.

Unicode character properties

There are many ways to define characters. There are letters, numbers in different bases, symbols and punctuation marks. Some characters are said to be ideographic or alphabetic and may be tagged as Latin, Greek, Cyrillic,…. In some scripts, letters can be lower, upper or title cased. Also, characters have names. To these commonly used classifications, the Unicode Standard adds some more, like the Canonical Combining Class (CCC) —the order assigned by the standard to characters that need to be combined— as well as other normalization related notions. This is why the Unicode Standard defines these and other semantic values by providing character properties.

Character properties are simply maps that bind a character to a value. They are a key component of the standard as they are required for interoperability and correct behavior in implementations, as well as for Unicode conformance. For instance, properties are crucial for character display and ensure that texts are legible. And, without properties, algorithms to change case or sort simply wouldn’t work.

As stated by the standard, the properties include the following :

  • Name
  • General Category : basic partition into letters, numbers, symbols, punctuation,…
  • Other important general characteristics : whitespace, dash, ideographic, alphabetic, noncharacter, deprecated,…
  • Display-related properties : bidirectional class, shaping, mirroring, width,…
  • Casing : upper, lower, title, folding—both simple and full
  • Numeric values and types
  • Script and Block
  • Normalization properties : decompositions, decomposition type, canonical combining class, composition exclusions,…
  • Age : version of the standard in which the code point was first designated
  • Boundaries : grapheme cluster, word, line, and sentence

Properties can be normative, informative, contributory, or provisional, meaning that some might change in the future while others won’t. Of course, not all properties apply to all characters. In addition, the interpretation of some properties —such as the case of a character— is independent of context, whereas the interpretation of other properties —such as directionality— applies to a character sequence as a whole, rather than to the individual characters that compose the sequence. See section 3.5 of the standard and Tr#23 for more information.

The Unicode Character Database

Thanks to properties, the standard provides a wealth of information about character use and meaning. Information regarding properties can be found in the standard and reports but the primary source is the Unicode Character Database. The UCD provides machine-readable character property tables for use in implementations of algorithms requiring semantic knowledge about the code points. The UCD is available from here and is documented in Tr#44.

In what follows we’ll go through the UCD to get a feel of what’s in there, using R data.frames available from this Github repository. But note that, once you’re acquainted with properties, there’s a much simpler way to get information about Unicode characters when using R, through the Unicode package, as demonstrated later.

Files in the UCD may use different formats. But many start with either a code point or a code point range, followed by one or more property names. The fields coming next vary greatly. Some contain comments and/or additional fields whose number may change according to code points. Fields beyond the property name are usually stored in a single column of the data.frame since they often differ from one line to another. Additional comment lines, like groupings or missing properties, are stored in the htxt attribute of the data.frame —see below for an example—.
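To make this layout concrete, here is a minimal sketch of how a simple UCD file like PropList.txt could be parsed into such a data.frame. The read.ucd name and the parsing details are mine, for illustration only : the data.frames of the Github repository are built differently and keep more information.

## a minimal, illustrative UCD file reader : code point (or range) ; property name
read.ucd <- function(path){
  lines <- readLines(path, encoding = "UTF-8")
  ## drop blank lines and full-line comments
  body <- lines[ !grepl("^\\s*(#|$)", lines) ]
  ## strip trailing comments
  body <- sub("\\s*#.*$", "", body)
  ## fields are separated by semicolons
  fields <- strsplit(body, "\\s*;\\s*")
  ## first field : a single code point ("0061") or a range ("0030..0039")
  rng <- strsplit(sapply(fields, `[`, 1), "\\.\\.")
  data.frame(
    cp.lo = strtoi(sapply(rng, `[`, 1), base = 16L)
    , cp.hi = strtoi(sapply(rng, function(r) r[length(r)]), base = 16L)
    , propname = sapply(fields, `[`, 2)
    , stringsAsFactors = FALSE
  )
}
## head( read.ucd("PropList.txt") )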

Property aliases

The UCD consists of more than sixty files. One place to look to get started is PropertyValueAliases.txt, from which the ucd.propValal data.frame is derived. This file contains a list of all the values a property can take, as well as their aliases, for most of the properties used in the UCD. As stated in Tr#44,

  • the first field of the file contains the abbreviated alias for a Unicode property
  • the second field specifies an abbreviated symbolic name for a value of that property
  • and the third field specifies the long symbolic name for that value of that property. These are the preferred aliases
  • additional aliases for some property values may be specified in the fourth or subsequent fields.
head(ucd.propValal)
  
                propname prop val                            valname                                   comments
1 ASCII_Hex_Digit (AHex) AHex   N  No                                ; F                                ; False
2 ASCII_Hex_Digit (AHex) AHex   Y  Yes                                ; T                                ; True
3              Age (age)  age 1.1                               V1_1                                           
4              Age (age)  age 2.0                               V2_0                                           
5              Age (age)  age 2.1                               V2_1                                           
6              Age (age)  age 3.0                               V3_0 


Let’s see what’s in there for us. First, we can see that PropertyValueAliases.txt defines 92 properties

propnames <- unique(ucd.propValal$propname)
length(propnames)
head(propnames)

[1] "ASCII_Hex_Digit (AHex)" "Age (age)" "Alphabetic (Alpha)"
"Bidi_Class (bc)" "Bidi_Control (Bidi_C)"  "Bidi_Mirrored (Bidi_M)"


Actually, there are more, as some properties related to, for instance, Bidi or CJK characters, are not here. Or, to be more specific, some are there but marked as missing.

## The "htxt" attribute of the data.frame is a vector of metadata that 
## stores the commented lines of the original file in addition to their position 
## in the data.frame —attr(propal.htxt,"hidx")— 
## as well as the length (number of line) of each comment  —attr(propal.htxt,"hlen")—. 
propal.htxt <- attr(ucd.propal,"htxt")
grep("missing", propal.htxt, value=T)

[1] "@missing: 0000..10FFFF; Bidi_Mirroring_Glyph; <none>"
"@missing: 0000..10FFFF; Bidi_Paired_Bracket; <none>"         
[3] "@missing: 0000..10FFFF; Bidi_Paired_Bracket_Type; n"
"@missing: 0000..10FFFF; Case_Folding; <code point>"          
[5] "@missing: 0000..10FFFF; Decomposition_Mapping; <code point>"
"@missing: 0000..10FFFF; Equivalent_Unified_Ideograph; <none>"
  


A more thorough list of property names and abbreviations —but without their corresponding values— can be found in the ucd.propval data.frame derived from the PropertyAliases.txt file. The main difference between the two is that the data.frame derived from PropertyValueAliases.txt has an extra propname column giving the long name of the Unicode property, extracted from the comments. Also, fields beyond the fourth are stored in the last column. This list is also available by calling the u_char_properties() function of the Unicode R package without arguments.
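For instance, calling the function with no argument, as mentioned above, should list the property names the package knows about :

## per the package documentation, no argument means "list the available properties"
head( Unicode::u_char_properties() )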

Some property names are fairly self explanatory. Alphabetic (Alpha) is a Boolean value for alphabetic characters, Block (Blk) gives the block name. Other Boolean values include Diacritic (Dia), Emoji (Emoji), as well as several casing related properties : Cased (Cased), Lowercase (Lower), Uppercase (Upper), Changes_When_Casefolded (CWCF), Changes_When_Titlecased (CWT),…. But some are not so obvious. For instance, as stated before, Age gives the version of the standard in which the code point was first designated. Bidi_Class (Bc), Bidi_Control (Bidi_C),… are related to the bidirectional algorithm, which determines the order of characters when rendering Unicode text —left–to–right or right–to–left—.

Canonical_Combining_Class is used with the Canonical Ordering Algorithm to determine which combining characters interact typographically and how the canonical ordering of sequences of combining characters takes place. This is a very important —albeit hazy— feature of Unicode when it comes to comparing composed characters, that is to say “characters”1 that are actually made of two or more characters. Since the standard assumes that combining characters may interact typographically, two different orderings are not automatically considered equivalent. This is why we need the notion of canonical equivalence, which states that two character sequences are canonical equivalents if their full canonical decompositions are identical. Otherwise, comparison of character sequences would fail.
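As an illustration, using the stringi package that we will meet again in the regular expressions section below, two sequences that differ only in the relative order of combining marks with distinct combining classes are canonically equivalent :

##
x <- "\u0065\u0301\u0323" ## e + combining acute (ccc 230) + combining dot below (ccc 220)
y <- "\u0065\u0323\u0301" ## same marks, other order
identical(x, y)          ## FALSE : different code point sequences
identical(
  stringi::stri_trans_nfd(x)
  , stringi::stri_trans_nfd(y)
)                        ## TRUE : identical canonical decompositions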

There are also a couple of other intriguing properties such as Pattern_Syntax (Pat_Syn), Pattern_White_Space (Pat_WS), ID_Start, XID_Start (XIDS), ID_Continue, XID_Continue (XIDC). Those are actually more related to programming. The definition of a computer language —like R, but regular expressions as well— involves the definition of a special kind of terminal symbols : the identifiers. Identifiers are used to bind names to values —eg variables— or to functions. But not all characters are accepted in identifiers. Most —if not all— languages enforce rules to restrict the set of possible identifiers. For instance, you wouldn’t want a non printable character or a punctuation mark in an identifier, especially if those are used for parsing statements, like “\n” or “;”. One common rule is that a variable name cannot start with a number —to avoid parsing ambiguities— and consists of letters and numbers, with perhaps some special characters like _. R also accepts “.” but this is a very rare choice and it can mess things up when interacting with other software like SQL servers.

In ye good ol’ days of ASCII —and, more generally, 8–bit encodings—, building those rules was rather easy since compilers had to deal with 128 or 256 characters only. But when a language supports Unicode, this raises the question of what can be accepted as an identifier or not. Fortunately, the Unicode Consortium provides guidance about which non-ASCII characters make sense in identifiers, and has issued Tr#31 —Unicode Identifier and Pattern Syntax—. The report provides an identifier definition using a BNF–like syntax

<Identifier> := <Start> <Continue>* (<Medial> <Continue>+)*
    <Start>      := XID_Start + some characters listed in table 2 of Tr#31
    <Continue>   := XID_Continue + some characters listed in table 2 of Tr#31
    <Medial>     := some characters listed in table 2 of Tr#31, with the constraint
                    that characters in the Medial class must not overlap with those
                    in either the Start or Continue classes


On the other hand, the Pattern_Syntax property defines a range of code points that are reserved for pattern syntax. It is an immutable property, so it will never change. Perl, for instance, promises that if regular expression metacharacters are ever added to the dozen already defined (\|()[{^$*+?.), only characters that have the Pattern_Syntax property will be used.

In R, the set of valid identifiers is simply the set of strings made of alphanumeric characters that start with a “letter” in the current locale —or so it seems :

##
α <-function()1L

##
α()
1

##
locale <-  Sys.getlocale(category = "LC_CTYPE") ## save locale

##
## Note : this is an OS-dependent function call, so it might not work 
## on your machine. This was tested on a Linux box.
## Check your system for available locales
##
Sys.setlocale(locale="C") ## change locale to "C"

##
α()
Error : unexpected input in "�"

##
α <-function()1L
Error : unexpected input in "�"

## string input
ab="αβ"
"\316\261\316\262"

##
Sys.setlocale(locale=locale) ## restore locale

## back to normal
α()
1

##
ab
"αb"


What’s happening here is that R‘s lexer uses the libc isalnum() and isalpha() functions to parse symbols, which match letters and numbers in the current locale —see the SymbolValue() function defined in the src/main/gram.y file of the R source—. Therefore, you can use any script for variable and function names as long as your locale allows for it. But, in any case, if you want to use anything else, like math symbols, you’ll need to use backticks : `¬` <- function(x)!x.

As demonstrated by the example, string input also depends on the locale —see the StringValue() function defined in the src/main/gram.y file—. And, more generally, many string manipulation functions also rely on the current locale. I guess this kind of explains the “funny” things happening under Windows when dealing with an extended character set. —Edit : it looks like Windows 10 now allows setting UTF-8 as the native encoding and that R has been updated accordingly. Unfortunately, this does not work out of the box yet, as explained in this blog post—.

Property files

Property definitions are scattered over more than 30 files of the UCD. Those files are listed in table 9 of Tr#44. Many of them define a single property or a small number of properties. Interesting files to get started with are UnicodeData.txt, PropList.txt, DerivedCoreProperties.txt, Scripts.txt, ScriptExtensions.txt and Emoji-data.txt, just to name a few.

UnicodeData.txt —ucd.udata— is the primary source of information about character properties such as Name, General_Category, Numeric_Type, etc. This is the file the Unicode package uses for its look–ups.

## we need to get the code point first
## otherwise we'll get the 0x0A character —that's line feed ("\n")—
u_char_info(utf8ToInt("a")) 
  
Code                     Name General_Category Canonical_Combining_Class 
1 U+0061 LATIN SMALL LETTER A               Ll                         0
Bidi_Class Decomposition Numeric_Value_Decimal_Digit Numeric_Value_Digit Numeric_Value Bidi_Mirrored
        L                                                                                         N
Unicode_1_Name ISO_Comment Simple_Uppercase_Mapping Simple_Lowercase_Mapping Simple_Titlecase_Mapping
1                                                0041                                              0041
  


returns the same thing as :

subset(ucd.udata, cp==utf8ToInt("a"))


By the way, this is where Python gets characters by name

"\N{GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA}"
  
'Ͷ'


And so does Julia when printing objects of Char —not String— type

"a"[1]
'a': ASCII/Unicode U+0061 (category Ll: Letter, lowercase)


Unlike R, Julia distinguishes between Char and String and its definition of Char matches the Unicode’s :

collect("é") ## NFD encoded é
2-element Vector{Char}:
 'e': ASCII/Unicode U+0065 (category Ll: Letter, lowercase)
 '́': Unicode U+0301 (category Mn: Mark, nonspacing)


But note that the ucd.udata data.frame has 33,797 rows, not 143,924, which is the number of allocated code points as of v13.0.0. In some cases, code points in the UnicodeData.txt file are actually range boundaries :

head(
  data.frame(
    cp.lo = ucd.udata$cp[idx.lo <- grep("First>", ucd.udata$name)]
   , cp.hi = ucd.udata$cp[idx.hi <- grep("Last>", ucd.udata$name)]
   , ucd.udata[idx.lo, c("name", "general_category")]
  )
)

      cp.lo cp.hi                                    name general_category
12140 13312 19903      <CJK Ideograph Extension A, First>               Lo
12206 19968 40956                  <CJK Ideograph, First>               Lo
15071 44032 55203                <Hangul Syllable, First>               Lo
15145 55296 56191 <Non Private Use High Surrogate, First>               Cs
15147 56192 56319     <Private Use High Surrogate, First>               Cs
15149 56320 57343                  <Low Surrogate, First>               Cs


Those ranges account for a total of 249,660 assigned characters, including the surrogate ranges and the Private Use planes :

sum(ucd.udata$cp[idx.hi ] - ucd.udata$cp[idx.lo ])


Information regarding those characters can be found in other files like PropList.txt, DerivedCoreProperties.txt or in the Unihan database.

One interesting thing about the UnicodeData.txt file is that it defines the General Category property which is very useful for pattern matching as we will later see. Here is the full list :

L   Letter
  Lu  Uppercase_Letter       an uppercase letter
  Ll  Lowercase_Letter       a lowercase letter
  Lt  Titlecase_Letter       a digraphic character, with first part uppercase
  LC  Cased_Letter           Lu | Ll | Lt
  Lm  Modifier_Letter        a modifier letter
  Lo  Other_Letter           other letters, including syllables and ideographs
M   Mark
  Mn  Nonspacing_Mark        a nonspacing combining mark (zero advance width)
  Mc  Spacing_Mark           a spacing combining mark (positive advance width)
  Me  Enclosing_Mark         an enclosing combining mark
N   Number
  Nd  Decimal_Number         a decimal digit
  Nl  Letter_Number          a letterlike numeric character
  No  Other_Number           a numeric character of other type
P   Punctuation
  Pc  Connector_Punctuation  a connecting punctuation mark, like a tie
  Pd  Dash_Punctuation       a dash or hyphen punctuation mark
  Ps  Open_Punctuation       an opening punctuation mark (of a pair)
  Pe  Close_Punctuation      a closing punctuation mark (of a pair)
  Pi  Initial_Punctuation    an initial quotation mark
  Pf  Final_Punctuation      a final quotation mark
  Po  Other_Punctuation      a punctuation mark of other type
S   Symbol
  Sm  Math_Symbol            a symbol of mathematical use
  Sc  Currency_Symbol        a currency sign
  Sk  Modifier_Symbol        a non-letterlike modifier symbol
  So  Other_Symbol           a symbol of other type
Z   Separator
  Zs  Space_Separator        a space character (of various non-zero widths)
  Zl  Line_Separator         U+2028 LINE SEPARATOR only
  Zp  Paragraph_Separator    U+2029 PARAGRAPH SEPARATOR only
C   Other
  Cc  Control                a C0 or C1 control code
  Cf  Format                 a format control character
  Cs  Surrogate              a surrogate code point
  Co  Private_Use            a private-use character
  Cn  Unassigned             a reserved unassigned code point or a noncharacter
Source : adapted from table 12 of Tr#44. The one–letter groupings —L, M, N, P, S, Z, C— and LC are derived General Categories obtained by combining two or more categories.
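Just to get a feel of the distribution, we can tally the rows of ucd.udata by General Category. Keep in mind that the large CJK, Hangul, surrogate and private-use ranges only count as two boundary rows each :

## a rough tally of the UnicodeData.txt rows by General Category
head( sort( table(ucd.udata$general_category), decreasing = TRUE ) )
## the derived L category is simply the union of the five letter categories
sum( ucd.udata$general_category %in% c("Lu", "Ll", "Lt", "Lm", "Lo") )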


PropList.txt defines contributory properties that are used in the generation of other properties derived from them. For instance, this is where the Pattern_Syntax property is defined. Since characters can have several properties, some character ranges show up a number of times, as shown by the result of the following code snippet :

##
## PropList self–join to show range overlap
##
unq <- unique(ucd.prop[c(1,2)])
## order by unique values
unq.code <- unq[,1]*as.numeric(max(unq[,1]))  + unq[,2]
unq <- unq[order(unq.code),]
## test for overlap
overlap.idx <- range.overlap(unq, ucd.prop[c(1,2)])
##
## count the unique code point ranges that are matched more than once,
## i.e. that overlap with at least one other entry of PropList.txt
frq <- table(overlap.idx[,1])
sum(frq > 1)


So there are 402 code point ranges that overlap one way or another. The code uses the following range.overlap function, which takes two sets of ranges as arguments and tests every pair of them for overlap.

##
## util function to match ranges
##
range.overlap <- function(x=NULL, y=NULL){
  ## coerce the input to a two-column matrix/data.frame of ranges;
  ## a plain vector is treated as a set of one-point ranges
  toRange <- function(x){
    if(is.matrix(x) || is.data.frame(x)){
      if(ncol(x) == 2) return(x)
    }
    else if(is.vector(x)){
      return(
        matrix(
          rep(x, times=2)
          , ncol=2
        )
      )
    }
    stop("x should be either a two-column matrix or data.frame or a vector")
  }
  x <- toRange(x)
  y <- if(is.null(y)) x else toRange(y)
  ## two ranges overlap iff x.lo <= y.hi and y.lo <= x.hi
  which(
    outer(x[,1], y[,2], "<=") & t(outer(y[,1], x[,2], "<="))
    , arr.ind = TRUE
  )
}


This uses the fact that, when boundaries are in order, (x_{lo} \le y_{hi}) \land (y_{lo} \le x_{hi}) is actually enough to test for overlap, instead of testing every possible case. Funny thing is, while I came up with it on my own —this is pretty standard stuff—, this function is almost identical to the Unicode:::u_char_match worker function of the Unicode package.
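A quick sanity check on a couple of toy ranges (made-up values, just to show the shape of the result) :

##
range.overlap(
  rbind(c(1, 5), c(10, 20))    ## x ranges
  , rbind(c(4, 12), c(30, 40)) ## y ranges
)
## both toy x ranges overlap the first y range only : pairs (1,1) and (2,1)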

Besides PropList.txt, the UCD provides DerivedCoreProperties.txt, which defines derived properties, that is properties that combine two or more properties. For instance, Lowercase is \text{Gc}==\text{Ll} \cup \text{Other\_Lowercase}==\text{T} while Uppercase is \text{Gc}==\text{Lu} \cup \text{Other\_Uppercase}==\text{T}, where Ll and Lu are the General_Category values for lower and upper case respectively. Other_Lowercase characters are characters such as :

head(
  within(
    ucd.prop[ucd.prop$propname=="Other_Lowercase",]
    , {char.hi=intToUtf8(cp.hi,multiple=T); char.lo=intToUtf8(cp.lo,multiple=T)}
  )[c('comments', 'char.lo', 'char.hi')]
)
                                                                              comments char.lo char.hi
1023                                               Lo       FEMININE ORDINAL INDICATOR       ª       ª
1024                                              Lo       MASCULINE ORDINAL INDICATOR       º       º
1025                         Lm   [9] MODIFIER LETTER SMALL H..MODIFIER LETTER SMALL Y       ʰ       ʸ
1026      Lm   [2] MODIFIER LETTER GLOTTAL STOP..MODIFIER LETTER REVERSED GLOTTAL STOP       ˀ       ˁ
1027 Lm   [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP       ˠ       ˤ
1028                                            Mn       COMBINING GREEK YPOGEGRAMMENI        ͅ                ͅ  


Another example is the XID_Start property we saw earlier, which is derived from the ID_Start property.

We can merge the PropList.txt and DerivedCoreProperties.txt files, using the range.overlap function again, to get the actual derivations :

  
##
## Merge PropList and DerivedCoreProperties
##
overlap.idx <- range.overlap(ucd.prop[c(1,2)], ucd.dervProp[c(1,2)])
##
prop.dprop <- data.frame(
  ucd.prop[overlap.idx[,1],]
  , ucd.dervProp[overlap.idx[,2],]
  , stringsAsFactors=F
)
##
head( unique(prop.dprop[,c("dpropname", "propname")]) )
     dpropname       propname
1269      Math Pattern_Syntax
181       Math     Other_Math
776       Math      Diacritic
24        Math           Dash
1171      Math Other_ID_Start
1148      Math    Soft_Dotted
  


The UCD also provides files containing Derived Extracted Properties, which list the characters corresponding to a single property extracted from other files like, for instance, CaseFolding.txt, which is derived from the UnicodeData.txt and SpecialCasing.txt files. The exact list of derived extracted files and the extracted properties they represent is given in Table 10 of Tr#44. Those files are used by the Unicode package to match characters against properties —see below—.

Other interesting files are Scripts.txt and ScriptExtensions.txt. A script is defined by Tr#24 as a collection of letters and other written signs that generally has the following attributes :

  • the written elements share a common graphological style and history
  • the collection is used (in full, or as a subset) to represent textual information in a writing system for one or more languages

Its values form a full partition of the codespace as every Unicode code point is assigned a single Script property value. This value is either the explicit value for a specific script, such as Cyrillic, or is one of the following three special values :

  • Inherited : for characters that may be used with multiple scripts and that inherit their script from the preceding base character
  • Common : for other characters that may be used with multiple scripts
  • Unknown : for unassigned, private-use, noncharacter, and surrogate code points

Of course, a script is not the same thing as a language, since characters may be used in different languages. In addition, many characters are shared between scripts, such as numbers, punctuation, symbols or formatting characters. This is why some characters are assigned the Common script value.
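For instance, assuming the column layout of ucd.propValal shown earlier, the list of known Script values can be read straight from the property value aliases :

## long names of the Script property values; "sc" is the abbreviated
## alias of the Script property in PropertyValueAliases.txt
head( subset(ucd.propValal, prop == "sc")$valname )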

Getting information about Unicode properties in R

Walking through the UCD gives a lot of insight into how the standard works as well as how algorithms are implemented. But, as mentioned before, there is a much more convenient way to get information about properties, using the Unicode package. To use the package, we first need to create a u_char object. And, as stated before, characters need to be converted to numeric first, since character inputs are interpreted as hex values

print(as.u_char("a"))==print(as.u_char(utf8ToInt("a")))


Here are some examples of use :

## Get the characters names
(charnames <- u_char_name( utf8ToInt("Unicode") ))
  
"LATIN CAPITAL LETTER U" "LATIN SMALL LETTER N"   "LATIN SMALL LETTER I"   
"LATIN SMALL LETTER C"   "LATIN SMALL LETTER O"   "LATIN SMALL LETTER D"   
"LATIN SMALL LETTER E"

## Get character from Name (exact match)
u_char_from_name(charnames)

"Unicode"
  
## Get every letter A in any script using a regular expression
head(
  data.frame(
    char = intToUtf8( cp <- u_char_from_name("\\bA$",type="grep") , multiple=T)
    , name = u_char_name(cp)
    , stringsAsFactors = F
  )
  , n=10
)

   char                            name
1     A          LATIN CAPITAL LETTER A
2     a            LATIN SMALL LETTER A
3     ɐ     LATIN SMALL LETTER TURNED A
4      ͣ  COMBINING LATIN SMALL LETTER A
5     А       CYRILLIC CAPITAL LETTER A
6     а         CYRILLIC SMALL LETTER A
7     ߊ                     NKO LETTER A
8      ࠡ SAMARITAN VOWEL SIGN OVERLONG A
9      ࠢ     SAMARITAN VOWEL SIGN LONG A
10     ࠣ          SAMARITAN VOWEL SIGN A
  
## Get character properties
u_char_properties(
  u_char_from_name("LATIN CAPITAL LETTER I WITH OGONEK"), c("Block","Script") 
)
                  Block Script
U+012E Latin Extended-A  Latin
  


The Unicode package only provides partial access to the UCD. This is likely to be enough in most cases though. But, in very specific cases, you may have to dig into the database.

In Python, the unicodedata module also provides character property lookup as well as string normalization and normalization testing.

Unicode properties for regular expressions

Now we’re ready to use properties in pattern matching. But, first, a quick word of caution.

Regular expressions standardization or lack thereof

Regular expressions have been used since the 1960s for pattern matching. One important thing to note is that, despite their wide adoption, regular expressions are only partially standardized. Therefore, syntax —as well as behavior— can change from one implementation to another —see examples below—.

Regular expressions were first implemented by Ken Thompson2, based on previous work by American mathematician Stephen Cole Kleene. Following Alonzo Church, Kleene worked on the theoretical foundations of computing and formalized the notion of regular languages. A regular language is basically a language that can be recognized by a finite automaton, and it is the theoretical foundation of regular expressions3. For instance, the * quantifier is based on what is now known as the Kleene star, which is the set of all finite–length strings that can be generated by concatenating any elements of a set of strings V. Thompson later devised an algorithm to transform a regular expression into a nondeterministic finite automaton (NFA) and used it in Unix text processing tools such as ed. Hence the name grep, which is derived from the g/re/p command that does a G̲lobal search for a R̲egular E̲xpression and P̲rints matching lines.

Regexes were first standardized by POSIX in the late 1980s. But this applies to Unix utilities only. Other implementations later led to various extensions, like in the Tcl or Perl programming languages. Perl is basically a language built around a regex engine; it greatly helped expand the syntax and soon became a de facto standard. In turn, other implementations then started to mimic the Perl regex syntax, like PCRE (Perl Compatible Regular Expressions), a standalone library used by many programs, including R through the perl=TRUE switch4. PCRE is —mostly— compatible with Perl regular expressions.

So, it’s hard to speak about regexes as a whole, as there are many flavors. Here, we’ll deal with the libraries available in R, namely TRE, PCRE and ICU, as well as a little bit of Perl.
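As a quick orientation, here is how each engine is reached from R. This is only a sketch and assumes a UTF-8 locale and the stringi package installed :

grepl("[[:alpha:]]", "é")                  ## TRE, the POSIX-like default engine
grepl("\\p{L}", "é", perl = TRUE)          ## PCRE, selected with perl=TRUE
stringi::stri_detect_regex("é", "\\p{L}")  ## ICU, through the stringi package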

Character classes

But, despite many idiosyncrasies, there are fortunately many features common to most regex implementations. And, if you have already used regular expressions before, you must be familiar with character classes. Character classes are unordered sets of characters that act like shortcuts. For instance, in Perl, if you want to match any digit, instead of typing 0|1|2|3|4|5|6|7|8|9 or [0-9], you can use the backslashed metasymbol \d, or \D if you want to match anything but a digit.

An obvious shortcoming of this class definition is that it will not match numbers in languages that use other symbols for digits. For instance, the following subset of the UnicodeData.txt file

  numerals <- within( 
  ucd.udata[
    grep( "(NUMER(AL|IC))|(DIGIT)|(FRACTION)", ucd.udata$name, ignore.case = T)
    ,c("cp", "name", "general_category","numeric_type.")
  ]
  , {char <- intToUtf8(cp,multiple = T)} 
)


has over 1,100 entries. Besides decimal digits, there are Arabic, Balinese or Kannada digits, Hangzhou numerals, cuneiform numerics, counting rod digits, dingbat digits. Most of them are base 10, but there are non-decimal radix systems, like Ethiopic, Mende Kikakui or Mayan numerals. In addition, real numbers can be represented as fractions, or digits can be circled. And, despite the fact that the standard claims to encode characters, not glyphs, some variations of decimal digits are not considered glyph variants by the standard and are separately encoded, such as the mathematical bold, double–struck or sans–serif digits :

Mathematical bold            : 𝟎 𝟏 𝟐 𝟑 𝟒 𝟓 𝟔 𝟕 𝟖 𝟗
Mathematical double–struck   : 𝟘 𝟙 𝟚 𝟛 𝟜 𝟝 𝟞 𝟟 𝟠 𝟡
Mathematical sans–serif      : 𝟢 𝟣 𝟤 𝟥 𝟦 𝟧 𝟨 𝟩 𝟪 𝟫
Mathematical sans–serif bold : 𝟬 𝟭 𝟮 𝟯 𝟰 𝟱 𝟲 𝟳 𝟴 𝟵
Mathematical monospace       : 𝟶 𝟷 𝟸 𝟹 𝟺 𝟻 𝟼 𝟽 𝟾 𝟿
Source : UnicodeData.txt

With Perl, trying to match “𝟎” with \d

perl -ne 'print if /\d/' <<EOF
𝟎
EOF


prints nothing by default. But note the difference in output when enabling Unicode through the -C flag :

perl -CSD -ne 'print if /\d/' <<EOF
𝟎
EOF

𝟎


Perl Unicode support started with version 5.6, with a backward compatibility layer. So you need to use various switches to enable Unicode support in your scripts. See this page for more information about Unicode support in Perl regexes.


In R, trying to match “߀” or “𝟘” with \d

## NKO DIGIT ZERO
grepl("\\d","߀" )
grepl("\\d","߀", perl=T)  ## use PCRE
## MATHEMATICAL BOLD DIGIT ZERO
grepl("\\d","𝟘")
grepl("\\d","𝟘", perl=T) ## use PCRE


returns FALSE in every case.

Depending on the implementation, the same thing is likely to happen with \w, which matches word characters and, in Perl, is defined as [a-zA-Z0-9_]

grepl("\\w", ch <- c("а", "a", "à", "α", "А", "ᴀ") )
grepl("\\w", ch ,perl=T) ## use PCRE


TRE matches all characters while PCRE matches none. The reason why PCRE does not match “а” and “А” is that they are not Latin but Cyrillic characters :

sapply(ch, function(ch) u_char_name(utf8ToInt(ch)) )
  
                        а                                a  
"CYRILLIC SMALL LETTER A"  "FULLWIDTH LATIN SMALL LETTER A" 
                                à                                 α 
"LATIN SMALL LETTER A WITH GRAVE"        "GREEK SMALL LETTER ALPHA"
                                А                               ᴀ 
       "CYRILLIC CAPITAL LETTER A" "LATIN LETTER SMALL CAPITAL A" 


And, believe it or not, “a” is yet another Latin “a”, a fullwidth one this time. Also note that Unicode defines small capital letters.

One possible fix is to build our own character classes using brackets [<code-point-list/>] instead of the built–in character classes. For the set of common modern Greek characters, this would give something like [αΑάΆβΒγΓδΔεΕζΖηΗθΘιΙίΊϊΪκΚλΛμΜνΝξΞοΟόΌπΠρΡσΣςτΤυΥύΎϓφΦχΧψΨωΩώΏ]. But, to make things easier, we can also use ranges [<code-point-start/>-<code-point-end/>] in conjunction with the hex notation \x{<code-point-in-hex/>}, which is less tedious than having to remember keyboard shortcuts. This yields [\x{0370}-\x{03E1}], which covers most of the Greek and Coptic block, if you don’t mind also matching archaic Greek letters and a handful of unassigned code points.
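For instance, a quick check of that range against a few characters with PCRE, which also illustrates the caveat about archaic letters :

## GREEK SMALL LETTER ALPHA, GREEK LETTER ARCHAIC KOPPA, LATIN SMALL LETTER A
grepl("[\\x{0370}-\\x{03E1}]", c("α", "Ϙ", "a"), perl = TRUE)
## TRUE TRUE FALSE : the archaic koppa falls in the range too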

Going back to our previous digit matching example, we can also use the hex notation (\x{1D7D8}) to match “𝟘” and any kind of digit or numeric character. But since there are over 600 digit code points, it’s easier to assemble the regex pattern from the UnicodeData.txt file, filtering on the Nd General Category —decimal number— this time :

grepl(
  sprintf(
    "[%s]"
    , paste(
      sprintf("\\x{%x}", ucd.udata$cp[ucd.udata$General_Category=="Nd"]) 
      , collapse = ""
    )
  )
  , c( "߀" , "𝟘" )
  , perl=T
)


returns TRUE for both digits. Interestingly, perl=F raises an ‘Out of memory’ error from TRE.

Of course, we could also use the Numeric_Type property from DerivedNumericType.txt to assemble a range–based regex and make it faster. Building an ad hoc character class really works for small sets of characters like [\x{0370}-\x{03E1}]. But as character sets and data grow, this becomes both tedious and computationally inefficient. Fortunately, there is a simpler and more efficient way to do it, thanks to Unicode.
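Here is a minimal sketch of that range-based idea, collapsing the Nd code points of ucd.udata into contiguous ranges instead of listing them one by one, before we move on to the simpler way :

## collapse consecutive Nd code points into ranges
nd  <- sort( ucd.udata$cp[ ucd.udata$general_category == "Nd" ] )
brk <- c( 0, which( diff(nd) != 1 ), length(nd) )
rng <- data.frame( lo = nd[ head(brk, -1) + 1 ], hi = nd[ brk[-1] ] )
## build a much shorter class made of ranges rather than single code points
pat <- sprintf(
  "[%s]"
  , paste( sprintf("\\x{%x}-\\x{%x}", rng$lo, rng$hi), collapse = "" )
)
grepl( pat, c("߀", "𝟘"), perl = TRUE ) ## TRUE for both, as with the class above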

The Unicode specification for regular expressions

As stated in Tr#18, the Unicode Standard supplies guidelines for Unicode support in regular expressions. Like many programs, regex engines had to be adapted for Unicode support : they used to assume that characters were single bytes rather than variable–length 8– or 16–bit encodings and, among other things, caseless pattern matching became much more complicated. But the regular expression syntax needed to be modified as well, in order to use character semantics in regular expressions by providing syntax for sets of characters based on the Unicode character properties. Tr#18 thus introduced the \p{<prop/>} metasymbol and its set complement \P{<prop/>}.

Let’s try it on the earlier example of matching different kinds of zero characters, using the Nd property we saw before :

grepl(
  "\\p{Nd}"
  , c( "߀" , "𝟘" )
  , perl=T
)


returns TRUE for both characters. This only works with PCRE because the TRE library does not support the \p{<prop/>} syntax. Using the L derived property —any character interpreted as a letter—, we can also match characters that \w failed to match :

grepl("\\p{L}", ch ,perl=T)

[1] TRUE TRUE TRUE TRUE TRUE TRUE


Support for General Category property matching is not the only extension to the RE syntax provided by the standard. Actually, any property is supported through the following grammar.

CHARACTER_CLASS := POSITIVE_SPEC | NEGATIVE_SPEC
ITEM            := POSITIVE_SPEC | NEGATIVE_SPEC
POSITIVE_SPEC   := ("\p{" PROP_SPEC "}") | ("[:" PROP_SPEC ":]")
NEGATIVE_SPEC   := ("\P{" PROP_SPEC "}") | ("[:^" PROP_SPEC ":]")
PROP_SPEC       := <binary_unicode_property>
PROP_SPEC       := <unicode_property> (":" | "=" | "≠" | "!=" ) PROP_VALUE
PROP_SPEC       := <script_or_category_property_value> ("|" <script_or_category_property_value>)*
PROP_VALUE      := <unicode_property_value> ("|" <unicode_property_value>)*


The basic syntax for using properties is thus \p{<propname/>(":" | "=" | "≠" | "!=" )<propvalue/>}. For instance, one can match Greek letters with \p{script=Greek} and the set complement with \p{script≠Greek}, which is the same as \P{script=Greek} or [^\p{script=Greek}]. Actually, in most cases, you don’t have to use the property name because most property values are unique. This is why you can safely use \p{Nd} instead of \p{Gc=Nd} or \p{General_Category=Nd}. But this is not always true. For instance, Greek is a property value shared by the Block and Script properties. Also note that, for one–character property names, braces are optional (\pN).

The Unicode Standard also extends regex set operations beyond those currently defined such as :

  • union : […] which is the set of elements in A or B ( A \cup B = \{ x \;|\; x \in A \, or \, x \in B \} )
  • and complement : \W, \D,… which is the set of elements of the universe U not in A ( A^c = U - A = \{ x \in U \;|\; x \notin A \} )

by adding

  • intersection : [<prop/>&&<prop/>] which is the set of elements belonging to both sets ( A \cap B = \{ x \;|\; x \in A \, and \, x \in B \} )
  • difference : [<prop/>--<prop/>] which is the set of elements of A not in B ( A \setminus B = \{ x \in A \;|\; x \notin B \} )
  • symmetric difference : [<prop/>~~<prop/>] which is the set of elements belonging to one but not both of two given sets ( A \triangle B = \{ x \;|\; (x \in A) \oplus (x \in B) \} ). It is therefore the union of the two sets minus their intersection ( (A \cup B) \setminus (A \cap B) ) or the union of the differences of A with respect to B and B with respect to A ( (A \setminus B) \cup (B \setminus A) ). It also corresponds to the xor operation in Boolean algebra.

For example, one can match the set of letters except the Latin script [\p{letter}--\p{Script=Latin}], the set of consonant letters [a-z&&[^aeiuo]], or Thai digits [\p{Nd}&&[\p{IsThai}]]. In addition, one can also combine operators. For instance, [\p{N}--[\p{Nd}--0-9]] matches the set of all non-decimal numbers, plus 0-9.

The standard defines several levels of conformance. What we just saw is Basic Unicode Support level 1 conformance which also includes

  • Simple Word Boundaries : since Unicode extends the set of what a word can be, the implementation of the \b metacharacter should reflect this change. See below for more general support for word boundaries
  • Simple Loose Matches : case folding is actually a fairly complicated task. Therefore, an implementation should provide Unicode compliant case folding when doing case-insensitive matching. See below for more general support for case conversion
  • Line Boundaries : an implementation should extend line boundaries testing beyond the usual LF, CR and CRLF
  • Supplementary Code Points : an implementation should handle the full range of Unicode code points

According to the standard, level 1 is the minimally useful level of support for Unicode. All regex implementations dealing with Unicode should be at least at Level 1. Level 2 is recommended for implementations that need to handle additional Unicode features and includes

  • Canonical Equivalents : implementation should enforce canonical equivalence between two characters when normalized to NFD. For instance, the letter Latin small letter o with horn and dot below “ợ” can also be encoded in at least four other ways : “o” + “◌̛” + ” ◌̣”, “o” + “◌̣” +” ◌̛”, “ơ” + ” ◌̣” and “ọ” + “◌̛”.
    But this does not apply to NFC meaning that “é” still doesn’t match “e” + “◌́ “.
  • Extended Grapheme Clusters and Character Classes with Strings : as explained in a previous post, the Unicode definition of a character might be different from what a user perceives as a character, because what is perceived as a character in a script may be a combination of Unicode characters, as in the Hangul script. Therefore, in order to match grapheme cluster boundaries, the standard provides the \b{g} syntax, as well as \X to match a single grapheme —see example below—.
    In addition, the standard defines Character Classes with Strings to help define character classes. For example, \q{ll} will treat “ll” as a single character which is customary in Spanish. More generally, this is meant to help handling of character that consist of more than one code point such as “🇫🇷” —Regional indicator symbol letter f + Regional indicator symbol letter r—.
  • Name Properties : this provides for matching characters by name along the following syntax,
    <codepoint> := "\N{" <character_name> "}"
    Example : \p{name=ZERO WIDTH NO-BREAK SPACE} matches “” (yes, that’s a character whose code point is U+FEFF)
  • Wildcards in Property Values : this is actually regex within regex in that this feature allows the use of a regular expression to pick out a set of characters based on whether the property values match the regular expression using the following syntax,
    PROP_VALUE := <literal/> | "/" <regex/> "/" | "@" <property/> "@"
    For instance, this matches any smiling or grinning emoji face :\p{name=/(SMILING|GRINNING) FACE/}
  • Full Properties : this is the list of properties an implementation should support —Name, General_Category,Script,…—

Like the standard, the Unicode regular expressions specification has been updated several times since 1999. For instance, Character Classes with strings were added in 2020 and Tr#18 is currently still under revision to give better guidance on implementation among other things.

Unicode regex specification support

Now, those are the guidelines provided by the Unicode Standard and, as you might expect, support by regex engines varies greatly. For instance, PCRE only supports the General Category and Script properties and, by the way, the \p{Script=Greek} syntax does not work; you have to write \p{Greek}. In addition, PCRE defines its own syntax for matching, like Xan or Xps. See this page, section Unicode character properties, for more information.

But there is at least one R library that provides more thorough support for Unicode regular expressions : the stringi package, an R wrapper around the International Components for Unicode (ICU) libraries. ICU is an open–source set of C/C++ and Java libraries that aims at providing Unicode and internationalization support for software applications. It was originally developed by the Taligent company, then IBM, and was later incorporated into the Java SDK. ICU seems to closely track the Unicode Standard and covers a great deal more than regexes : it provides conversion, collation, formatting, normalization or case folding, as shown by the number of exported functions of the stringi package :

length(ls("package:stringi"))
  
250


Some builds of R also use ICU for collation. That is to say, where available, R by default makes use of ICU for, e.g., string comparison —see ?Comparison as well as R Installation and Administration for more information—. Use capabilities("ICU") to check whether your R installation was built with ICU.

stringi pattern matching functions are prefixed with stri_match*. In what follows, we’ll only use the stri_match function which only gives the first match. But stringi also supports multiple matches.
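For the record, here is a minimal example of an *_all variant, which returns every match instead of just the first one :

## every Unicode "number" in the string, not just the first one;
## the result is a list with one matrix per input string
stringi::stri_match_all_regex("α1β②", "\\p{N}")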

Let’s first try simple matching using properties :

ch <- "β"
pat <- c(
  "\\p{name=GREEK SMALL LETTER BETA}" ## get character by name
  , "\\p{Lowercase}" ## isLower ?
  , "\\p{East_Asian_Width=Narrow}" ## Why not ?
)
##
sapply(
  pat
  , function(pat){## try-it in case we use a non-supported property
    try( stringi::stri_match(ch,regex=pat),  silent = T ) 
  }
)

\\p{name=GREEK SMALL LETTER BETA}                    \\p{Lowercase}      
                              "β"                               "β"      
\\p{East_Asian_Width=Narrow} 
                          NA


And now with difference and intersection :

## every Greek letters but β —Note : \p{Greek} means 
## Script=Greek, not Block=Greek
stri_match( 
  c(" β", "ϡ" )
  , regex="[\\p{Greek}--\\N{GREEK SMALL LETTER BETA}]"
)[,1]

NA  "ϡ"

## consonant letters
stri_match(
  letters,regex="[a-z&&[^aeiuo]]"
)[,1]

NA  "b" "c" "d" NA  "f" "g" "h" NA  "j" "k" "l" "m" "n" NA  "p" "q" 
"r" "s" "t" NA "v" "w" "x" "y" "z" ## non-digital numbers + 0-9 stri_match( ch<-c("0", "𝟘","߀" , "⁰", "①", "I", "Ⅰ", "𝋠", "๐") , regex="[\\p{N}--[\\p{Nd}--0-9]]" )[,1] "0" NA NA "⁰" "①" NA "Ⅰ" "𝋠" NA ## Thai digits stri_match( ch, regex="[\\p{Nd}&&\\p{IsThai}]" )[,1] NA NA NA NA NA NA NA NA "๐"


Note that the non–decimal number matches include Roman numerals —”Ⅰ”—. So it looks like digit not only means base 10 but also implies positional notation.

Now, let’s try to match diacritical letters —note that we need to convert to NFD first— :

##
eacc <- c("é", stri_trans_nfd("é")) ## "\u0065\u0301"
##
stri_match(
  eacc, regex="\\p{M}"
)[,1]

NA "́"

##
stri_match(
  eacc, regex="\\p{Dia}"
)[,1]

NA "́"


But ICU does not seem to be fully level 2 compliant as of this writing. For instance, wildcards in property values do not seem to be implemented :

##
stringi::stri_match(letters,regex="\\p{name=/(SMILING|GRINNING) FACE/}")
  
Error in stri_match_first_regex(str, regex, ...) : 
  Incorrect Unicode property. (U_REGEX_PROPERTY_SYNTAX,
context=`\p{name=/(SMILING|GRINNING) FACE/}`)


Nor does it enforce canonical equivalence between characters :

## Latin small letter o with horn and dot below "ợ"
ohorndot <- c( 
    "\u06f\u31b\u323" ## Canonical Decomposition (NFD)
  , "\u06f\u323\u31b" ## !NF
  , "\u1ee3"          ## NFC
  , "\u1a1\u323"      ## !NF: Latin small letter o with horn ơ + Combining dot below ◌̣
)
##
matrix(c( 
  stri_trans_isnfd(ohorndot)
  , stri_trans_isnfc(ohorndot)
  , stri_trans_isnfkd(ohorndot)
  , stri_trans_isnfkc(ohorndot)
), nrow=length(ohorndot))
      [,1]  [,2]  [,3]  [,4]
[1,]  TRUE FALSE  TRUE FALSE
[2,] FALSE FALSE FALSE FALSE
[3,] FALSE  TRUE FALSE  TRUE
[4,] FALSE FALSE FALSE FALSE
##
stri_match(
  ohorndot
  ## enforce NFD
  , regex =  stri_trans_nfd("ợ")
)[,1]
"ợ" NA  NA  NA 


According to this page, UREGEX_CANON_EQ is the —not yet implemented— flag to enforce canonical equivalence in ICU.
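In the meantime, a common workaround is to normalize both the subject strings and the pattern to the same normalization form before matching. A sketch using the vector defined above :

##
stri_match(
  stri_trans_nfd(ohorndot)        ## normalize the subjects…
  , regex = stri_trans_nfd("ợ")   ## …and the pattern to the same form (NFD)
)[,1]
## every variant now matches, since all four encodings share the same NFD decomposition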

Also, note the difference in output between the two following regular expressions

##
stri_match(
  ohorndot
  , regex =  "."
)[,1]

"o" "o" "ơ"

##
stri_match(
  ohorndot, regex =  "\\X"
)[,1]

"ợ" "ợ" "ợ"


. only matches a single character while \X matches a whole grapheme cluster.
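The same distinction shows up with multi–code point “characters” like the regional indicator flags mentioned earlier, assuming an ICU version that implements extended grapheme cluster rules :

##
stri_match("🇫🇷", regex = ".")[,1]    ## a single code point : the first regional indicator only
stri_match("🇫🇷", regex = "\\X")[,1]  ## the whole grapheme cluster : both regional indicators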

Fazit

In order to work with Unicode, regular expressions had to be updated as characters moved from 8–bit to multibyte encodings, and character classes needed to be redefined as the Unicode character set expanded. But Tr#18 goes way beyond that. It greatly improves the matching capabilities of regexes —even though support varies from one engine to another—. Through an extended syntax and properties, it helps regexes meet the expectations of users in any language, for instance by having metacharacters really match any letter instead of just [A-Z], or really match word boundaries in any script where this makes sense. In addition, Tr#18 also extends character semantics and allows for more complex regular expressions. Of course, understanding the Unicode character property model requires some work. But this is for more advanced pattern matching and, in many circumstances, the General Category and Script properties are likely to be enough.

Also, we only used Acme Widget examples, and ICU and the stringi package have much more to offer. We’ll cover more realistic use cases as well as other ICU functionalities in another post.

  1. Or, more accurately, what a user perceives as a character, see Regular Expressions section below []
  2. Yes, The Ken Thompson again —some people are so good it hurts— []
  3. Strictly speaking, modern regex engines have features, like backreferences, that go beyond what a regular language can express. But somehow the name just stuck []
  4. otherwise, function calls default to the TRE library which is POSIX compliant []
