
Unicode Properties and Regular Expressions Ⅱ : Unicode Support in Regular Expressions

This is the second part of the entry on Unicode Properties and Regular Expressions. The first part dealt with Unicode character properties. Now we’re ready to use properties in pattern matching. But first, a quick word of caution.

Regular Expression Standardization, or Lack Thereof

Regular expressions have been used for pattern matching since the 1960s. One important thing to note is that, despite their wide adoption, regular expressions are only partially standardized. Therefore, the syntax can change from one implementation to another, as can the behaviour —see examples below—.

Regular expressions were first implemented by Ken Thompson1 based on previous work by the American mathematician Stephen Cole Kleene. Following Alonzo Church, Kleene worked on the theoretical foundations of computing and formalized the notion of regular languages. A regular language is basically a language that can be transformed into a finite automaton and is the theoretical foundation of regular expressions2. For instance, the * quantifier is based on what is now known as the Kleene star, which is the set of all finite–length strings that can be generated by concatenating any elements of a set of strings V. Thompson later devised an algorithm to transform a regular expression into a special kind of finite automaton called an NFA (Nondeterministic Finite Automaton) and used it in Unix text processing tools such as ed. Hence the name grep, which is derived from the g/re/p command that does a G̲lobal search for a R̲egular E̲xpression and P̲rints matching lines.

Regexes were first standardized by the POSIX standard, but this applies to Unix utilities only. Other implementations later led to various extensions, as in the Tcl or Perl programming languages. Perl is basically a language built around a regex engine; it greatly helped expand the syntax and soon became a de facto standard. In turn, other implementations started to mimic Perl regex syntax, like PCRE (Perl Compatible Regular Expressions), a standalone library used by much software, including R through the perl=TRUE switch3. PCRE is —mostly— compatible with Perl regular expressions.

So, it’s hard to speak about regex as a whole as there are many flavors. Here, we’ll deal with the libraries available in R, namely TRE, PCRE and ICU, as well as a little bit of Perl.

Character Classes

But, despite many idiosyncrasies, there are fortunately many features common to most regex implementations. And, if you have already used regular expressions before, you must be familiar with character classes. Character classes are unordered sets of characters that act like shortcuts. For instance, in Perl, if you want to match any digit, instead of typing 0|1|2|3|4|5|6|7|8|9 or [0-9], you can use the backslashed metasymbol \d —or \D if you want to match anything but a digit.
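A minimal illustration in R (TRE by default, PCRE with perl=T; both behave the same on ASCII input) :

## \d matches a digit, \D anything but a digit
grepl("\\d", c("7", "x"))           ## TRUE FALSE
grepl("\\d", c("7", "x"), perl=T)   ## TRUE FALSE
grepl("\\D", c("7", "x"))           ## FALSE TRUE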

An obvious shortcoming of this class definition is that it will not match numbers in languages that use other symbols for digits. For instance, the following subset of the UnicodeData.txt file

numerals <- within( 
  ucd.udata[
    grep( "(NUMER(AL|IC))|(DIGIT)|(FRACTION)", ucd.udata$name, ignore.case = T)
    , c("cp", "name", "general_category", "numeric_type")
  ]
  , {char <- intToUtf8(cp, multiple = T)} 
)


has over 1,100 entries. Besides decimal digits, there are Arabic, Balinese or Kannada digits, Hangzhou numerals, cuneiform numerics, counting rod digits and dingbat digits. Most of them are base-10 but there are non-decimal radix systems, like Ethiopic, Mende Kikakui or Mayan numerals. In addition, real numbers can be represented as fractions, and digits can be circled. And, despite the fact that the standard claims to encode characters, not glyphs, some variations of decimal digits are not considered glyph variants by the standard and are separately encoded, such as mathematical bold, double–struck or sans–serif digits :

Mathematical bold : 𝟎 𝟏 𝟐 𝟑 𝟒 𝟓 𝟔 𝟕 𝟖 𝟗
Mathematical double–struck : 𝟘 𝟙 𝟚 𝟛 𝟜 𝟝 𝟞 𝟟 𝟠 𝟡
Mathematical sans–serif : 𝟢 𝟣 𝟤 𝟥 𝟦 𝟧 𝟨 𝟩 𝟪 𝟫
Mathematical sans–serif bold : 𝟬 𝟭 𝟮 𝟯 𝟰 𝟱 𝟲 𝟳 𝟴 𝟵
Mathematical monospace : 𝟶 𝟷 𝟸 𝟹 𝟺 𝟻 𝟼 𝟽 𝟾 𝟿
Source : UnicodeData.txt

With Perl, trying to match “𝟎” (MATHEMATICAL BOLD DIGIT ZERO) with \d

perl -ne 'print if /\d/' <<EOF
𝟎
EOF


prints nothing by default. But note the difference in output when Unicode is enabled through the -C flag :

perl -CSD -ne 'print if /\d/' <<EOF
𝟎
EOF

𝟎


Perl Unicode support started with version 5.6, with a backward–compatibility layer. So you need to use various switches to enable Unicode support in your scripts. See this page for more information about Unicode support in Perl regexes.


In R, trying to match “߀” or “𝟘” with \d

## NKO DIGIT ZERO
grepl("\\d","߀" )
grepl("\\d","߀", perl=T)  ## use PCRE
## MATHEMATICAL BOLD DIGIT ZERO
grepl("\\d","𝟘")
grepl("\\d","𝟘", perl=T) ## use PCRE


returns FALSE in every case.

Depending on the implementation, the same thing is likely to happen with \w, which matches word characters and, in Perl, is defined as [a-zA-Z0-9_] :

grepl("\\w", ch <- c("а", "ａ", "à", "α", "А", "ᴀ") )
grepl("\\w", ch, perl=T) ## use PCRE


TRE matches all characters while PCRE matches none. The reason why PCRE does not match “а” and “А” is that they are not Latin but Cyrillic characters :

sapply(ch, function(ch) u_char_name(utf8ToInt(ch)) )
  
                        а                                ａ 
"CYRILLIC SMALL LETTER A"  "FULLWIDTH LATIN SMALL LETTER A" 
                                à                                 α 
"LATIN SMALL LETTER A WITH GRAVE"        "GREEK SMALL LETTER ALPHA"
                                А                               ᴀ 
       "CYRILLIC CAPITAL LETTER A" "LATIN LETTER SMALL CAPITAL A" 


And, believe it or not, “ａ” is yet another Latin “a”. Also note that Unicode defines small capital letters.

One possible fix is to build our own character classes using brackets [<code-point-list/>] instead of the built–in character classes. For the set of common modern Greek characters, this would give something like [αΑάΆβΒγΓδΔεΕζΖηΗθΘιΙίΊϊΪκΚλΛμΜνΝξΞοΟόΌπΠρΡσΣςτΤυΥύΎϓφΦχΧψΨωΩώΏ]. But, to make things easier, we can also use ranges [<code-point-start/>-<code-point-end/>] in conjunction with the hex notation \x{<code-point-in-hex/>}, which is less tedious than having to remember keyboard shortcuts. This yields [\x{0370}-\x{03E1}], which is the range of the Greek and Coptic block —if you don’t mind matching archaic and polytonic Greek characters or Coptic characters and unassigned code points.
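For instance, matching against that range with PCRE in R would look something like this :

## Greek and Coptic block range
grepl("[\\x{0370}-\\x{03E1}]", c("α", "Ω", "a"), perl = TRUE)
## TRUE TRUE FALSE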

Going back to our previous digit–matching example, we can also use hex notation (\x{1D7D8}) to match “𝟘” and any kind of digit or numeric character. But since there are over 600 digit code points, it’s easier to assemble the RE pattern from the UnicodeData.txt file, filtering on the Nd General Category —decimal number— this time :

grepl(
  sprintf(
    "[%s]"
    , paste(
      sprintf("\\x{%x}", ucd.udata$cp[ucd.udata$General_Category=="Nd"]) 
      , collapse = ""
    )
  )
  , c( "߀" , "𝟘" )
  , perl=T
)


returns TRUE for both digits. Interestingly, perl=F raises an ‘Out of memory’ error from TRE.
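One way to keep such patterns short, and fast, is to collapse runs of consecutive code points into ranges. Here is a sketch, still assuming the ucd.udata data frame from the first part :

## sort decimal-digit code points and find runs of consecutive values
nd  <- sort(ucd.udata$cp[ucd.udata$General_Category == "Nd"])
run <- c(0, cumsum(diff(nd) != 1)) ## new run wherever the gap exceeds 1
rng <- sapply(split(nd, run), range)
## one \x{start}-\x{end} range per run
pat <- sprintf(
  "[%s]"
  , paste(sprintf("\\x{%x}-\\x{%x}", rng[1, ], rng[2, ]), collapse = "")
)
grepl(pat, c("߀", "𝟘"), perl = TRUE) ## TRUE TRUE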

We could also use the Numeric_Type property from the DerivedNumericType.txt file for the same purpose. But building ad hoc character classes really only works for small sets of characters like [\x{0370}-\x{03E1}]. As character sets and data grow, this becomes both tedious and computationally inefficient. Fortunately, there is a simpler and more efficient way to do it, thanks to Unicode.

The Unicode Specification for Regular Expressions

As stated in Tr#18, the Unicode Standard supplies guidelines for Unicode support in regular expressions. Like many programs, regex engines had to be adapted for Unicode support: they used to assume that characters were single bytes instead of variable–length 8– or 16–bit encodings and, among other things, caseless pattern matching became much more complicated. But the regular expression syntax needed to be modified as well, in order to use character semantics by providing syntax for sets of characters based on the Unicode character properties. Tr#18 thus introduced the \p{<prop/>} metasymbol and its set complement \P{<prop/>}.

Let’s try it on the earlier example of matching different kinds of zero characters, using the Nd property we saw before :

grepl(
  "\\p{Nd}"
  , c( "߀" , "𝟘" )
  , perl=T
)


returns TRUE for both characters. This only works with PCRE because the TRE library does not support the \p{<prop/>} syntax. Using the L derived property —any character interpreted as a letter—, we can also match the characters that \w failed to match :

grepl("\\p{L}", ch ,perl=T)

[1] TRUE TRUE TRUE TRUE TRUE TRUE


Support for General Category property matching is not the only extension to the RE syntax provided by the standard. Actually, any property is supported through the following grammar :

CHARACTER_CLASS := POSITIVE_SPEC | NEGATIVE_SPEC
ITEM            := POSITIVE_SPEC | NEGATIVE_SPEC
POSITIVE_SPEC   := ("\p{" PROP_SPEC "}") | ("[:" PROP_SPEC ":]")
NEGATIVE_SPEC   := ("\P{" PROP_SPEC "}") | ("[:^" PROP_SPEC ":]")
PROP_SPEC       := <binary_unicode_property>
PROP_SPEC       := <unicode_property> (":" | "=" | "≠" | "!=" ) PROP_VALUE
PROP_SPEC       := <script_or_category_property_value> ("|" <script_or_category_property_value>)*
PROP_VALUE      := <unicode_property_value> ("|" <unicode_property_value>)*


The basic syntax for using properties is thus \p{<propname/>(":" | "=" | "≠" | "!=" )<propvalue/>}. For instance, one can match Greek letters with \p{script=Greek} and the set complement with \p{script≠Greek}, which is the same as \P{script=Greek} or [^\p{script=Greek}]. Actually, in most cases, you don’t have to use the property name because most property values are unique. This is why you can safely use \p{Nd} instead of \p{Gc=Nd} or \p{General_Category=Nd}. But this is not always true. For instance, Greek is a property value shared by the Block and Script properties. Also note that for one–character property names, braces are optional (\pN).
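For instance, with PCRE in R (which, as we will see below, accepts the short form but not the name=value syntax) :

## short form : the Greek script value is unambiguous
grepl("\\p{Greek}", c("α", "a"), perl = TRUE) ## TRUE FALSE
grepl("\\P{Greek}", c("α", "a"), perl = TRUE) ## FALSE TRUE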

The Unicode Standard also extends regex set operations beyond those currently defined, such as :

  • union : […] which is the set of elements in A or B ( A \cup B = \{ x \;|\; x \in A \text{ or } x \in B \} )
  • complement : \W, \D, … which is the set of elements of the universe U not in A ( A^c = U \setminus A = \{ x \in U \;|\; x \notin A \} )

by adding

  • intersection : [<prop/>&&<prop/>] which is the set of elements belonging to both sets ( A \cap B = \{ x \;|\; x \in A \text{ and } x \in B \} )
  • difference : [<prop/>--<prop/>] which is the set of elements of A not in B ( A \setminus B = \{ x \in A \;|\; x \notin B \} )
  • symmetric difference : [<prop/>~~<prop/>] which is the set of elements belonging to one but not both of two given sets ( A \triangle B = \{ x \;|\; (x \in A) \oplus (x \in B) \} ). It is therefore the union of the two sets minus their intersection ( (A \cup B) \setminus (A \cap B) ), or the union of the difference of A with respect to B and the difference of B with respect to A ( (A \setminus B) \cup (B \setminus A) ). It also corresponds to the xor operation in Boolean algebra.

For example, one can match the set of letters except the Latin script with [\p{letter}--\p{Script=Latin}], the set of consonant letters with [a-z&&[^aeiuo]], or Thai digits with [\p{Nd}&&[\p{IsThai}]]. In addition, one can also combine operators. For instance, [\p{N}--[\p{Nd}--0-9]] matches the set of all non-decimal numbers, plus 0-9.

The standard defines several levels of conformance. What we just saw is part of Basic Unicode Support —level 1 conformance— which also includes

  • Simple Word Boundaries : since Unicode extends the set of what a word can be, implementations of the \b metacharacter should reflect this change. See below for more general support for word boundaries
  • Simple Loose Matches : case folding is actually a fairly complicated task. Therefore, an implementation should provide Unicode–compliant case folding when doing case-insensitive matching. See below for more general support for case conversion
  • Line Boundaries : an implementation should extend line–boundary testing beyond the usual LF, CR and CRLF —see the sketch after this list—
  • Supplementary Code Points : an implementation should handle the full range of Unicode code points
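As an illustration of extended line boundaries, the stringi package (introduced below) already implements the Unicode line boundary set :

## NEL (U+0085) and LINE SEPARATOR (U+2028) also count as line boundaries
stringi::stri_split_lines1("a\u0085b\u2028c")
## "a" "b" "c"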

According to the standard, level 1 is the minimally useful level of support for Unicode. All regex implementations dealing with Unicode should be at least at Level 1. Level 2 is recommended for implementations that need to handle additional Unicode features and includes

  • Canonical Equivalents : an implementation should enforce canonical equivalence between two strings when they are normalized to NFD. For instance, the Latin small letter o with horn and dot below “ợ” can also be encoded in at least four other ways : “o” + “◌̛” + “◌̣”, “o” + “◌̣” + “◌̛”, “ơ” + “◌̣” and “ọ” + “◌̛”.
    But this does not apply to NFC, meaning that “é” still doesn’t match “e” + “◌́”.
  • Extended Grapheme Clusters and Character Classes with Strings : as explained in a previous post, the Unicode definition of a character might differ from what a user perceives as a character, because what is perceived as a single character in a script may be a combination of Unicode characters, as in the Hangul script. Therefore, in order to match a grapheme cluster boundary, the standard provides the \b{g} syntax, as well as \X to match a single grapheme —see example below—.
    In addition, the standard defines Character Classes with Strings to help define character classes. For example, \q{ll} will treat “ll” as a single character, as is customary in Spanish. More generally, this is meant to help handle characters that consist of more than one code point, such as “🇫🇷” —Regional indicator symbol letter f + Regional indicator symbol letter r—.
  • Name Properties : this provides for matching characters by name with the following syntax —see the sketch after this list— :
    <codepoint> := "\N{" <character_name> "}"
    Example : \p{name=ZERO WIDTH NO-BREAK SPACE} matches “” (yes, that’s a character, whose code point is U+FEFF)
  • Wildcards in Property Values : this is actually regex within regex, in that this feature allows the use of a regular expression to pick out a set of characters based on whether their property values match that regular expression, with the following syntax :
    PROP_VALUE := <unicode_property_value> | "/" <regular_expression> "/" | "@" <unicode_property> "@"
    For instance, \p{name=/(SMILING|GRINNING) FACE/} matches any smiling or grinning emoji face
  • Full Properties : the list of properties an implementation should support —Name, General_Category, Script, …—
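For instance, matching by name already works with ICU through stringi (introduced below) :

## match the byte order mark by name
stringi::stri_detect_regex("\ufeff", "\\N{ZERO WIDTH NO-BREAK SPACE}")
## TRUE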


Like the standard itself, the Unicode regular expressions specification has been updated several times since 1999. For instance, Character Classes with Strings were added in 2020 and Tr#18 is currently still under revision to give better guidance on implementation, among other things.

Unicode Regex Specification Support

Now, those are the guidelines provided by the Unicode Standard and, as you might expect, support by regex engines varies greatly. For instance, PCRE only supports the General Category and Script properties and, by the way, the \p{Script=Greek} name=value syntax does not work —only \p{Greek} does—. In addition, PCRE defines its own matching classes, like Xan or Xps. See this page, section “Unicode character properties”, for more information.

But there is at least one R library that provides more thorough support of Unicode regular expressions : the stringi package, which is an R wrapper around the International Components for Unicode (ICU) libraries. ICU is an open–source set of C/C++ and Java libraries that aims at providing Unicode and internationalization support for software applications. It was originally developed by the Taligent company, then IBM, and was later incorporated into the Java SDK. ICU seems to closely track the Unicode Standard and covers a great deal more than regexes, providing for conversion, collation, formatting, normalization and case folding, as shown by the number of exported methods of the stringi package :

length(ls("package:stringi"))
  
250


Some builds of R also use ICU for collation. That is to say, where available, R by default makes use of ICU for, e.g., string comparison —see ?Comparison as well as R Installation and Administration for more information—. Use capabilities("ICU") to check whether your R installation was built with ICU.

stringi pattern matching functions are prefixed with stri_match*. In what follows, we’ll only use the stri_match function, which gives just the first match. But stringi also supports multiple matches.
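For instance, here is a minimal sketch of the difference :

library(stringi)
## first match only
stri_match("a1b2", regex = "\\d")[, 1]           ## "1"
## all matches
stri_match_all("a1b2", regex = "\\d")[[1]][, 1]  ## "1" "2"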

Let’s first try simple matching using properties :

ch <- "β"
pat <- c(
  "\\p{name=GREEK SMALL LETTER BETA}" ## get character by name
  , "\\p{Lowercase}" ## isLower ?
  , "\\p{East_Asian_Width=Narrow}" ## Why not ?
)
##
sapply(
  pat
  , function(pat){## try-it in case we use a non-supported property
    try( stringi::stri_match(ch,regex=pat),  silent = T ) 
  }
)

\\p{name=GREEK SMALL LETTER BETA}                    \\p{Lowercase}      
                              "β"                               "β"      
\\p{East_Asian_Width=Narrow} 
                          NA


And now with difference and intersection :

## every Greek letter but β —note : \p{Greek} means 
## Script=Greek, not Block=Greek
stri_match( 
  c("β", "ϡ" )
  , regex="[\\p{Greek}--\\N{GREEK SMALL LETTER BETA}]"
)[,1]

NA  "ϡ"

## consonant letters
stri_match(
  letters,regex="[a-z&&[^aeiuo]]"
)[,1]

NA  "b" "c" "d" NA  "f" "g" "h" NA  "j" "k" "l" "m" "n" NA  "p" "q" 
"r" "s" "t" NA  "v" "w" "x" "y" "z"

## non-decimal numbers + 0-9
stri_match( 
  ch <- c("0", "𝟘", "߀", "⁰", "①", "I", "Ⅰ", "𝋠", "๐")
  , regex="[\\p{N}--[\\p{Nd}--0-9]]" 
)[,1]

"0" NA  NA  "⁰" "①" NA  "Ⅰ" "𝋠" NA

## Thai digits
stri_match( 
  ch, regex="[\\p{Nd}&&\\p{IsThai}]" 
)[,1]

NA NA NA NA NA NA NA NA "๐"


Note that the non-decimal matches include Roman numerals —“Ⅰ”—. So it looks like digit not only means base-10 but also implies positional notation.
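We can check this by looking at the General Category of each character (assuming the Unicode package used in the first part) :

## Ⅰ is a letter number (Nl), ① an "other" number (No), 0 a decimal digit (Nd)
sapply(
  c("0", "Ⅰ", "①")
  , function(x) as.character(
      Unicode::u_char_property(Unicode::as.u_char(utf8ToInt(x)), "General_Category")
    )
)
##    0    Ⅰ    ①
## "Nd" "Nl" "No"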

Now, let’s try to match letters with diacritics —note that we need to convert to NFD first— :

##
eacc <- c("é", stri_trans_nfd("é")) ## "\u0065\u0301"
##
stri_match(
  eacc, regex="\\p{M}"
)[,1]

NA "́"

##
stri_match(
  eacc, regex="\\p{Dia}"
)[,1]

NA "́"


But ICU does not seem to be fully level 2 compliant as of writing. For instance, Wildcards in Property Values do not seem to be implemented :

##
stringi::stri_match(letters,regex="\\p{name=/(SMILING|GRINNING) FACE/}")
  
Error in stri_match_first_regex(str, regex, ...) : 
  Incorrect Unicode property. (U_REGEX_PROPERTY_SYNTAX,
context=`\p{name=/(SMILING|GRINNING) FACE/}`)


Nor does it enforce canonical equivalence between characters :

##
ohorndot <- c( 
    "\u06f\u31b\u323" ## NFD : Latin small letter o + Combining horn ◌̛ + Combining dot below ◌̣ (canonical decomposition)
  , "\u06f\u323\u31b" ## !NF : Latin small letter o + Combining dot below ◌̣ + Combining horn ◌̛
  , "\u1ee3"          ## NFC : Latin small letter o with horn and dot below “ợ”
  , "\u1a1\u323"      ## !NF : Latin small letter o with horn ơ + Combining dot below ◌̣
)
##
matrix(c( 
  stri_trans_isnfd(ohorndot)
  , stri_trans_isnfc(ohorndot)
  , stri_trans_isnfkd(ohorndot)
  , stri_trans_isnfkc(ohorndot)
), nrow=length(ohorndot), dimnames=list(NULL, c('nfd', 'nfc', 'nfkd', 'nfkc') ))


       nfd   nfc  nfkd  nfkc
[1,]  TRUE FALSE  TRUE FALSE
[2,] FALSE FALSE FALSE FALSE
[3,] FALSE  TRUE FALSE  TRUE
[4,] FALSE FALSE FALSE FALSE

##
stri_match(
  ohorndot
  ## enforce NFD
  , regex =  stri_trans_nfd("ợ")
)[,1]

"ợ" NA  NA  NA 

## canonical equivalence
stri_cmp_equiv(ohorndot[1], ohorndot[2:4] )
FALSE  TRUE  TRUE
## strength=1 allows for more permissive equivalences (default is 3L)
stri_cmp_equiv(ohorndot[1], ohorndot[2:4] , strength=1)
TRUE TRUE TRUE

stri_match() does not match the third string although “\u1ee3” is canonically equivalent to “\u06f\u31b\u323” (the second and fourth strings are not normalized but are also equivalent).

According to this page, UREGEX_CANON_EQ is the —not yet implemented— flag to enforce canonical equivalence in ICU.
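In the meantime, one workaround is to normalize both the strings and the pattern to the same form before matching :

## all four encodings share the same NFC form
stri_match(
  stri_trans_nfc(ohorndot)
  , regex = stri_trans_nfc("ợ")
)[,1]
## "ợ" "ợ" "ợ" "ợ"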

Also, note the difference in output between the two following regular expressions :

##
stri_match(
  ohorndot
  , regex =  "."
)[,1]

"o" "o" "ơ"

##
stri_match(
  ohorndot, regex =  "\\X"
)[,1]

"ợ" "ợ" "ợ"


. only matches a single code point while \X matches a whole grapheme cluster.
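The same goes for characters made of several code points, like the regional indicator pair we saw earlier :

## "🇫🇷" is two code points but a single grapheme cluster
stri_match("🇫🇷", regex = ".")[,1]   ## matches the first regional indicator only
stri_match("🇫🇷", regex = "\\X")[,1] ## "🇫🇷"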

Conclusion

In order to work with Unicode, regular expressions had to be updated as characters moved from 8-bit to multibyte encodings, and character classes needed to be redefined as the Unicode character set expanded. But Tr#18 goes way beyond that. It greatly improves the matching capabilities of regexes —even though support varies from one engine to another—. Through an extended syntax and properties, it helps regexes meet the expectations of users in any language, like having metacharacters really match any letter instead of just [A-Za-z], or really match word boundaries in any script where this makes sense. In addition, Tr#18 also extends character semantics and allows for more complex regular expressions. Of course, understanding the Unicode character property model requires some work. But this is for more advanced pattern matching and, in many circumstances, the General Category and Script properties are likely to be enough.

Also, we only used toy examples here, and ICU and the stringi package have much more to offer. But we’ll cover more lifelike use cases as well as other ICU functionalities in another post.

  1. Yes, The Ken Thompson again —some people are so good it hurts—
  2. Strictly speaking, modern regex engines have features that do not belong to regular languages, like backreferences. But somehow the name just stuck
  3. Otherwise, function calls default to the TRE library, which is POSIX compliant
