Word tokenization

Contents

Slide 2

Text Normalization

Every NLP task needs to do text normalization:
1. Segmenting/tokenizing words in running text
2. Normalizing word formats
3. Segmenting sentences in running text

Why do we need to solve tasks 1-3?

Slide 3

How many words?

I do uh main- mainly business data processing
Fragments ("main-") and filled pauses ("uh")
Seuss’s cat in the hat is different from other cats!
Lemma: same stem, part of speech, rough word sense
cat and cats = same lemma
Wordform: the full inflected surface form
cat and cats = different wordforms
Slide 4

Рыбак рыбака видит издалека. (Russian proverb: "A fisherman sees a fisherman from afar.")
Рыбак and рыбака are the same lemma but different wordforms.

What is the difference between a lemma and a wordform?

Slide 5

How many words?

they lay back on the San Francisco grass and looked at the stars and their
Type: an element of the vocabulary.
Token: an instance of that type in running text.
How many?
15 tokens (or 14)
13 types (or 12) (or 11?)
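The counts above can be checked with a short Python sketch (whitespace tokenization only; the "(or 14)" reading would treat "San Francisco" as a single token):

```python
# Count tokens (instances) and types (distinct vocabulary items)
# for the example sentence from the slide.
sentence = "they lay back on the San Francisco grass and looked at the stars and their"

tokens = sentence.split()   # whitespace tokenization
types = set(tokens)         # each distinct wordform is one type

print(len(tokens))          # 15 tokens
print(len(types))           # 13 types ("the" and "and" each occur twice)
```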

How does a type differ from a token?

Slide 6

Он не мог не ответить на это письмо. ("He could not help replying to this letter.")
How many types and tokens?
Slide 7

How many words?

N = number of tokens
V = vocabulary = set of types
|V| is the size of the vocabulary

Church and Gale (1990): |V| > O(N½)

Slide 8

Simple Tokenization in UNIX

(Inspired by Ken Church’s UNIX for Poets.)
Given a text file, output the word tokens and their frequencies:
tr -sc 'A-Za-z' '\n' < shakes.txt
| sort
| uniq -c
1945 A
72 AARON
19 ABBESS
5 ABBOT
...

25 Aaron
6 Abate
1 Abates
5 Abbess
6 Abbey
3 Abbot
...

Change all non-alpha to newlines

Sort in alphabetical order

Merge and count each type
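The same count-the-types idea can be sketched in Python (a rough analogue, not a byte-for-byte equivalent of the tr/sort/uniq pipeline; the sample text is invented):

```python
import re
from collections import Counter

def word_counts(text):
    # Rough analogue of: tr -sc 'A-Za-z' '\n' < file | sort | uniq -c
    # (runs of letters become tokens; everything else separates them)
    tokens = re.findall(r"[A-Za-z]+", text)
    return Counter(tokens)

counts = word_counts("To be or not to be that is the question")
print(counts.most_common(1))   # [('be', 2)] -- case-sensitive, so 'To' != 'to'
```

Lowercasing the text first (`text.lower()`) would mirror the tr 'A-Z' 'a-z' case-merging step shown on a later slide.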

Which UNIX commands can be used for text processing?

Slide 9

The first step: tokenizing

tr -sc 'A-Za-z' '\n' < shakes.txt | head
THE
SONNETS
by
William
Shakespeare
From
fairest
creatures
We
...


What happened as a result of running the command?

Slide 10

The second step: sorting

tr -sc 'A-Za-z' '\n' < shakes.txt | sort | head
A
A
A
A
A
A
A
A
A
...

What was output as a result of running the command?

Slide 11

More counting

Merging upper and lower case:
tr 'A-Z' 'a-z' < shakes.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c
Sorting the counts:
tr 'A-Z' 'a-z' < shakes.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c | sort -n -r

23243 the
22225 i
18618 and
16339 to
15687 of
12780 a
12163 you
10839 my
10005 in
8954 d

What happened here?

Why does "d" appear as a separate word?

Slide 12

Issues in Tokenization

Finland’s capital → Finland Finlands Finland’s ?
what're, I'm, isn't → what are, I am, is not
Hewlett-Packard → Hewlett Packard ?
state-of-the-art → state of the art ?
Lowercase → lower-case lowercase lower case ?
San Francisco → one token or two?
m.p.h., PhD. → ??
Красно-желтый ("red-yellow") → Красно желтый? Красно-желтый?

What is the problem with tokenization?

Slide 13

Tokenization: language issues

French
L'ensemble → one token or two?
L ? L' ? Le ?
Want l'ensemble to match with un ensemble
German noun compounds are not segmented
Lebensversicherungsgesellschaftsangestellter
‘life insurance company employee’
German information retrieval needs compound splitter
Slide 14

What language-specific tokenization problems can arise?

Slide 15

Tokenization: language issues

Chinese and Japanese have no spaces between words:
莎拉波娃现在居住在美国东南部的佛罗里达。
莎拉波娃 现在 居住 在 美国 东南部 的 佛罗里达
Sharapova now lives in US southeastern Florida
Further complicated in Japanese, with multiple alphabets intermingled
Dates/amounts in multiple formats

フォーチュン500社は情報不足のため時間あた$500K(約6,000万円)

End-user can express query entirely in hiragana!

Slide 16

What features of Japanese complicate text processing even further?

Slide 17

Word Tokenization in Chinese

Also called Word Segmentation
Chinese words are composed of characters
Characters are generally 1 syllable and 1 morpheme.
Average word is 2.4 characters long.
Standard baseline segmentation algorithm:
Maximum Matching (also called Greedy)

Which algorithm is used for tokenization in Chinese?

Slide 18

Maximum Matching Word Segmentation Algorithm

Given a wordlist of Chinese, and a string:
1. Start a pointer at the beginning of the string
2. Find the longest word in the dictionary that matches the string starting at the pointer
3. Move the pointer over the word in the string
4. Go to 2

What is the essence of the Maximum Matching algorithm?

Slide 19

Max-match segmentation illustration

Thecatinthehat → the cat in the hat
Thetabledownthere → theta bled own there (not: the table down there)
Doesn't generally work in English!
But works astonishingly well in Chinese
莎拉波娃现在居住在美国东南部的佛罗里达。
莎拉波娃 现在 居住 在 美国 东南部 的 佛罗里达
Modern probabilistic segmentation algorithms even better
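The Maximum Matching procedure above can be sketched in Python (the toy dictionary below is invented just to reproduce the English illustration; real use would load a Chinese wordlist):

```python
def max_match(text, wordlist):
    """Greedy segmentation: at each position take the longest
    dictionary word that matches; fall back to a single character."""
    words, i = [], 0
    while i < len(text):
        match = None
        for j in range(len(text), i, -1):  # try longest candidates first
            if text[i:j] in wordlist:
                match = text[i:j]
                break
        if match is None:
            match = text[i]  # no dictionary word starts here: emit one char
        words.append(match)
        i += len(match)
    return words

WORDS = {"the", "theta", "table", "cat", "in", "hat",
         "bled", "down", "own", "there"}
print(max_match("thecatinthehat", WORDS))     # ['the', 'cat', 'in', 'the', 'hat']
print(max_match("thetabledownthere", WORDS))  # ['theta', 'bled', 'own', 'there']
```

The second example shows the failure mode on English: "theta" is longer than "the", so greedy matching never considers the intended "the table down there".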