Day 24 – Topic Modeling with Perl 6

Hi, everyone.

Today, let me introduce Algorithm::LDA.
This module is a Latent Dirichlet Allocation (i.e., LDA) implementation for topic modeling.

Introduction

What’s LDA? LDA is a popular unsupervised machine learning method.
It models the document generation process and represents each document as a mixture of topics.

So, what does “a mixture of topics” mean? Fig. 1 shows an article in which some of the words are highlighted in three colors: yellow, pink, and blue. Words about genetics are marked in yellow; words about evolutionary biology are marked in pink; words about data analysis are marked in blue. If you imagine that all of the words in this article are colored, then you can represent this article as a mixture of topics (i.e., colors).

Fig. 1: an article with words highlighted in three topic colors. (This image is from “Probabilistic topic models.” (David Blei 2012))

OK, then I’ll demonstrate how to use Algorithm::LDA in the next section.

Modeling Quotations

In this article, we explore Wikiquote. Wikiquote is a crowd-sourced platform providing sourced quotations.
Using the Wikiquote API, we fetch the quotations used for LDA estimation. After that, we run LDA and plot the result.
Finally, we create an information retrieval application using the resulting model.

Preliminary

Wikiquote API

Wikiquote has an action API that provides a means for getting Wikiquote resources.
For example, you can get the content of the Main Page as follows:

$ curl "https://en.wikiquote.org/w/api.php?action=query&prop=revisions&titles=Main%20Page&rvprop=content&format=json"

The result of the above command is:

{"batchcomplete":"","warnings":{"main":{"*":"Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes. Use [[Special:ApiFeatureUsage]] to see usage of deprecated features by your application."},"revisions":{"*":"Because \"rvslots\" was not specified, a legacy format has been used for the output. This format is deprecated, and in the future the new format will always be used."}},"query":{"pages":{"1":{"pageid":1,"ns":0,"title":"Main Page","revisions":[{"contentformat":"text/x-wiki","contentmodel":"wikitext","*":"
\n
{{Main page header}}
\n
{{Main Page Quote of the day}}
\n</div>\n\n
\n{{Main Page Selected pages}}\n{{Main categories}}\n
\n\n
\n{{New pages}}\n{{Main Page Community}}\n
\n\n
\n==Wikiquote's sister projects==\n{{otherwiki}}\n\n==Wikiquote languages==\n{{Wikiquotelang}}\n
\n__NOTOC__ __NOEDITSECTION__\n{{noexternallanglinks:ang|simple}}\n[[Category:Main page]]"}]}}}}

WWW

WWW by Zoffix Znet is a library that provides an easy-to-use API for fetching and parsing JSON.
For instance, as the README says, you can get content in jget(URL)<HASHKEY> style:

say jget('https://httpbin.org/get?foo=42&bar=x')<args><foo>;

To install WWW:

$ zef install WWW

Chart::Gnuplot

Chart::Gnuplot by titsuki is a binding for gnuplot.

To install Chart::Gnuplot:

$ zef install Chart::Gnuplot

In this article, we use this module; however, if you are unfamiliar with gnuplot there are many alternatives: SVG::Plot, Graphics::PLplot, or calling matplotlib functions via Inline::Python.

Stopwords from NLTK

NLTK is a toolkit for natural language processing.
Besides APIs, it also provides corpora.
You can get stopwords for English via “70. Stopwords Corpus” at: http://www.nltk.org/nltk_data/
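The list is plain text with one word per line, so it is easy to load into a Set for fast membership tests. A minimal sketch (assuming you saved the unpacked list as stopwords/english, the path used later in this article):

use v6.c;

# Load the NLTK English stopword list (one word per line) into a Set.
my $stopwords = "stopwords/english".IO.lines.Set;
say $stopwords{"the"};     # True
say $stopwords{"quantum"}; # False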

Exercise 1: Get Quotations and Create Cleaned Documents

To begin with, we have to get quotations from Wikiquote and create cleaned documents.

The main goal of this section is to create documents according to the following format:

<docid> <personid> <word> <word> <word> ...
<docid> <personid> <word> <word> <word> ...
<docid> <personid> <word> <word> <word> ...

The whole source code is:

use v6.c;
use WWW;
use URI::Escape;

sub get-members-from-category(Str $category --> List) {
    my $member-url = "https://en.wikiquote.org/w/api.php?action=query&list=categorymembers&cmtitle={$category}&cmlimit=100&format=json";
    @(jget($member-url)<query><categorymembers>.map(*<title>));
}

sub get-pages(Str @members, Int $batch = 50 --> List) {
    my Int $start = 0;
    my @pages;
    while $start < @members {
        my $list = @members[$start..^List($start + $batch, +@members).min].map({ uri_escape($_) }).join('%7C');
        my $url = "https://en.wikiquote.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&formatversion=2&titles={$list}";
        @pages.push($_) for jget($url)<query><pages>.map({ %(body => .<revisions>[0]<content>, title => .<title>) });
        $start += $batch;
    }
    @pages;
}

sub create-documents-from-pages(@pages, @members --> List) {
    my @documents;
    for @pages -> $page {
        my @quotations = $page<body>.split("\n")\
        .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
        .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
        .map(*.subst("[", "[", :g))\
        .map(*.subst("]", "]", :g))\
        .map(*.subst("&amp;", "&", :g))\
        .map(*.subst("&nbsp;", "", :g))\
        .map(*.subst(/:i [ \<\/?\s?br\> | \<br\s?\/?\> ]/, " ", :g))\
        .grep(/^\*<-[*]>/)\
        .map(*.subst(/^\*\s+/, ""));

        # Note: The order of the array the Wikiquote API returns is not guaranteed.
        my Int $index = @members.pairs.grep({ .value eq $page<title> }).map(*.key).head;
        @documents.push(%(body => $_, personid => $index)) for @quotations;
    }
    @documents.sort({ $^a<personid> <=> $^b<personid> }).pairs.map({ %(docid => .key, personid => .value<personid>, body => .value<body>) }).list
}

my Str @members = get-members-from-category("Category:1954_births");
my @pages = get-pages(@members);
my @documents = create-documents-from-pages(@pages, @members);

my $docfh = open "documents.txt", :w;
$docfh.say((.<docid>, .<personid>, .<body>).join(" ")) for @documents;
$docfh.close;

my $memfh = open "members.txt", :w;
$memfh.say($_) for @members;
$memfh.close;

First, we get the members listed on the “Category:1954_births” page. I chose the year in which the Perl 6 designer was born:

my Str @members = get-members-from-category("Category:1954_births");

where get-members-from-category gets members via Wikiquote API:

sub get-members-from-category(Str $category --> List) {
    my $member-url = "https://en.wikiquote.org/w/api.php?action=query&list=categorymembers&cmtitle={$category}&cmlimit=100&format=json";
    @(jget($member-url)<query><categorymembers>.map(*<title>));
}

Next, call get-pages:

my @pages = get-pages(@members);

get-pages is a subroutine that gets pages of the given titles (i.e., members):

sub get-pages(Str @members, Int $batch = 50 --> List) {
    my Int $start = 0;
    my @pages;
    while $start < @members {
        my $list = @members[$start..^List($start + $batch, +@members).min].map({ uri_escape($_) }).join('%7C');
        my $url = "https://en.wikiquote.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&formatversion=2&titles={$list}";
        @pages.push($_) for jget($url)<query><pages>.map({ %(body => .<revisions>[0]<content>, title => .<title>) });
        $start += $batch;
    }
    @pages;
}

where @members[$start..^List($start + $batch, +@members).min] is a slice of (at most) length $batch, whose elements are percent-encoded by uri_escape and joined by %7C (i.e., a percent-encoded pipe symbol).
In this case, one resulting $list is:

Mumia%20Abu-Jamal%7CRene%20Balcer%7CIain%20Banks%7CGerard%20Batten%7CChristie%20Brinkley%7CJames%20Cameron%20%28director%29%7CEugene%20Chadbourne%7CJackie%20Chan%7CChang%20Yu-hern%7CLee%20Child%7CHugo%20Ch%C3%A1vez%7CDon%20Coscarelli%7CElvis%20Costello%7CDaayiee%20Abdullah%7CThomas%20H.%20Davenport%7CGerardine%20DeSanctis%7CAl%20Di%20Meola%7CKevin%20Dockery%20%28author%29%7CJohn%20Doe%20%28musician%29%7CF.%20J.%20Duarte%7CIain%20Duncan%20Smith%7CHerm%20Edwards%7CAbdel%20Fattah%20el-Sisi%7CRob%20Enderle%7CRecep%20Tayyip%20Erdo%C4%9Fan%7CAlejandro%20Pe%C3%B1a%20Esclusa%7CHarvey%20Fierstein%7CCarly%20Fiorina%7CGary%20L.%20Francione%7CAshrita%20Furman%7CMary%20Gaitskill%7CGeorge%20Galloway%7C%C5%BDeljko%20Glasnovi%C4%87%7CGary%20Hamel%7CFran%C3%A7ois%20Hollande%7CKazuo%20Ishiguro%7CJean-Claude%20Juncker%7CAnish%20Kapoor%7CGuy%20Kawasaki%7CRobert%20Francis%20Kennedy%2C%20Jr.%7CLawrence%20M.%20Krauss%7CAnatoly%20Kudryavitsky%7CAnne%20Lamott%7CJoep%20Lange%7CAng%20Lee%7CLi%20Bin%7CRay%20Liotta%7CPeter%20Lipton%7CJames%20D.%20Macdonald%7CKen%20MacLeod

Note that the get-pages subroutine uses the hash contextualizer %() for creating a sequence of hashes:

@pages.push($_) for jget($url)<query><pages>.map({ %(body => .<revisions>[0]<content>, title => .<title>) });
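For instance, %( ) turns a list of pairs into a Hash, which is handy for building one record per element inside .map. A tiny illustration with made-up data:

my @rows = <Larry Audrey>.map({ %(name => $_, len => .chars) });
say @rows[0]<name>; # Larry
say @rows[1]<len>;  # 6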

After that, we call create-documents-from-pages:

my @documents = create-documents-from-pages(@pages, @members);

create-documents-from-pages creates documents from each page:

sub create-documents-from-pages(@pages, @members --> List) {
    my @documents;
    for @pages -> $page {
        my @quotations = $page<body>.split("\n")\
        .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
        .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g))\
        .map(*.subst("[", "[", :g))\
        .map(*.subst("]", "]", :g))\
        .map(*.subst("&amp;", "&", :g))\
        .map(*.subst("&nbsp;", "", :g))\
        .map(*.subst(/:i [ \<\/?\s?br\> | \<br\s?\/?\> ]/, " ", :g))\
        .grep(/^\*<-[*]>/)\
        .map(*.subst(/^\*\s+/, ""));

        # Note: The order of the array the Wikiquote API returns is not guaranteed.
        my Int $index = @members.pairs.grep({ .value eq $page<title> }).map(*.key).head;
        @documents.push(%(body => $_, personid => $index)) for @quotations;
    }
    @documents.sort({ $^a<personid> <=> $^b<personid> }).pairs.map({ %(docid => .key, personid => .value<personid>, body => .value<body>) }).list
}

where .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g)) and .map(*.subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g)) are converting commands: they keep the display text of an anchor and delete the internal-link markup. For example, [[Perl]] is reduced to Perl. For more syntax info, see: https://docs.perl6.org/language/regexes#Named_captures or https://docs.perl6.org/routine/subst
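A minimal demonstration of these two substitutions on a made-up line:

my $line = "He created [[Perl|Perl 5]] and [[Rakudo]].";
$line .= subst(/\[\[$<text>=(<-[\[\]|]>+?)\|$<link>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g);
$line .= subst(/\[\[$<text>=(<-[\[\]|]>+?)\]\]/, { $<text> }, :g);
say $line; # He created Perl and Rakudo.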

After some cleaning operations (e.g., .map(*.subst("&#91;", "[", :g))), we extract the quotation lines.
.grep(/^\*<-[*]>/) finds lines starting with a single asterisk, because most of the quotations appear in such lines.

Next, .map(*.subst(/^\*\s+/, "")) deletes each leading asterisk, since the asterisk itself isn’t part of the quotation.
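Putting these two steps together on some made-up wiki lines:

my @lines = "* A quotation.", "** A nested comment.", "* Another quotation.";
say @lines.grep(/^\*<-[*]>/).map(*.subst(/^\*\s+/, "")); # (A quotation. Another quotation.)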

Finally, we save the documents and members (i.e., titles):

my $docfh = open "documents.txt", :w;
$docfh.say((.<docid>, .<personid>, .<body>).join(" ")) for @documents;
$docfh.close;

my $memfh = open "members.txt", :w;
$memfh.say($_) for @members;
$memfh.close;

Exercise 2: Execute LDA and Visualize the Result

In the previous section, we saved the cleaned documents.
In this section, we use the documents for LDA estimation and visualize the result.
The goal of this section is to plot a document-topic distribution and write a topic-word table.

The whole source code is:

use v6.c;
use Algorithm::LDA;
use Algorithm::LDA::Formatter;
use Algorithm::LDA::LDAModel;
use Chart::Gnuplot;
use Chart::Gnuplot::Subset;

sub create-model(@documents --> Algorithm::LDA::LDAModel) {
    my $stopwords = "stopwords/english".IO.lines.Set;
    my &tokenizer = -> $line { $line.words.map(*.lc).grep(-> $w { ($stopwords !(cont) $w) and $w !~~ /^[ <:S> | <:P> ]+$/ }) };
    my ($documents, $vocabs) = Algorithm::LDA::Formatter.from-plain(@documents.map({ my ($, $, *@body) = .words; @body.join(" ") }), &tokenizer);
    my Algorithm::LDA $lda .= new(:$documents, :$vocabs);
    my Algorithm::LDA::LDAModel $model = $lda.fit(:num-topics(10), :num-iterations(500), :seed(2018));
    $model
}

sub plot-topic-distribution($model, @members, @documents, $search-regex = rx/Larry/) {
    my $target-personid = @members.pairs.grep({ .value ~~ $search-regex }).map(*.key).head;
    my $docid = @documents.map({ my ($docid, $personid, *@body) = .words; %(docid => $docid, personid => $personid, body => @body.join(" ")) })\
    .grep({ .<personid> == $target-personid and .<body> ~~ /:i << perl >>/}).map(*<docid>).head;

    note("@documents[$docid] is selected");
    my ($row-size, $col-size) = $model.document-topic-matrix.shape;
    my @doc-topic = gather for ($docid X ^$col-size) -> ($i, $j) { take $model.document-topic-matrix[$i;$j]; }
    my Chart::Gnuplot $gnu .= new(:terminal("png"), :filename("topics.png"));
    $gnu.command("set boxwidth 0.5 relative");
    my AnyTicsTic @tics = @doc-topic.pairs.map({ %(:label(.key), :pos(.key)) });
    $gnu.legend(:off);
    $gnu.xlabel(:label("Topic"));
    $gnu.ylabel(:label("P(z|theta,d)"));
    $gnu.xtics(:tics(@tics));
    $gnu.plot(:vertices(@doc-topic.pairs.map({ @(.key, .value.exp) })), :style("boxes"), :fill("solid"));
    $gnu.dispose;
}

sub write-nbest($model) {
  my $topics := $model.nbest-words-per-topic(10);
  for ^(10/5) -> $part-i {
    say "|" ~ (^5).map(-> $t { "topic { $part-i * 5 + $t }" }).join("|") ~ "|";
    say "|" ~ (^5).map({ "----" }).join("|") ~ "|";
    for ^10 -> $rank {
        say "|" ~ gather for ($part-i * 5)..^($part-i * 5 + 5) -> $topic {
            take @($topics)[$topic;$rank].key;
        }.join("|") ~ "|";
    }
    "".say;
  }
}

sub save-model($model) {
  my @document-topic-matrix := $model.document-topic-matrix;
  my ($document-size, $topic-size) = @document-topic-matrix.shape;
  my $doctopicfh = open "document-topic.txt", :w;

  $doctopicfh.say: ($document-size, $topic-size).join(" ");
  for ^$document-size -> $doc-i {
    $doctopicfh.say: gather for ^$topic-size -> $topic { take @document-topic-matrix[$doc-i;$topic] }.join(" ");
  }
  $doctopicfh.close;

  my @topic-word-matrix := $model.topic-word-matrix;
  my ($, $word-size) = @topic-word-matrix.shape;
  my $topicwordfh = open "topic-word.txt", :w;

  $topicwordfh.say: ($topic-size, $word-size).join(" ");
  for ^$topic-size -> $topic-i {
    $topicwordfh.say: gather for ^$word-size -> $word { take @topic-word-matrix[$topic-i;$word] }.join(" ");
  }
  $topicwordfh.close;

  my @vocabulary := $model.vocabulary;
  my $vocabfh = open "vocabulary.txt", :w;

  $vocabfh.say($_) for @vocabulary;
  $vocabfh.close;
}

my @documents = "documents.txt".IO.lines;
my $model = create-model(@documents);
my @members = "members.txt".IO.lines;
plot-topic-distribution($model, @members, @documents);
write-nbest($model);
save-model($model);

First, we load the cleaned documents and call create-model:

my @documents = "documents.txt".IO.lines;
my $model = create-model(@documents);

create-model creates an LDA model from the given documents:

sub create-model(@documents --> Algorithm::LDA::LDAModel) {
    my $stopwords = "stopwords/english".IO.lines.Set;
    my &tokenizer = -> $line { $line.words.map(*.lc).grep(-> $w { ($stopwords !(cont) $w) and $w !~~ /^[ <:S> | <:P> ]+$/ }) };
    my ($documents, $vocabs) = Algorithm::LDA::Formatter.from-plain(@documents.map({ my ($, $, *@body) = .words; @body.join(" ") }), &tokenizer);
    my Algorithm::LDA $lda .= new(:$documents, :$vocabs);
    my Algorithm::LDA::LDAModel $model = $lda.fit(:num-topics(10), :num-iterations(500), :seed(2018));
    $model
}

where $stopwords is a set of English stopwords from NLTK (mentioned in the Preliminary section), and &tokenizer is a custom tokenizer for Algorithm::LDA::Formatter.from-plain. The tokenizer converts a given sentence as follows:

    1. Splits the given sentence on whitespace and makes a list of tokens.
    2. Lowercases each token.
    3. Deletes any token that appears in the stopwords list, as well as any token consisting solely of Symbol or Punctuation characters.
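For example, with a toy stopword set (the two-word set below is an assumption for illustration; the real code reads the NLTK list from stopwords/english):

my $stopwords = set("the", "is");
my &tokenizer = -> $line { $line.words.map(*.lc).grep(-> $w { ($stopwords !(cont) $w) and $w !~~ /^[ <:S> | <:P> ]+$/ }) };
say tokenizer("Perl is the BEST !"); # (perl best)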

Algorithm::LDA::Formatter.from-plain creates numerical native documents (i.e., each word in a document is mapped to its corresponding vocabulary id, and this id is represented by a C int32) and a vocabulary from a list of texts.

After creating an Algorithm::LDA instance using the above numerical documents, we can start LDA estimation with Algorithm::LDA.fit. In this example, we set the number of topics to 10, the number of iterations to 500, and the seed for srand to 2018.

Next, we plot a document-topic distribution. Before this plotting, we load the saved members:

my @members = "members.txt".IO.lines;
plot-topic-distribution($model, @members, @documents);

plot-topic-distribution plots topic distribution with Chart::Gnuplot:

sub plot-topic-distribution($model, @members, @documents, $search-regex = rx/Larry/) {
    my $target-personid = @members.pairs.grep({ .value ~~ $search-regex }).map(*.key).head;
    my $docid = @documents.map({ my ($docid, $personid, *@body) = .words; %(docid => $docid, personid => $personid, body => @body.join(" ")) })\
    .grep({ .<personid> == $target-personid and .<body> ~~ /:i << perl >>/}).map(*<docid>).head;

    note("@documents[$docid] is selected");
    my ($row-size, $col-size) = $model.document-topic-matrix.shape;
    my @doc-topic = gather for ($docid X ^$col-size) -> ($i, $j) { take $model.document-topic-matrix[$i;$j]; }
    my Chart::Gnuplot $gnu .= new(:terminal("png"), :filename("topics.png"));
    $gnu.command("set boxwidth 0.5 relative");
    my AnyTicsTic @tics = @doc-topic.pairs.map({ %(:label(.key), :pos(.key)) });
    $gnu.legend(:off);
    $gnu.xlabel(:label("Topic"));
    $gnu.ylabel(:label("P(z|theta,d)"));
    $gnu.xtics(:tics(@tics));
    $gnu.plot(:vertices(@doc-topic.pairs.map({ @(.key, .value.exp) })), :style("boxes"), :fill("solid"));
    $gnu.dispose;
}

In this example, we plot the topic distribution of one of Larry Wall’s quotations:

"Although the Perl Slogan is There's More Than One Way to Do It, I hesitate to make 10 ways to do something."

After the plotting, we call write-nbest:

write-nbest($model);

In LDA, what each topic represents is expressed as a list of words. write-nbest writes a markdown-style topic-word distribution table:

sub write-nbest($model) {
  my $topics := $model.nbest-words-per-topic(10);
  for ^(10/5) -> $part-i {
    say "|" ~ (^5).map(-> $t { "topic { $part-i * 5 + $t }" }).join("|") ~ "|";
    say "|" ~ (^5).map({ "----" }).join("|") ~ "|";
    for ^10 -> $rank {
        say "|" ~ gather for ($part-i * 5)..^($part-i * 5 + 5) -> $topic {
            take @($topics)[$topic;$rank].key;
        }.join("|") ~ "|";
    }
    "".say;
  }
}

The result is:

|topic 0|topic 1|topic 2|topic 3|topic 4|
|----|----|----|----|----|
|would|scotland|black|could|one|
|it’s|country|mr.|first|work|
|believe|one|lot|law|new|
|one|political|play|college|human|
|took|world|official|basic|process|
|much|need|new|speak|business|
|don’t|must|reacher|language|becomes|
|ever|national|five|every|good|
|far|many|car|matter|world|
|fighting|us|road|right|knowledge|

|topic 5|topic 6|topic 7|topic 8|topic 9|
|----|----|----|----|----|
|apple|united|people|like|*/|
|likely|war|would|one|die|
|company|states|i’m|something|und|
|jobs|years|know|think|quantum|
|even|would|think|way|play|
|steve|american|want|things|noble|
|life|president|get|perl|home|
|like|human|going|long|dog|
|end|must|say|always|student|
|small|us|go|really|ist|

As you can see, the quotation “Although the Perl Slogan is There’s More Than One Way to Do It, I hesitate to make 10 ways to do something.” contains “one”, “way”, and “perl”. This is why this quotation is mainly composed of topic 8.

For the next section, we save the model with the save-model subroutine:

sub save-model($model) {
  my @document-topic-matrix := $model.document-topic-matrix;
  my ($document-size, $topic-size) = @document-topic-matrix.shape;
  my $doctopicfh = open "document-topic.txt", :w;

  $doctopicfh.say: ($document-size, $topic-size).join(" ");
  for ^$document-size -> $doc-i {
    $doctopicfh.say: gather for ^$topic-size -> $topic { take @document-topic-matrix[$doc-i;$topic] }.join(" ");
  }
  $doctopicfh.close;

  my @topic-word-matrix := $model.topic-word-matrix;
  my ($, $word-size) = @topic-word-matrix.shape;
  my $topicwordfh = open "topic-word.txt", :w;

  $topicwordfh.say: ($topic-size, $word-size).join(" ");
  for ^$topic-size -> $topic-i {
    $topicwordfh.say: gather for ^$word-size -> $word { take @topic-word-matrix[$topic-i;$word] }.join(" ");
  }
  $topicwordfh.close;

  my @vocabulary := $model.vocabulary;
  my $vocabfh = open "vocabulary.txt", :w;

  $vocabfh.say($_) for @vocabulary;
  $vocabfh.close;
}

Exercise 3: Create Quotation Search Engine

In this section, we create a quotation search engine which uses the model created in the previous section.
More specifically, we create an LDA-based document model (Xing Wei and W. Bruce Croft 2006) and make a CLI tool that can search quotations. (Note that the words “token” and “word” are interchangeable in this section.)

The whole source code is:

use v6.c;

sub MAIN(Str :$query!) {
    my \doc-topic-iter = "document-topic.txt".IO.lines.iterator;
    my \topic-word-iter = "topic-word.txt".IO.lines.iterator;
    my ($document-size, $topic-size) = doc-topic-iter.pull-one.words;
    my ($, $word-size) = topic-word-iter.pull-one.words;

    my Num @document-topic[$document-size;$topic-size];
    my Num @topic-word[$topic-size;$word-size];

    for ^$document-size -> $doc-i {
        my \maybe-line := doc-topic-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @document-topic[$doc-i;$_] = @line[$_];
        }
    }

    for ^$topic-size -> $topic-i {
        my \maybe-line := topic-word-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @topic-word[$topic-i;$_] = @line[$_];
        }
    }

    my %vocabulary = "vocabulary.txt".IO.lines.pairs>>.antipair.hash;
    my @members = "members.txt".IO.lines;
    my @documents = "documents.txt".IO.lines;
    my @docbodies = @documents.map({ my ($, $, *@body) = .words; @body.join(" ") });
    my %doc-to-person = @documents.map({ my ($docid, $personid, $) = .words; %($docid => $personid) }).hash;
    my @query = $query.words.map(*.lc);

    my @sorted-list = gather for ^$document-size -> $doc-i {
        my Num $log-prob = gather for @query -> $token {
            my Num $log-ml-prob = Pml(@docbodies, $doc-i, $token);
            my Num $log-lda-prob = Plda($token, $topic-size, $doc-i, %vocabulary, @document-topic, @topic-word);
            take log-sum(log(0.2) + $log-ml-prob, log(0.8) + $log-lda-prob);
        }.sum;
        take %(doc-i => $doc-i, log-prob => $log-prob);
    }.sort({ $^b<log-prob> <=> $^a<log-prob> });

    for ^10 {
        my $docid = @sorted-list[$_]<doc-i>;
        sprintf("\"%s\" by %s %f", @docbodies[$docid], @members[%doc-to-person{$docid}], @sorted-list[$_]<log-prob>).say;
    }

}

sub Pml(@docbodies, $doc-i, $token --> Num) {
    my Int $num-tokens = @docbodies[$doc-i].words.grep({ /:i^ $token $/ }).elems;
    my Int $total-tokens = @docbodies[$doc-i].words.elems;
    return -100e0 if $total-tokens == 0 or $num-tokens == 0;
    log($num-tokens) - log($total-tokens);
}

sub Plda($token, $topic-size, $doc-i, %vocabulary is raw, @document-topic is raw, @topic-word is raw --> Num) {
    gather for ^$topic-size -> $topic {
        if %vocabulary{$token}:exists {
            take @document-topic[$doc-i;$topic] + @topic-word[$topic;%vocabulary{$token}];
        } else {
            take -100e0;
        }
    }.reduce(&log-sum);
}

sub log-sum(Num $log-a, Num $log-b --> Num) {
    if $log-a < $log-b {
        return $log-b + log(1 + exp($log-a - $log-b))
    } else {
        return $log-a + log(1 + exp($log-b - $log-a))
    }
}

At the beginning, we load the saved model and prepare @document-topic, @topic-word, %vocabulary, @documents, @docbodies, %doc-to-person and @members:

    my \doc-topic-iter = "document-topic.txt".IO.lines.iterator;
    my \topic-word-iter = "topic-word.txt".IO.lines.iterator;
    my ($document-size, $topic-size) = doc-topic-iter.pull-one.words;
    my ($, $word-size) = topic-word-iter.pull-one.words;

    my Num @document-topic[$document-size;$topic-size];
    my Num @topic-word[$topic-size;$word-size];

    for ^$document-size -> $doc-i {
        my \maybe-line = doc-topic-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @document-topic[$doc-i;$_] = @line[$_];
        }
    }

    for ^$topic-size -> $topic-i {
        my \maybe-line = topic-word-iter.pull-one;
        die "Error: Something went wrong" if maybe-line =:= IterationEnd;
        my Num @line = @(maybe-line).words>>.Num;
        for ^@line {
            @topic-word[$topic-i;$_] = @line[$_];
        }
    }

    my %vocabulary = "vocabulary.txt".IO.lines.pairs>>.antipair.hash;
    my @members = "members.txt".IO.lines;
    my @documents = "documents.txt".IO.lines;
    my @docbodies = @documents.map({ my ($, $, *@body) = .words; @body.join(" ") });
    my %doc-to-person = @documents.map({ my ($docid, $personid, $) = .words; %($docid => $personid) }).hash;
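Note that pairs>>.antipair maps each word to its line number, i.e. its vocabulary id. A tiny illustration:

say <apple banana>.pairs>>.antipair.hash; # {apple => 0, banana => 1}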

Next, we set @query using option :$query:

my @query = $query.words.map(*.lc);

After that, we compute the probability P(query|document) based on Eq. 9 of the aforementioned paper (note that we use logarithms to avoid underflow, and that we set the parameter mu to zero) and sort the documents by it.
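Concretely, with mu = 0 the score reduces to a fixed linear mixture of a maximum-likelihood estimator and the LDA estimator (this is just a restatement of what the code below computes):

lnP(query|document) = sum over each token in the query of ln( 0.2 × Pml(token|document) + 0.8 × Plda(token|document) )

The mixture weights appear as log(0.2) and log(0.8) in the code, and each summand is evaluated in log space with log-sum.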

    my @sorted-list = gather for ^$document-size -> $doc-i {
        my Num $log-prob = gather for @query -> $token {
            my Num $log-ml-prob = Pml(@docbodies, $doc-i, $token);
            my Num $log-lda-prob = Plda($token, $topic-size, $doc-i, %vocabulary, @document-topic, @topic-word);
            take log-sum(log(0.2) + $log-ml-prob, log(0.8) + $log-lda-prob);
        }.sum;
        take %(doc-i => $doc-i, log-prob => $log-prob);
    }.sort({ $^b<log-prob> <=> $^a<log-prob> });

Plda adds, for each topic, the logarithmic topic-given-document probability (i.e., lnP(topic|theta,document)) and the logarithmic word-given-topic probability (i.e., lnP(word|phi,topic)), and sums over the topics with .reduce(&log-sum):

sub Plda($token, $topic-size, $doc-i, %vocabulary is raw, @document-topic is raw, @topic-word is raw --> Num) {
    gather for ^$topic-size -> $topic {
        if %vocabulary{$token}:exists {
            take @document-topic[$doc-i;$topic] + @topic-word[$topic;%vocabulary{$token}];
        } else {
            take -100e0;
        }
    }.reduce(&log-sum);
}

and Pml (ml stands for Maximum Likelihood) counts occurrences of $token and normalizes by the total number of tokens in the document (note that this computation is also conducted in log space):

sub Pml(@docbodies, $doc-i, $token --> Num) {
    my Int $num-tokens = @docbodies[$doc-i].words.grep({ /:i^ $token $/ }).elems;
    my Int $total-tokens = @docbodies[$doc-i].words.elems;
    return -100e0 if $total-tokens == 0 or $num-tokens == 0;
    log($num-tokens) - log($total-tokens);
}
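Both Plda and Pml return log probabilities, so mixing and marginalizing happens through log-sum, which computes log(exp($log-a) + exp($log-b)) while factoring out the larger argument so that exp never overflows. A quick sanity check of the identity (a self-contained copy of the subroutine above):

sub log-sum(Num $log-a, Num $log-b --> Num) {
    $log-a < $log-b
        ?? $log-b + log(1 + exp($log-a - $log-b))
        !! $log-a + log(1 + exp($log-b - $log-a))
}

say log-sum(log(0.2e0), log(0.8e0)); # ≈ 0, because log(0.2 + 0.8) == log(1)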

OK, then let’s execute!

query “perl”:

$ perl6 search-quotation.p6 --query="perl"
"Perl will always provide the null." by Larry Wall -3.301156
"Perl programming is an *empirical* science!" by Larry Wall -3.345189
"The whole intent of Perl 5's module system was to encourage the growth of Perl culture rather than the Perl core." by Larry Wall -3.490238
"I dunno, I dream in Perl sometimes..." by Larry Wall -3.491790
"At many levels, Perl is a 'diagonal' language." by Larry Wall -3.575779
"Almost nothing in Perl serves a single purpose." by Larry Wall -3.589218
"Perl has a long tradition of working around compilers." by Larry Wall -3.674111
"As for whether Perl 6 will replace Perl 5, yeah, probably, in about 40 years or so." by Larry Wall -3.684454
"Well, I think Perl should run faster than C." by Larry Wall -3.771155
"It's certainly easy to calculate the average attendance for Perl conferences." by Larry Wall -3.864075

query “apple”:

$ perl6 search-quotation.p6 --query="apple"
"Steve Jobs is the Ronald McDonald of Apple, he is the face." by Rob Enderle -3.822949
"With phones moving to technologies such as Apple Pay, an unwillingness to assure security could create a Target-like exposure that wipes Apple out of the market." by Rob Enderle -3.849055
"*:From Joint Apple / HP press release dated 1 January 2004 available [http://www.apple.com/pr/library/2004/jan/08hp.html here]." by Carly Fiorina -3.895163
"Samsung did to Apple what Apple did to Microsoft, skewering its devoted users and reputation, only better. ... There is a way for Apple to fight back, but the company no longer has that skill, and apparently doesn't know where to get it, either." by Rob Enderle -4.052616
"*** The previous line contains the naughty word '$&'.\n if /(ibm|apple|awk)/; # :-)" by Larry Wall -4.088445
"The cause of [Apple v. Qualcomm] appears to be an effort by Apple to pressure Qualcomm into providing a unique discount, largely because Apple has run into an innovation wall, is under increased competition from firms like Samsung, and has moved to a massive cost reduction strategy. (I've never known this to end well, as it causes suppliers to create unreliable components and outright fail.)" by Rob Enderle -4.169533
"[T]he litigation between Qualcomm and Apple/Intel ... is weird. What makes it weird is that Intel appears to think that by helping Apple drive down Qualcomm prices, it will gain an advantage, but since its only value is as a lower cost, lower performing, alternative to Qualcomm's modems, the result would be more aggressively priced better alternatives to Intel's offerings from Qualcomm/Broadcom, wiping Intel out of the market. On paper, this is a lose/lose for Intel and even for Apple. The lower prices would flow to Apple competitors as well, lowering the price of competing phones. So, Apple would not get a lasting benefit either." by Rob Enderle -4.197869
"Apple tends to aggressively work to not discover problems with products that are shipped and certainly not talk about them." by Rob Enderle -4.204618
"Today's tech companies aren't built to last, as Apple's recent earnings report shows all too well." by Rob Enderle -4.209901
"[W]hen it came to the iWatch, also a name that Apple didn't own, Apple walked away from it and instead launched the Apple Watch. Certainly, no risk of litigation, but the product's sales are a fraction of what they otherwise might have been with the proper name and branding." by Rob Enderle -4.238582

Conclusions

In this article, we explored Wikiquote and created an LDA model using Algorithm::LDA.
After that, we built an information retrieval application.

Thanks for reading my article! See you next time!

Citations

  • Blei, David M. “Probabilistic topic models.” Communications of the ACM 55.4 (2012): 77-84.
  • Wei, Xing, and W. Bruce Croft. “LDA-based document models for ad-hoc retrieval.” Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 2006.

License

All of the materials from Wikiquote are licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.

Day 13 – Mining Wikipedia with Perl 6

Introduction

Hello, everyone!

Today, let me introduce how to mine Wikipedia Infobox with Perl 6.

Wikipedia Infoboxes play a very important role in Natural Language Processing, and there are many applications that leverage them:

  • Building a Knowledge Base (e.g. DBpedia [0])
  • Ranking the importance of attributes [1]
  • Question Answering [2]

Among them, I’ll focus on the infobox extraction issues and demonstrate how to parse the sophisticated structures of the infoboxes with Grammar and Actions.

Are Grammar and Actions difficult to learn?

No, they aren’t!

You only need to know five things:

  • Grammar
    • token is the most basic one; you will normally use it.
    • rule makes whitespace significant.
    • regex makes the match engine backtrack.
  • Actions
    • make prepares an object to return when made is called on the corresponding match.
    • made is called on a match object and returns the prepared object.

For more info, see: https://docs.perl6.org/language/grammars
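Here is a minimal, self-contained sketch of make and made working together (the grammar and numbers are made up for illustration):

grammar Double::Grammar {
    token TOP { \d+ }
}

class Double::Actions {
    # make prepares the value that made will later return
    method TOP($/) { make $/.Int * 2 }
}

say Double::Grammar.parse("21", actions => Double::Actions).made; # 42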

What is Infobox?

Have you ever heard the word “Infobox”?

For those who haven’t heard it, I’ll explain it briefly.

An easy way to understand Infobox is by using a real example:

[Image: the infobox of the Perl 6 article on the Japanese Wikipedia]

As you can see, the infobox displays the attribute-value pairs of the page’s subject at the top-right side of the page. For example, in this one, it says the designer (ja: 設計者) of Perl 6 is Larry Wall (ja: ラリー・ウォール).

For more info, see: https://en.wikipedia.org/wiki/Help:Infobox

First Example: Perl 6

First of all, I’ll demonstrate the parsing techniques using Japanese Wikipedia, not English Wikipedia.

The main reason is that parsing Japanese Wikipedia is my $dayjob :)

The second reason is that I want to show how easily Perl 6 can handle Unicode strings.

Then, let’s start parsing the infobox in the Perl 6 article!

The code of the article written in wiki markup is:


{{Comp-stub}}
{{Infobox プログラミング言語
|名前 = Perl 6
|ロゴ = [[Image:Camelia.svg|250px]]
|パラダイム = [[マルチパラダイムプログラミング言語|マルチパラダイム]]
|登場時期 = [[2015年]]12月25日
|設計者 = [[ラリー・ウォール]]
|最新リリース = Rakudo Star 2016.04
|型付け = [[動的型付け]], [[静的型付け]]
|処理系 = [[Rakudo]]
|影響を受けた言語 = [[Perl|Perl 5]], [[Smalltalk]], [[Haskell]], [[Ruby]]
|ライセンス = [[Artistic License 2]]
|ウェブサイト = [https://perl6.org/ Perl6.org]
}}
{{プログラミング言語}}
'''Perl 6'''(パールシックス)は、[[ラリー・ウォール]]により設計された[[オブジェクト指向]][[スクリプト言語]]である。
Perl 6は、[[2000年]]に[[Perl]]の次期メジャーバージョンとして設計が始められ、[[2015年]]12月25日に公式のPerl 6正式安定版がリリースされた。しかし、言語仕様は現在のPerl (Perl 5)と互換性がなく、既存のPerl 5のソフトウェアをPerl 6用に「アップグレ
ード」するのは極めて困難である。したがって現在はPerl 5とPerl 6は別の言語であると考えられており、Perl 6はPerl 5の次期バージョンではないとされている。換言すれば、Perl 6はPerl 5から移行対象とはみなされていない。

There are three problematic portions of the code:

  1. There are superfluous elements after the infobox block, such as the template {{プログラミング言語}} and the lead sentence starting with '''Perl 6'''.
  2. We have to discriminate three types of tokens: anchor text (e.g. [[Rakudo]]), raw text (e.g. Rakudo Star 2016.04), weblink (e.g. [https://perl6.org/ Perl6.org]).
  3. The infobox doesn’t start at the top of the article. In this example, {{Comp-stub}} is at the top of the article.

OK, then I’ll show how to solve the above problems in the order of Grammar, Actions, and Caller (i.e. the portion of the code that calls Grammar and Actions).

Grammar

The code for Grammar is:


grammar Infobox::Grammar {
    token TOP { <infobox> .+ } # (#1)
    token infobox { '{{Infobox' <.ws> <name> \n <propertylist> '}}' }
    token name { <-[\n]>+ }
    token propertylist {
        [
            | <property> \n
            | \n
        ]+
    }
    token property {
        '|' <key=.key-content> '=' <value=.value-content-list>
    }
    token key-content { <-[=\n]>+ }
    token value-content-list {
        <value-content>+
    }
    token value-content { # (#6)
        [
            | <anchortext>
            | <weblink>
            | <rawtext>
            | <delimiter>
        ]+
    }
    token anchortext { '[[' <-[\n]>+? ']]' } # (#2)
    token weblink { '[' <-[\n]>+? ']' } # (#3)
    token rawtext { <-[\|\[\]\n、\,\<\>\}\{]>+ } # (#4)
    token delimiter { [ '、' | ',' ] } # (#5)
}

  • Solution to problem 1:
    • Use .+ to match the superfluous portions. (#1)
  • Solutions to problem 2:
    • Prepare three types of tokens: anchortext (#2), weblink (#3), and rawtext (#4).
      • The tokens may be separated by a delimiter (e.g. ,), so prepare the token delimiter. (#5)
    • Represent the token value-content as an arbitrary-length sequence of the four tokens (i.e. anchortext, weblink, rawtext, delimiter). (#6)
  • Solution to problem 3:
    • There is nothing particular to mention.

Actions

The code for Actions is:


class Infobox::Actions {
    method TOP($/) { make $<infobox>.made }
    method infobox($/) {
        make %( name => $<name>.made, propertylist => $<propertylist>.made )
    }
    method name($/) { make ~$/.trim }
    method propertylist($/) {
        make $<property>>>.made
    }
    method property($/) {
        make $<key>.made => $<value>.made
    }
    method key-content($/) { make $/.trim }
    method value-content-list($/) {
        make $<value-content>>>.made
    }
    method value-content($/) { # (#1)
        my $rawtext = $<rawtext>>>.made>>.trim.grep({ $_ ne "" });
        make %(
            anchortext => $<anchortext>>>.made,
            weblink => $<weblink>>>.made,
            rawtext => $rawtext.elems == 0 ?? $[] !! $rawtext.Array
        );
    }
    method anchortext($/) {
        make ~$/;
    }
    method weblink($/) {
        make ~$/;
    }
    method rawtext($/) {
        make ~$/;
    }
}

  • Solution to problem 2:
    • Make the made value of the token value-content consist of the three keys: anchortext, weblink, and rawtext.
  • Solutions to problems 1 and 3:
    • There is nothing particular to mention.

Caller

The code for Caller is:


my @lines = $*IN.lines;
while @lines {
    my $chunk = @lines.join("\n"); # (#1)
    my $result = Infobox::Grammar.parse($chunk, actions => Infobox::Actions).made; # (#2)
    if $result<name>:exists {
        $result<name>.say;
        for @($result<propertylist>) -> (:$key, :value($content-list)) { # (#3)
            $key.say;
            for @($content-list) -> $content {
                $content.say;
            }
        }
    }
    shift @lines;
}

  • Solution to problem 3:
    • Read the article line-by-line and make a chunk containing the lines from the current line to the last line. (#1)
    • If the parser determines that:
      • the chunk doesn’t contain the infobox, it returns an undefined value. A good way to receive a possibly undefined value is to use the $ sigil. (#2)
      • the chunk contains the infobox, it returns a defined value. Use the @() contextualizer and iterate over the result, as shown in the sketch after this list. (#3)
  • Solutions to problems 1 and 2:
    • There is nothing particular to mention.
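For instance, the sub-signature (:$key, :value($content-list)) in the loop above unpacks each Pair into its key and its value. A tiny illustration with a made-up pair:

for (designer => "Larry Wall",) -> (:$key, :value($v)) {
    say "$key: $v"; # designer: Larry Wall
}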

Running the Parser

Are you ready?
It’s time to run the 1st example!


$ perl6 parser.p6 < perl6.txt
プログラミング言語
名前
{anchortext => [], rawtext => [Perl 6], weblink => []}
ロゴ
{anchortext => [[[Image:Camelia.svg|250px]]], rawtext => [], weblink => []}
パラダイム
{anchortext => [[[マルチパラダイムプログラミング言語|マルチパラダイム]]], rawtext => [], weblink => []}
登場時期
{anchortext => [[[2015年]]], rawtext => [12月25日], weblink => []}
設計者
{anchortext => [[[ラリー・ウォール]]], rawtext => [], weblink => []}
最新リリース
{anchortext => [], rawtext => [Rakudo Star 2016.04], weblink => []}
型付け
{anchortext => [[[動的型付け]] [[静的型付け]]], rawtext => [], weblink => []}
処理系
{anchortext => [[[Rakudo]]], rawtext => [], weblink => []}
影響を受けた言語
{anchortext => [[[Perl|Perl 5]] [[Smalltalk]] [[Haskell]] [[Ruby]]], rawtext => [], weblink => []}
ライセンス
{anchortext => [[[Artistic License 2]]], rawtext => [], weblink => []}
ウェブサイト
{anchortext => [], rawtext => [], weblink => [[https://perl6.org/ Perl6.org]]}

The example we have just seen may be too easy for you. Let’s try a harder one!

Second Example: Albert Einstein

As the second example, let’s parse the infobox of Albert Einstein.

The code of the article written in wiki markup is:


{{Infobox Scientist
|name = アルベルト・アインシュタイン
|image = Einstein1921 by F Schmutzer 2.jpg
|caption = [[1921年]]、[[ウィーン]]での[[講義]]中
|birth_date = {{生年月日と年齢|1879|3|14|no}}
|birth_place = {{DEU1871}}<br>[[ヴュルテンベルク王国]][[ウルム]]
|death_date = {{死亡年月日と没年齢|1879|3|14|1955|4|18}}
|death_place = {{USA1912}}<br />[[ニュージャージー州]][[プリンストン (ニュージャージー州)|プリンストン]]
|residence = {{DEU}}<br />{{ITA}}<br>{{CHE}}<br />{{AUT}}(現在の[[チェコ]])<br />{{BEL}}<br />{{USA}}
|nationality = {{DEU1871}}、ヴュルテンベルク王国(1879-96)<br />[[無国籍]](1896-1901)<br />{{CHE}}(1901-55)<br />{{AUT1867}}(1911-12)<br />{{DEU1871}}、{{DEU1919}}(1914-33)<br />{{USA1912}}(1940-55)
| spouse = [[ミレヴァ・マリッチ]]&nbsp;(1903-1919)<br />{{nowrap|{{仮リンク|エルザ・アインシュタイン|en|Elsa Einstein|label=エルザ・レーベンタール}}&nbsp;(1919-1936)}}
| children = [[リーゼル・アインシュタイン|リーゼル]] (1902-1903?)<br />[[ハンス・アルベルト・アインシュタイン|ハンス・アルベルト]] (1904-1973)<br />[[エドゥアルト・アインシュタイン|エドゥアルト]] (1910-1965)
|field = [[物理学]]<br />[[哲学]]
|work_institution = {{Plainlist|
* [[スイス特許庁]] ([[ベルン]]) (1902-1909)
* {{仮リンク|ベルン大学|en|University of Bern}} (1908-1909)
* [[チューリッヒ大学]] (1909-1911)
* [[プラハ・カレル大学]] (1911-1912)
* [[チューリッヒ工科大学]] (1912-1914)
* [[プロイセン科学アカデミー]] (1914-1933)
* [[フンボルト大学ベルリン]] (1914-1917)
* {{仮リンク|カイザー・ヴィルヘルム協会|en|Kaiser Wilhelm Society|label=カイザー・ヴィルヘルム研究所}} (化学・物理学研究所長, 1917-1933)
* [[ドイツ物理学会]] (会長, 1916-1918)
* [[ライデン大学]] (客員, 1920-)
* [[プリンストン高等研究所]] (1933-1955)
* [[カリフォルニア工科大学]] (客員, 1931-33)
}}
|alma_mater = [[チューリッヒ工科大学]]<br />[[チューリッヒ大学]]
|doctoral_advisor = {{仮リンク|アルフレート・クライナー|en|Alfred Kleiner}}
|academic_advisors = {{仮リンク|ハインリヒ・フリードリヒ・ウェーバー|en|Heinrich Friedrich Weber}}
|doctoral_students =
|known_for = {{Plainlist|
*[[一般相対性理論]]
*[[特殊相対性理論]]
*[[光電効果]]
*[[ブラウン運動]]
*[[E=mc2|質量とエネルギーの等価性]](E=mc<sup>2</sup>)
*[[アインシュタイン方程式]]
*[[ボース分布関数]]
*[[宇宙定数]]
*[[ボース=アインシュタイン凝縮]]
*[[EPRパラドックス]]
*{{仮リンク|古典統一場論|en|Classical unified field theories}}
}}
| influenced = {{Plainlist|
* {{仮リンク|エルンスト・G・シュトラウス|en|Ernst G. Straus}}
* [[ネイサン・ローゼン]]
* [[レオ・シラード]]
}}
|prizes = {{Plainlist|
*{{仮リンク|バーナード・メダル|en|Barnard Medal for Meritorious Service to Science}}(1920)
*[[ノーベル物理学賞]](1921)
*[[マテウチ・メダル]](1921)
*[[コプリ・メダル]](1925)
*[[王立天文学会ゴールドメダル]](1926)
*[[マックス・プランク・メダル]](1929)
}}
|religion =
|signature = Albert Einstein signature 1934.svg
|footnotes =
}}
{{thumbnail:begin}}
{{thumbnail:ノーベル賞受賞者|1921年|ノーベル物理学賞|光電効果の法則の発見等}}
{{thumbnail:end}}
'''アルベルト・アインシュタイン'''<ref group="†">[[日本語]]における表記には、他に「アル{{Underline|バー}}ト・アインシュine|バー}}ト・アイン{{Underline|ス}}タイン」([[英語]]の発音由来)がある。</ref>({{lang-de-short|Albert Einstein}}<ref ɛrt ˈaɪnˌʃtaɪn}} '''ア'''ルベルト・'''ア'''インシュタイン、'''ア'''ルバート・'''ア'''インシュタイン</ref><ref group="†"taɪn}} '''ア'''ルバ(ー)ト・'''ア'''インスタイン、'''ア'''ルバ(ー)'''タ'''インスタイン</ref><ref>[http://dictionary.rein Einstein] (Dictionary.com)</ref><ref>[http://www.oxfordlearnersdictionaries.com/definition/english/albert-einstein?q=Albert+Einstein Albert Einstein] (Oxford Learner's Dictionaries)</ref>、[[1879年]][[3月14日]] – [[1955年]][[4月18日]])ツ]]生まれの[[理論物理学者]]である。

As you can see, there are five new problems here:

  1. Some of the templates
    1. contain newlines; and
    2. are nested (e.g. {{nowrap|{{仮リンク|...}}...}}).
  2. Some of the attribute-value pairs are empty.
  3. Some of the value sides of the attribute-value pairs
    1. contain break tags; and
    2. consist of different types of tokens (e.g. anchortext and rawtext), so you need to add positional information to represent the dependency between tokens.

I’ll show how to solve the above problems in the order of Grammar, then Actions.

The code of the Caller is the same as the previous one.

Grammar

The code for Grammar is:


grammar Infobox::Grammar {
    token TOP { <infobox> .+ }
    token infobox { '{{Infobox' <.ws> <name> \n <propertylist> '}}' }
    token name { <-[\n]>+ }
    token propertylist {
        [
            | <property> \n
            | \n
        ]+
    }
    token property {
        [
            | '|' <key=.key-content> '=' <value=.value-content-list>
            | '|' <key=.key-content> '=' # (#4)
        ]
    }
    token key-content { <-[=\n]>+ }
    token value-content-list {
        [
            | <value-content> <br> # (#6)
            | <value-content>
            | <br>
        ]+
    }
    token value-content-list-nl { # (#1)
        [
            | <value-content> <br> # (#7)
            | <value-content>
            | <br>
        ]+ % \n
    }
    token value-content {
        [
            | <anchortext>
            | <weblink>
            | <rawtext>
            | <template>
            | <delimiter>
            | <sup>
        ]+
    }
    token br { # (#5)
        [
            | '<br />'
            | '<br/>'
            | '<br>'
        ]
    }
    token template {
        [
            | '{{' <-[\n]>+? '}}'
            | '{{nowrap' '|' <value-content-list> '}}' # (#3)
            | '{{Plainlist' '|' \n <value-content-list-nl> \n '}}' # (#2)
        ]
    }
    token anchortext { '[[' <-[\n]>+? ']]' }
    token weblink { '[' <-[\n]>+? ']' }
    token rawtext { <-[\|\[\]\n、\,\<\>\}\{]>+ }
    token delimiter { [ '、' | ',' | '&nbsp;' ] }
    token sup { '<sup>' <-[\n]>+? '</sup>' }
}

  • Solutions to problem 1.1:
    • Create the token value-content-list-nl, a newline-separated version of the token value-content-list. The modified quantifier % is useful for representing this kind of sequence. (#1)
    • Create the token template. In it, define a sequence that represents the Plainlist template. (#2)
  • Solution to problem 1.2:
    • Let the token template call the token value-content-list. This introduces a recursive call and captures nested structures, because the token value-content-list itself contains the token template. (#3)
  • Solution to problem 2:
    • In the token property, define an alternative whose value side is empty (i.e. a sequence that ends with ‘=’). (#4)
  • Solutions to problem 3.1:
    • Create the token br. (#5)
    • Let the token br follow the token value-content in the two tokens:
      • the token value-content-list (#6)
      • the token value-content-list-nl (#7)

Actions

The code for Actions is:


class Infobox::Actions {
    method TOP($/) { make $<infobox>.made }
    method infobox($/) {
        make %( name => $<name>.made, propertylist => $<propertylist>.made )
    }
    method name($/) { make $/.trim }
    method propertylist($/) {
        make $<property>>>.made
    }
    method property($/) {
        make $<key>.made => $<value>.made
    }
    method key-content($/) { make $/.trim }
    method value-content-list($/) {
        make $<value-content>>>.made
    }
    method value-content($/) {
        my $rawtext = $<rawtext>>>.made>>.trim.grep({ $_ ne "" });
        make %(
            anchortext => $<anchortext>>>.made,
            weblink => $<weblink>>>.made,
            rawtext => $rawtext.elems == 0 ?? $[] !! $rawtext.Array,
            template => $<template>>>.made
        );
    }
    method template($/) {
        make %(body => ~$/, from => $/.from, to => $/.to); # (#1)
    }
    method anchortext($/) {
        make %(body => ~$/, from => $/.from, to => $/.to); # (#2)
    }
    method weblink($/) {
        make %(body => ~$/, from => $/.from, to => $/.to); # (#3)
    }
    method rawtext($/) {
        make %(body => ~$/, from => $/.from, to => $/.to); # (#4)
    }
}

  • Solution to problem 3.2:
    • Use Match.from and Match.to to get the starting and ending positions of the match, respectively, when calling make. (#1 ~ #4)
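A quick demonstration of Match.from and Match.to:

my $m = "Camelia the butterfly" ~~ /butterfly/;
say $m.from; # 12
say $m.to;   # 21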

Running the Parser

It’s time to run!


$ perl6 parser.p6 < einstein.txt
Scientist
name
{anchortext => [], rawtext => [{body => アルベルト・アインシュタイン, from => 27, to => 42}], template => [], weblink => []}
image
{anchortext => [], rawtext => [{body => Einstein1921 by F Schmutzer 2.jpg, from => 51, to => 85}], template => [], weblink => []}
caption
{anchortext => [{body => [[1921年]], from => 97, to => 106} {body => [[ウィーン]], from => 107, to => 115} {body => [[講義]], from => 117, to => 123}], rawtext => [{body => , from => 96, to => 97} {body => での, from => 115, to => 117} {body => 中, from => 123, to => 124}], template => [], weblink => []}
birth_date
{anchortext => [], rawtext => [{body => , from => 138, to => 139}], template => [{body => {{生年月日と年齢|1879|3|14|no}}, from => 139, to => 163}], weblink => []}
birth_place
{anchortext => [], rawtext => [{body => , from => 178, to => 179}], template => [{body => {{DEU1871}}, from => 179, to => 190}], weblink => []}
{anchortext => [{body => [[ヴュルテンベルク王国]], from => 194, to => 208} {body => [[ウルム]], from => 208, to => 215}], rawtext => [], template => [], weblink => []}
death_date
{anchortext => [], rawtext => [{body => , from => 229, to => 230}], template => [{body => {{死亡年月日と没年齢|1879|3|14|1955|4|18}}, from => 230, to => 263}], weblink => []}
death_place
{anchortext => [], rawtext => [{body => , from => 278, to => 279}], template => [{body => {{USA1912}}, from => 279, to => 290}], weblink => []}
{anchortext => [{body => [[ニュージャージー州]], from => 296, to => 309} {body => [[プリンストン (ニュージャージー州)|プリンストン]], from => 309, to => 338}], rawtext => [], template => [], weblink => []}
residence
{anchortext => [], rawtext => [{body => , from => 351, to => 352}], template => [{body => {{DEU}}, from => 352, to => 359}], weblink => []}
{anchortext => [], rawtext => [], template => [{body => {{ITA}}, from => 365, to => 372}], weblink => []}
{anchortext => [], rawtext => [], template => [{body => {{CHE}}, from => 376, to => 383}], weblink => []}
{anchortext => [{body => [[チェコ]], from => 400, to => 407}], rawtext => [{body => (現在の, from => 396, to => 400} {body => ), from => 407, to => 408}], template => [{body => {{AUT}}, from => 389, to => 396}], weblink => []}
{anchortext => [], rawtext => [], template => [{body => {{BEL}}, from => 414, to => 421}], weblink => []}
{anchortext => [], rawtext => [], template => [{body => {{USA}}, from => 427, to => 434}], weblink => []}
nationality
{anchortext => [], rawtext => [{body => , from => 449, to => 450} {body => ヴュルテンベルク王国(1879-96), from => 462, to => 481}], template => [{body => {{DEU1871}}, from => 450, to => 461}], weblink => []}
{anchortext => [{body => [[無国籍]], from => 487, to => 494}], rawtext => [{body => (1896-1901), from => 494, to => 505}], template => [], weblink => []}
{anchortext => [], rawtext => [{body => (1901-55), from => 518, to => 527}], template => [{body => {{CHE}}, from => 511, to => 518}], weblink => []}
{anchortext => [], rawtext => [{body => (1911-12), from => 544, to => 553}], template => [{body => {{AUT1867}}, from => 533, to => 544}], weblink => []}
{anchortext => [], rawtext => [{body => (1914-33), from => 582, to => 591}], template => [{body => {{DEU1871}}, from => 559, to => 570} {body => {{DEU1919}}, from => 571, to => 582}], weblink => []}
{anchortext => [], rawtext => [{body => (1940-55), from => 608, to => 617}], template => [{body => {{USA1912}}, from => 597, to => 608}], weblink => []}
spouse
{anchortext => [{body => [[ミレヴァ・マリッチ]], from => 634, to => 647}], rawtext => [{body => , from => 633, to => 634} {body => (1903-1919), from => 653, to => 664}], template => [], weblink => []}
{anchortext => [], rawtext => [], template => [{body => {{nowrap|{{仮リンク|エルザ・アインシュタイン|en|Elsa Einstein|label=エルザ・レーベンタール}}&nbsp;(1919-1936)}}, from => 670, to => 754}], weblink => []}
children
{anchortext => [{body => [[リーゼル・アインシュタイン|リーゼル]], from => 771, to => 793}], rawtext => [{body => , from => 770, to => 771} {body => (1902-1903?), from => 793, to => 806}], template => [], weblink => []}
{anchortext => [{body => [[ハンス・アルベルト・アインシュタイン|ハンス・アルベルト]], from => 812, to => 844}], rawtext => [{body => (1904-1973), from => 844, to => 856}], template => [], weblink => []}
{anchortext => [{body => [[エドゥアルト・アインシュタイン|エドゥアルト]], from => 862, to => 888}], rawtext => [{body => (1910-1965), from => 888, to => 900}], template => [], weblink => []}
field
{anchortext => [{body => [[物理学]], from => 910, to => 917}], rawtext => [{body => , from => 909, to => 910}], template => [], weblink => []}
{anchortext => [{body => [[哲学]], from => 923, to => 929}], rawtext => [], template => [], weblink => []}
work_institution
{anchortext => [], rawtext => [{body => , from => 949, to => 950}], template => [{body => {{Plainlist|
* [[スイス特許庁]] ([[ベルン]]) (1902-1909)
* {{仮リンク|ベルン大学|en|University of Bern}} (1908-1909)
* [[チューリッヒ大学]] (1909-1911)
* [[プラハ・カレル大学]] (1911-1912)
* [[チューリッヒ工科大学]] (1912-1914)
* [[プロイセン科学アカデミー]] (1914-1933)
* [[フンボルト大学ベルリン]] (1914-1917)
* {{仮リンク|カイザー・ヴィルヘルム協会|en|Kaiser Wilhelm Society|label=カイザー・ヴィルヘルム研究所}} (化学・物理学研究所長, 1917-1933)
* [[ドイツ物理学会]] (会長, 1916-1918)
* [[ライデン大学]] (客員, 1920-)
* [[プリンストン高等研究所]] (1933-1955)
* [[カリフォルニア工科大学]] (客員, 1931-33)
}}, from => 950, to => 1409}], weblink => []}
alma_mater
{anchortext => [{body => [[チューリッヒ工科大学]], from => 1424, to => 1438}], rawtext => [{body => , from => 1423, to => 1424}], template => [], weblink => []}
{anchortext => [{body => [[チューリッヒ大学]], from => 1444, to => 1456}], rawtext => [], template => [], weblink => []}
doctoral_advisor
{anchortext => [], rawtext => [{body => , from => 1476, to => 1477}], template => [{body => {{仮リンク|アルフレート・クライナー|en|Alfred Kleiner}}, from => 1477, to => 1516}], weblink => []}
academic_advisors
{anchortext => [], rawtext => [{body => , from => 1537, to => 1538}], template => [{body => {{仮リンク|ハインリヒ・フリードリヒ・ウェーバー|en|Heinrich Friedrich Weber}}, from => 1538, to => 1593}], weblink => []}
doctoral_students
Nil
known_for
{anchortext => [], rawtext => [{body => , from => 1627, to => 1628}], template => [{body => {{Plainlist|
*[[一般相対性理論]]
*[[特殊相対性理論]]
*[[光電効果]]
*[[ブラウン運動]]
*[[E=mc2|質量とエネルギーの等価性]](E=mc<sup>2</sup>)
*[[アインシュタイン方程式]]
*[[ボース分布関数]]
*[[宇宙定数]]
*[[ボース=アインシュタイン凝縮]]
*[[EPRパラドックス]]
*{{仮リンク|古典統一場論|en|Classical unified field theories}}
}}, from => 1628, to => 1861}], weblink => []}
influenced
{anchortext => [], rawtext => [{body => , from => 1877, to => 1878}], template => [{body => {{Plainlist|
* {{仮リンク|エルンスト・G・シュトラウス|en|Ernst G. Straus}}
* [[ネイサン・ローゼン]]
* [[レオ・シラード]]
}}, from => 1878, to => 1968}], weblink => []}
prizes
{anchortext => [], rawtext => [{body => , from => 1978, to => 1979}], template => [{body => {{Plainlist|
*{{仮リンク|バーナード・メダル|en|Barnard Medal for Meritorious Service to Science}}(1920)
*[[ノーベル物理学賞]](1921)
*[[マテウチ・メダル]](1921)
*[[コプリ・メダル]](1925)
*[[王立天文学会ゴールドメダル]](1926)
*[[マックス・プランク・メダル]](1929)
}}, from => 1979, to => 2181}], weblink => []}
religion
Nil
signature
{anchortext => [], rawtext => [{body => Albert Einstein signature 1934.svg, from => 2206, to => 2241}], template => [], weblink => []}
footnotes
Nil

Conclusion

I demonstrated parsing techniques for infoboxes. I highly recommend creating your own parser if you have a chance to use Wikipedia as a resource for NLP. It will deepen your knowledge of not only Perl 6 but also Wikipedia.

See you again!

Citations

[0] Lehmann, Jens, et al. “DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia.” Semantic Web 6.2 (2015): 167-195.

[1] Ali, Esraa, Annalina Caputo, and Séamus Lawless. “Entity Attribute Ranking Using Learning to Rank.”

[2] Morales, Alvaro, et al. “Learning to answer questions from wikipedia infoboxes.” Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2016.

License

All of the materials from Wikipedia are licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.


Itsuki Toyota
A web developer in Japan.

Day 10 — Let’s learn and try double-ended priority queue with Perl 6

Hello! My name is Itsuki Toyota. I’m a web developer in Japan.

In this post, let me introduce Algorithm::MinMaxHeap.

Introduction

Algorithm::MinMaxHeap is a Perl 6 implementation of a double-ended priority queue (i.e. min-max heap).[1]

Fig.1 shows an example of this data structure:

Fig. 1: an example of a min-max heap.

This data structure has an interesting property: it maintains both a max-heap and a min-heap in a single tree.

More precisely, the min-max ordering is maintained in a single tree as described below:

  • values stored at nodes on even levels (i.e. min levels) are smaller than or equal to the values stored at their descendants (if any)
  • values stored at nodes on odd levels (i.e. max levels) are greater than or equal to the values stored at their descendants (if any)

Algorithm::MinMaxHeap Internals

I’ll briefly explain what this double-ended priority queue does internally when some frequently used methods (e.g. insert, pop-max, pop-min) are called.[2]
These methods maintain the min-max ordering by the following operations:

insert($item)

1. Pushes the given item at the first available leaf position.
2. Checks whether it maintains the min-max ordering by traversing the nodes from the leaf to the root. If it finds a violating node, it swaps this node with a proper node and continues the checking operation.

pop-max

1. Extracts the maximum value node.
2. Fills the vacant position with the last node of the queue.
3. Checks whether it maintains the min-max ordering by traversing the nodes from the filled position to the leaf. If it finds a violating node, it swaps this node with a proper node and continues the checking operation.

pop-min

1. Extracts the minimum value node.
2. Fills the vacant position with the last node of the queue.
3. Checks whether it maintains the min-max ordering by traversing the nodes from the filled position to the leaf. If it finds a violating node, it swaps this node with a proper node and continues the checking operation.
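For example, popping from both ends of the same queue looks like this (a small sketch reusing only the methods shown in this post):

use Algorithm::MinMaxHeap;

my $queue = Algorithm::MinMaxHeap[Int].new;
$queue.insert($_) for 3, 1, 4, 1, 5, 9, 2, 6;
say $queue.pop-min; # 1
say $queue.pop-max; # 9
say $queue.pop-min; # 1
say $queue.pop-max; # 6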

Let’s try!

Then let me illustrate how to use Algorithm::MinMaxHeap.

Example 1

The first example is the simplest one.

Source:

use Algorithm::MinMaxHeap;

my $queue = Algorithm::MinMaxHeap[Int].new; # (#1)
my Int @items = (^10).pick(*);
@items.say; # [1 2 6 4 5 9 0 3 7 8]
$queue.insert($_) for @items; # (#2)
$queue.pop-max.say; # 9 (#3)

In this example, Algorithm::MinMaxHeap[Int].new creates a queue object, where all of its nodes are type Int. (#1)

$queue.insert($_) for @items inserts the numbers in the list. (#2)

$queue.pop-max pops a node which stores the maximum value. (#3)

Example 2

The second example defines a user-defined type constraint using subset:

Source (chimera.p6):

use Algorithm::MinMaxHeap;

my subset ChimeraRat of Cool where Num|Rat; # (#1)
my $queue = Algorithm::MinMaxHeap[ChimeraRat].new;

$queue.insert(10e0); # ok
$queue.insert(1/10); # ok
$queue.insert(10); # die # (#2)

Output:

$ perl6 chimera.p6
Type check failed in assignment to @!nodes; expected ChimeraRat but got Int (10)
in method insert at /home/itoyota/.rakudobrew/moar-nom/install/share/perl6/site/sources/240192C19BBAACD01AB9686EE53F67BC530F8545 (Algorithm::MinMaxHeap) line 12
in block  at chimera.p6 line 8

In this example, the subset ChimeraRat is a Cool that only allows Num or Rat. (#1)

Therefore, when you insert 10 into the queue, it prints the error message and dies, because 10 is an Int object. (#2)

Example 3

The third example inserts instances of a user-defined class into the queue.

Source (class.p6):

use Algorithm::MinMaxHeap;

my class State {
 also does Algorithm::MinMaxHeap::Comparable[State]; # (#1) 
 has DateTime $.time;
 has $.payload;
 submethod BUILD(:$!time, :$!payload) { }
 method compare-to(State $s) { # (#2) 
   if $!time == $s.time {
     return Order::Same;
   }
   if $!time > $s.time {
     return Order::More;
   }
   if $!time < $s.time {
     return Order::Less;
   }
 }
}

my @items;

@items.push(State.new(time => DateTime.new(:year(1900), :month(6)),
 payload => "Rola"));
@items.push(State.new(time => DateTime.new(:year(-300), :month(3)),
 payload => "Taro"));
@items.push(State.new(time => DateTime.new(:year(1963), :month(12)),
 payload => "Hanako"));
@items.push(State.new(time => DateTime.new(:year(2020), :month(6)),
 payload => "Jack"));

my Algorithm::MinMaxHeap[Algorithm::MinMaxHeap::Comparable] $queue .= new;
$queue.insert($_) for @items;
$queue.pop-max.say until $queue.is-empty;

Output:

 $ perl6 class.p6
 State.new(time => DateTime.new(2020,6,1,0,0,0), payload => "Jack")
 State.new(time => DateTime.new(1963,12,1,0,0,0), payload => "Hanako")
 State.new(time => DateTime.new(1900,6,1,0,0,0), payload => "Rola")
 State.new(time => DateTime.new(-300,3,1,0,0,0), payload => "Taro")

In this example, the class State does the role Algorithm::MinMaxHeap::Comparable[State], which defines the stub method compare-to as: (#1)

method compare-to(State) returns Order:D { ... };

Therefore, the State class must define a compare-to method that accepts a State object and returns an Order:D object. (#2)

Footnotes

  • [1] Atkinson, Michael D., et al. “Min-max heaps and generalized priority queues.” Communications of the ACM 29.10 (1986): 996-1000.
  • [2] For more exact information, please read the above paper.

Thank you for reading my post!