## Day 21 – Collatz Variations

December 21, 2012

The Collatz sequence is one of those interesting “simple” math problems that I’ve run into a number of times. Most recently a blog post on programming it in Racket showed up on Hacker News. As happens so often, I instantly wanted to implement it in Perl 6.

```
sub collatz-sequence(Int $start) {
    $start, { when * %% 2 { $_ / 2 }; when * !%% 2 { 3 * $_ + 1 }; } ... 1;
}

sub MAIN(Int $min, Int $max) {
    say [max] ($min..$max).map({ +collatz-sequence($_) });
}
```

This is a very straightforward implementation of the Racket post’s `max-cycle-length-range` as a stand-alone p6 script. `collatz-sequence` generates the sequence using the p6 sequence operator. Start with the given number. If it is divisible by two, divide it by two: `when * %% 2 { $_ / 2 }`. If it is not, multiply it by three and add one: `when * !%% 2 { 3 * $_ + 1 }`. Repeat until the sequence reaches 1.

`MAIN(Int $min, Int $max)` sets up our main function to take two integers. Many times I don’t bother with argument types in p6, but here they provide nice feedback for users:

```
> perl6 collatz.pl blue red
Usage:
  collatz.pl <min> <max>
```

The core of it just maps the numbers from `$min` to `$max` (inclusive) to the length of the sequence (`+collatz-sequence`) and then says the max of the resulting list (`[max]`).

Personally I’m a big fan of using the sequence operator for tasks like this; it directly represents the algorithm constructing the Collatz sequence in a simple and elegant fashion. On the other hand, you should be able to memoize the recursive version for a speed increase. Maybe that would give it an edge over the sequence operator version?

Well, I was wildly wrong about that.

```
sub collatz-length($start) {
    given $start {
        when 1       { 1 }
        when * !%% 2 { 1 + collatz-length(3 * $_ + 1) }
        when * %% 2  { 1 + collatz-length($_ / 2) }
    }
}

sub MAIN($min, $max) {
    say [max] ($min..$max).map({ collatz-length($_) });
}
```

This recursive version, which makes no attempt whatsoever to be efficient, is actually more than twice as fast as the sequence operator version. In retrospect, this makes perfect sense: I was worried about the recursive version making a function call for every iteration, but the sequence version has to make two per iteration: one to calculate the next element and one to check whether the ending condition has been reached.

Well, once I’d gotten this far, I thought I’d better do things correctly. I wrote two framing scripts, one for timing all the available scripts, the other for testing them to make sure they work!

```
my @numbers = 1..200, 10000..10200;

sub MAIN(Str $perl6, *@scripts) {
    my %results;
    for @scripts -> $script {
        my $start = now;
        qqx/$perl6 $script { @numbers }/;
        my $end = now;

        %results{$script} = $end - $start;
    }

    for %results.pairs.sort(*.value) -> (:key($script), :value($time)) {
        say "$script: $time seconds";
    }
}
```

This script takes as an argument a string that can be used to call a Perl 6 executable and a list of scripts to run. It runs the scripts using the specified executable, and times them using p6’s `now` function. It then sorts the results into order and prints them. (A similar script I won’t post here tests each of them to make sure they are returning correct results.)

In the new framework, the Collatz script has changed a bit. Instead of taking a min and a max value and finding the longest Collatz sequence generated by a number in that range, it takes a series of numbers and generates and reports the length of the sequence for each of them. Here’s the sequence operator script in its full new version:

```
sub collatz-length(Int $start) {
    +($start, { when * %% 2 { $_ / 2 }; when * !%% 2 { 3 * $_ + 1 }; } ... 1);
}

sub MAIN(*@numbers) {
    for @numbers -> $n {
        say "$n: " ~ collatz-length($n.Int);
    }
}
```

For the rest of the scripts I will skip the `MAIN` sub, which is exactly the same in each of them.

Framework established, I redid the recursive version starting from the new sequence operator code.

```
sub collatz-length(Int $n) {
    given $n {
        when 1       { 1 }
        when * %% 2  { 1 + collatz-length($_ div 2) }
        when * !%% 2 { 1 + collatz-length(3 * $_ + 1) }
    }
}
```

The sharp-eyed will notice this version differs from the first recursive version above in two significant ways. This time I made the argument `Int $n`, which instantly turned up a bit of a bug in all implementations thus far: because I used `$_ / 2`, most of the numbers in the sequence were actually rationals, not integers! This shouldn’t change the results, but it is probably less efficient than using `Int`s. Hence the second difference: it now uses `$_ div 2` to divide by 2. This version remains a great improvement over the sequence operator version, running in 4.7 seconds instead of 13.3. Changing `when * !%% 2` to a simple `default` shaves another .3 seconds off the running time.
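The same rational-versus-integer trap exists in other languages. As a rough Python analogy (an aside for illustration, not from the original post), `/` always produces a float while `//` stays in the integers, much like Perl 6’s `/` (producing a `Rat`) versus `div` (producing an `Int`):

```python
# "/" produces a float even for evenly divisible integers;
# "//" performs integer division and keeps the type narrow.
half = 6 / 2
assert half == 3.0 and type(half) is float
assert 6 // 2 == 3 and type(6 // 2) is int
```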

Once I started wondering how much time was getting eaten up by the `when` statements, rewriting that bit using the ternary operator was an obvious choice.

```
sub collatz-length(Int $start) {
    +($start, { $_ %% 2 ?? $_ div 2 !! 3 * $_ + 1 } ... 1);
}
```

Timing results: Basic sequence 13.4 seconds. Sequence with `div` 11.5 seconds. Sequence with `div` and ternary 9.7 seconds.

That made me wonder what kind of performance I could get from a handcoded loop.

```
sub collatz-length(Int $n is copy) {
    my $length = 1;
    while $n != 1 {
        $n = $n %% 2 ?? $n div 2 !! 3 * $n + 1;
        $length++;
    }
    $length;
}
```

That’s by far the least elegant of these, I think, but it gets great performance: 3 seconds.

Switching back to the recursive approach, how about using the ternary operator there?

```
sub collatz-length(Int $n) {
    return 1 if $n == 1;
    1 + ($n %% 2 ?? collatz-length($n div 2) !! collatz-length(3 * $n + 1));
}
```

This one just edges out the handcoded loop, 2.9 seconds.

Can we do better than that? How about memoization? `is cached` is supposed to be part of Perl 6; neither implementation has it yet, but last year’s Advent calendar has a Rakudo implementation that still works. Using the last version changed to `sub collatz-length(Int $n) is cached {` works nicely, but takes 3.4 seconds to execute. Apparently the overhead of caching slows it down a bit. Interestingly, the non-ternary recursive version does speed up with `is cached`, from 4.4 seconds to 3.6 seconds.

```
sub collatz-length(Int $n) {
    return 1 if $n == 1;
    state %lengths;
    return %lengths{$n} if %lengths{$n}:exists;
    %lengths{$n} = 1 + ($n %% 2 ?? collatz-length($n div 2) !! collatz-length(3 * $n + 1));
}
```

Bingo! 2.7 seconds.
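For comparison, the same hand-memoization trick translates directly into other languages; here is a rough Python sketch (illustrative only, not from the original post), where `functools.lru_cache` plays the role of `is cached`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # memoize, much like Perl 6's `is cached` trait
def collatz_length(n):
    """Number of elements in the Collatz sequence starting at n."""
    if n == 1:
        return 1
    return 1 + collatz_length(n // 2 if n % 2 == 0 else 3 * n + 1)

print(collatz_length(27))  # the sequence starting at 27 has 112 elements
```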

I’m sure there are lots of other interesting approaches for solving this problem, and encourage people to send them in. In the meantime, here’s my summary of results so far:

```
Script                                         Rakudo  Niecza
bin/collatz-recursive-ternary-hand-cached.pl      2.5     1.7
bin/collatz-recursive-ternary.pl                  3       1.7
bin/collatz-loop.pl                               3.1     1.7
bin/collatz-recursive-ternary-cached.pl           3.2     N/A
bin/collatz-recursive-default-cached.pl           3.5     N/A
bin/collatz-recursive-default.pl                  4.4     1.8
bin/collatz-recursive.pl                          4.9     1.9
bin/collatz-sequence-ternary.pl                   9.9     3.3
bin/collatz-sequence-div.pl                      11.6     3.5
bin/collatz-sequence.pl                          13.5     3.8
```

The table was generated by `timing-table-generator.pl`.

## Day 20 – Dynamic variables and DSL-y things

December 20, 2012

## Post from the past: a motivating example

Two years ago I wrote a blog post about Nim, a game played with piles of stones. I just put in ASCII diagrams of the actual Nim stone piles, telling myself that if I had time, I would put in fancy SVG diagrams, generated with Perl 6.

Naturally, I didn’t have time. My self-imposed deadline ran out, and I published the post with simple ASCII diagrams.

But time is ever-regenerative, and there for people who want it. So, let’s generate some fancy SVG diagrams with Perl 6.

## Have bit array, want SVG

What do we need, exactly? Well, a subroutine that takes an array of piles as input and generates an SVG file would be a really good start.

Let’s take the last “image” in the post as an example:

```
1         O
7 OOOO OO O
5 OOOO    O
```

For the moment, let’s ignore the numbers at the left margin; they’re just counting stones. We summarize the piles themselves as a kind of bitmap, which also forms the input to the function:

```
my @piles =
    [0, 0, 0, 0, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 0, 1, 1, 0, 1],
    [1, 1, 1, 1, 0, 0, 0, 0, 1];

nim-svg(@piles);
```

At this point, we need only create the `nim-svg` function itself, and make it render SVG from this bitmap. Since I’ve long since tired of outputting SVG by hand, I use the SVG module, which comes bundled with Rakudo Star.

```
use SVG;

sub nim-svg(@piles) {
    my $width = max map *.elems, @piles;
    my $height = @piles.elems;

    my @elements = gather for @piles.kv -> $row, @pile {
        for @pile.kv -> $column, $is_filled {
            if $is_filled {
                take 'circle' => [
                    :cx($column + 0.5),
                    :cy($row + 0.5),
                    :r(0.4)
                ];
            }
        }
    }

    say SVG.serialize('svg' => [ :$width, :$height, @elements ]);
}
```

I think you can follow the logic in there. The subroutine simply iterates over the bitmap, turning 1s into circles with appropriate coordinates.

## That’s it?

Well, this will indeed generate an SVG image for us, with the stones correctly placed. But let’s look again at the input that helped create this image:

```
    [0, 0, 0, 0, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 0, 1, 1, 0, 1],
    [1, 1, 1, 1, 0, 0, 0, 0, 1];
```

Clearly, though we can discern the stones and gaps in there if we squint in a bit-aware programmer’s fashion, the input isn’t… visually attractive. (The zeroes even look like stones, even though they’re gaps!)

## We can do better

Instead of using a bit array, let’s start from the desired SVG image and try to make the input look like that.

So, this is what I would prefer to write instead of a bitmask:

```
nim {
    _ _ _ _ _ _ _ _ o;
    o o o o _ o o _ o;
    o o o o _ _ _ _ o;
}
```

That’s better. That looks more like my original ASCII diagram, while still being syntactic Perl 6 code.

## Making a DSL

Wikipedia talks about a DSL as a language “dedicated to a particular problem domain”. Well, the above way of specifying the input would be a DSL dedicated to solving the draw-SVG-images-of-Nim-positions domain. (Admittedly a fairly narrow domain. But I’m mostly out to show the potential of DSLs in Perl 6, not to change the world with this particular DSL.)

Now that we have the desired end state, how do we connect the wires and make the above work? Clearly we need to declare three subroutines: `nim`, `_`, `o`. (Yes, you can name a subroutine `_`, no sweat.)

```
sub nim(&block) {
    my @*piles;
    my @*current-pile;

    &block();
    finish-last-pile();

    nim-svg(@*piles);
}

sub _(@rest?) {
    unless @rest {
        finish-last-pile();
    }
    @*current-pile = 0, @rest;
    return @*current-pile;
}

sub o(@rest?) {
    unless @rest {
        finish-last-pile();
    }
    @*current-pile = 1, @rest;
    return @*current-pile;
}
```

A couple of things are going on here.

• The two variables `@*piles` and `@*current-pile` are dynamic variables, which means that they are visible not just in the current lexical scope, but also in all subroutines called before the current scope has finished: notably, the two subroutines `_` and `o`.
• The two subroutines `_` and `o` take an optional parameter. On each row, the rightmost `_` or `o` acts as a silent “start of pile” marker, taking the time to do a bit of bookkeeping with the piles, storing away the last pile and starting on a new one.
• Each row in the DSL-y input basically forms a chain of subroutine calls. We take this into account by both incrementally building the `@*current-pile` array at each step, all the while returning it as (possible) input for the next subroutine call in the chain.

And that’s it. Oh yeah, we need the bookkeeping routine `finish-last-pile`, too:

```
sub finish-last-pile() {
    if @*current-pile {
        push @*piles, [@*current-pile];
    }
    @*current-pile = ();
}
```

## So, it works?

Now, the whole thing works. We can turn this DSL-y input:

```
nim {
    _ _ _ _ _ _ _ _ o;
    o o o o _ o o _ o;
    o o o o _ _ _ _ o;
}
```

…into this SVG output:

```
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:svg="http://www.w3.org/2000/svg"
     width="9" height="3">

  <circle cx="8.5" cy="0.5" r="0.4" />
  <circle cx="0.5" cy="1.5" r="0.4" />
  <circle cx="1.5" cy="1.5" r="0.4" />
  <circle cx="2.5" cy="1.5" r="0.4" />
  <circle cx="3.5" cy="1.5" r="0.4" />
  <circle cx="5.5" cy="1.5" r="0.4" />
  <circle cx="6.5" cy="1.5" r="0.4" />
  <circle cx="8.5" cy="1.5" r="0.4" />
  <circle cx="0.5" cy="2.5" r="0.4" />
  <circle cx="1.5" cy="2.5" r="0.4" />
  <circle cx="2.5" cy="2.5" r="0.4" />
  <circle cx="3.5" cy="2.5" r="0.4" />
  <circle cx="8.5" cy="2.5" r="0.4" />
</svg>
```

Yay!

## Summary

The principles I used in this post are fairly easy to generalize. Start from your desired DSL, and create the subroutines to make it happen. Have dynamic variables handle the communication between separate subroutines.

DSLs are nice because they allow us to shape the code we’re writing around the problem we’re solving. Using relatively little “adapter code”, we’re left to focus on describing and solving problems in a natural way, making the programming language rise to our needs instead of lowering ourselves down to its needs.

## Day 19 – Gather and/or Coroutines

December 19, 2012

Today I’ll write about coroutines, gather-take and why they are as much fun as one another. But since it’s all about manipulating control flow, I took the liberty to reorganize the control flow of this advent post, so coroutines will finally appear somewhere at the end of it. In the meantime I’ll introduce the backstory, the problems that coroutines solved and how it looks from the Perl 6 kitchen.

LWP::Simple is all fun and games, but sometimes you can’t afford to wait for the result to come. It would make sense to say “fetch me this webpage and drop me a note when you’re done with it”. That’s non-trivial though; LWP::Simple is a black box which we tell “get() this, get() that”, and it gives us the result back. There is no possible way to intercept the data it sends back and forth. Or is there?

If you look at Perl 5’s AnyEvent::HTTP, you’ll see that it reimplemented the entire HTTP client to have it non-blocking. Let’s see if we can do better than that.

First thing, where does LWP::Simple actually block? Behind our backs it uses the built-in IO::Socket::INET class. When it wants data from it, it calls .read() or .recv() and patiently waits until they’re done. If only we could somehow make it not rely on those two directly, hmm…

„I know!”, a gemstone-fascinated person would say, „We can monkey-patch IO::Socket::INET”. And then we have two problems. No, we’ll go the other way, and follow the glorious path of Dependency Injection.

That sounds a bit scary. I’ve heard about as many definitions of Dependency Injection as there are people I know. The general idea is not to create objects inside other objects directly; it should be possible to supply them from the outside. I like to compare it to the elimination of „magic constants”. No one likes those; if you think of classes as another kind of magic constant that may appear in somebody else’s code, this is pretty much what it is about. In our case it looks like this:

```
# LWP::Simple make_request
my IO::Socket::INET $sock .= new(:$host, :$port);
```

There we go. “IO::Socket::INET” is the magic constant here; if you want to use a different thing, you’re doomed. Let’s mangle it for a bit and allow the socket class to come from the outside.

We’ll add an attribute to LWP::Simple; let’s call it `$!socketclass`:

```
has $.socketclass = IO::Socket::INET;
```

If we don’t supply any, it will just fall back to IO::Socket::INET, which is a sensible default. Then, instead of the previous .new() call, we do

```
my $sock = $!socketclass.new(:$host, :$port);
```

The actual patch (https://github.com/tadzik/perl6-lwp-simple/commit/93c182ac2) is a bit more complicated, as LWP::Simple supports calling get() not only on constructed objects but also on type objects, which have no attributes set; still, we only care about the part shown above. We have an attribute `$!socketclass`, which defaults to IO::Socket::INET, but we’re free to supply another class, that is, to dependency-inject it. Cool! So in the end it’ll look like this:

```
class Fakesocket is IO::Socket::INET {
    method recv($) {
        note 'We intercepted recv()';
        callsame;
    }

    method read($) {
        note 'We intercepted read()';
        callsame;
    }
}

# later
my $lwp = LWP::Simple.new(socketclass => Fakesocket);
```

And so our `$lwp` is a finely-crafted LWP::Simple which could, theoretically, give control flow back to us while it waits for read() and recv() to finish. So, how about we put theory into practice?

### Here start the actual coroutines, sorry for being late :)

What do we really need in our modified recv() and read()? We need a way to say „yeah, if you could just stop executing and give time to someone else, that would be great.” Oh no, but we have no threads! Luckily, we don’t need any. Remember lazy lists?

```
my @a := gather { for 1..* -> $n { take $n } }
```

So on one hand we run an infinite for loop, and on the other we have a way to say „give back what you’ve come up with, I’ll catch up with you later”. That’s what take() does: it temporarily jumps out of the gather block, and is ready to get back to it whenever you want it. Do I hear the sound of puzzles clicking together? That’s exactly what we need! Jump out of the execution flow and wait until we’re asked to continue.
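For readers coming from other languages: `gather`/`take` here behaves much like a generator function, e.g. Python’s `yield` (a loose analogy for intuition, not a claim about Rakudo’s implementation):

```python
def naturals():
    n = 1
    while True:
        yield n   # like take(): suspend here and hand back a value
        n += 1    # resumed only when the consumer asks for more

nums = naturals()
assert [next(nums) for _ in range(3)] == [1, 2, 3]
```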

```
class Fakesocket is IO::Socket::INET {
    method recv($) {
        take 1;
        callsame;
    }

    method read($) {
        take 1;
        callsame;
    }
}

# later
my @a := gather {
    $lwp.get("http://jigsaw.w3.org/HTTP/300/301.html");
    take "done";
}

# give time to LWP::Simple, piece by piece
while ~@a.shift ne "done" {
    say "The coroutine is still running"
}
say "Yay, done!";
```

There we go! We just turned LWP::Simple into a non-blocking beast, using almost no black magic at all! Ain’t that cool.

We now know enough to create some syntactic sugar around it all. Everyone likes sugar.

```
module Coroutines;

my @coroutines;
enum CoroStatus <still_going done>;

sub async(&coroutine) is export {
    @coroutines.push($(gather {
        &coroutine();
        take CoroStatus::done;
    }));
}

#= must be called from inside a coroutine
sub yield is export {
    take CoroStatus::still_going;
}

#= should be called from mainline code
sub schedule is export {
    return unless +@coroutines;
    my $r = @coroutines.shift;
    if $r.shift ~~ CoroStatus::still_going {
        @coroutines.push($r);
    }
}
```

We maintain a list of coroutines currently running. Our async() sub just puts a block of code in the execution queue. Then every call to yield() will make it jump back to the mainline code. schedule(), on the other hand, will pick the first available coroutine to be run and will give it some time to do whatever it wants.
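The same async/yield/schedule trio can be sketched with Python generators (an illustrative analogy of my own, with names mirroring the Perl 6 module above; `async_` has a trailing underscore because `async` is reserved in modern Python):

```python
from collections import deque

coroutines = deque()

def async_(gen_fn):
    """Queue a generator function as a coroutine."""
    coroutines.append(gen_fn())

def schedule():
    """Resume the next queued coroutine until its next yield."""
    if not coroutines:
        return
    coro = coroutines.popleft()
    try:
        next(coro)                # run up to the next yield
        coroutines.append(coro)   # still going: requeue it
    except StopIteration:
        pass                      # finished: drop it

log = []

def worker():
    log.append("step 1"); yield   # each yield plays the role of yield()
    log.append("step 2"); yield
    log.append("step 3")

async_(worker)
while coroutines:                 # mainline code driving the scheduler
    schedule()
```

After the loop, `log` holds the three steps in order, showing the round-robin hand-off between mainline code and the coroutine.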

Now, let us wait for the beginning of the post to catch up.

## Day 18 – Formulas: resistance is futile

December 18, 2012

Today, Perl turns 25: happy birthday, Perl! There’s too much to say about this language, its philosophy, its culture… So here, I will simply thank all the people who have made Perl a success for such a long time.

## Introduction

A formula is “an entity constructed using the symbols and formation rules of a given language”, according to Wikipedia as of this writing. These words sound really familiar to any Perl 6 user who has already played with grammars; however, grammars are not the purpose of this article. Instead, the aim is to demonstrate how the Perl 6 language can easily be extended in order to use formulas literally in the code.

There are many domains, like mathematics, physics, finance, etc., that use their own specific languages. When writing programs for such a domain, it can be less error-prone and simpler to use that specific language instead of a generic API. For example, someone with some knowledge of electronics may find the formula below:

`4.7kΩ ± 5%`

far more understandable than the following piece of code:

```
my $factory    = MeasureFactory.getSharedInstance();
my $resistance = $factory.createMeasure(value     => 4700,
                                        unit      => Unit::ohm,
                                        precision => 5);
```

The formula `4.7kΩ ± 5%` will be used all along this article as an example.

## Symbol `k`: return a modified value

Let’s start with the simplest symbol: `k`. Basically this is just a multiplier placed after a numeric value. To make the Perl 6 language support this new operator, there’s no need to know much about Perl 6’s guts: operators are just funny-looking subroutines:

`sub postfix:<k> ($a) is tighter(&infix:<*>) { $a * 1000 }`

This just makes `4.7k` return `4.7 * 1000`, for example. To be a little bit picky, this kind of multiplier should not be used without a unit (e.g. `Ω`) nor be coupled with another multiplier (e.g. `μ`), but handling that would have made this article a little bit more complex, so it is left as an exercise for the reader :) Regarding the `tighter` trait, it is already well explained in three other articles.

## Symbol `%`: return a closure

The next symbol is `%`: it is commonly used to compute a ratio of something, which is why `5%` shouldn’t naively be transformed into `0.05`. Instead, it creates a closure that computes the given percentage of whatever you want:

`sub postfix:<%> ($a) is tighter(&infix:<*>) { * * $a / 100 }`

It’s now possible to write `$f = 5%; $f(42)` or `5%(42)` directly, and this returns `2.1`. It is worth saying this doesn’t conflict with the `infix:<%>` operator (modulo), that is, `5 % 42` still returns `5`.
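The closure-returning trick translates readily into other languages; here is a rough Python sketch of the same idea (the `percent` helper is my own illustration, standing in for the `postfix:<%>` operator):

```python
def percent(a):
    """Like the postfix % above: return a closure computing a% of x."""
    return lambda x: x * a / 100

five_percent = percent(5)
assert five_percent(42) == 2.1   # 42 * 5 / 100
```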

## Symbol `Ω`: create a new `Measure` object

Let’s go on with the `Ω` symbol. One possibility is to tie the unit and the value together in the same object, as in the `Measure` class defined below. The `ACCEPTS` method is explained later, but the idea here is that two `Measure` objects with different units can’t match each other:

```
enum Unit <volt ampere ohm>;

class Measure {
    has Unit $.unit;
    has $.value;

    method ACCEPTS (Measure:D $a) {
        $!unit == $a.unit && $!value.ACCEPTS($a.value);
    }
}
```

Then, one operator per unit can be defined in order to hide the underlying API, that is, to allow `4.7kΩ` as an equivalent of `Measure.new(value => 4.7k, unit => ohm)`:

```
sub postfix:<V> (Real:D $a) is looser(&postfix:<k>) {
    Measure.new(value => $a, unit => volt)
}
sub postfix:<A> (Real:D $a) is looser(&postfix:<k>) {
    Measure.new(value => $a, unit => ampere)
}
sub postfix:<Ω> (Real:D $a) is looser(&postfix:<k>) {
    Measure.new(value => $a, unit => ohm)
}
```

Regarding the `ACCEPTS` method, it is used by `~~`, the smartmatch operator, to check if the left operand can match the right operand, the one with the `ACCEPTS` method. In other terms, `$a ~~ $b` is equivalent to `$b.ACCEPTS($a)`. Typically, this allows intuitive comparison between two different types, like scalars and containers for example.

In this example, the method is overloaded to ensure two `Measure` objects can match only if they have the same unit and their values match. That means `4kΩ ~~ 4.0kΩ` is `True` whereas `4kΩ ~~ 4kV` is `False`. Actually, there are many units that can mix together, typically currencies (¥€$) and the ones derived from the International System of Units. But as usual, when something is a little bit more complex, it is left as an exercise for the reader ;)

## Symbol `±`: create a `Range` object

There’s only one symbol left: `±`. In the example, it indicates the tolerance of the resistance. This tolerance can be either absolute (expressed in `Ω`) or relative (expressed in `%`), so the new `infix:<±>` operator has several signatures and has to be declared with the `multi` keyword. In both cases, the `value` is a new `Range` object with the right bounds:

```
multi sub infix:<±> (Measure:D $a, Measure:D $b) is looser(&postfix:<Ω>) {
    die if $a.unit != $b.unit;
    Measure.new(value => Range.new($a.value - $b.value,
                                   $a.value + $b.value),
                unit  => $a.unit);
}

multi sub infix:<±> (Measure:D $a, Callable:D $b) is looser(&postfix:<Ω>) {
    Measure.new(value => Range.new($a.value - $b($a.value),
                                   $a.value + $b($a.value)),
                unit  => $a.unit);
}
```

Actually, any `Callable` object could be used in the second variant, not only the closures created by the `%` operator.

So far, so good! It’s time to check in the Perl 6 REPL that everything works fine:

```
> 4.7kΩ ± 1kΩ
Measure.new(unit => Unit::ohm, value => 3700/1..5700/1)

> 4.7kΩ ± 5%
Measure.new(unit => Unit::ohm, value => 4465/1..4935/1)
```

It looks good, so all the code above ought to be moved into a dedicated module so it can be re-used at will. Then, a user could load it and literally write:

```
my $resistance = 4321Ω;
die "resistance is futile" if !($resistance ~~ 4.7kΩ ± 5%);
```

As of this writing, this works both in Niecza and Rakudo, the two most advanced implementations of Perl 6.

## Symbols that aren’t operators

Symbols in a formula are not always operators; they can be symbolic constants too, like π. In many languages, constants are just read-only variables, which sounds definitely weird: isn’t a variable supposed to be … variable? In Perl 6, a constant can be a read-only variable too (hmm) or a read-only term (which sounds better). For example, to define the constant term `φ`:

`constant φ = (1 + sqrt(5)) / 2;`

## Conclusion

In this article the Perl 6 language was slightly extended with several new symbols in order to embed simple formulas. It is possible to go further by changing the Perl 6 grammar in order to embed more specific languages, that is, languages that don’t share Perl 6’s grammar rules. Indeed, there are already two such languages supported by Perl 6: regexes and quotes. In the same way, Niecza uses a custom language to connect its portable parts to the unportable ones.

## Bonus: How to type these exotic symbols?

Most Unicode symbols can be typed in Xorg — the most used display system on Linux — thanks to the `Compose` key, also named the `Multi` key. When this special key is pressed, the following key-strokes are merged in order to compose a symbol.

There’s plenty of documentation about this elsewhere on the Internet, so only minimal information is provided here. First, to map the `Compose` key to the `Caps Lock` key, write in an X terminal:

`sh> setxkbmap -option compose:caps`

Some compositions are likely already defined; for instance `<caps>` followed by `+` then `-` should now produce `±`. But both `Ω` and `φ` are likely not defined. One solution is to write a `~/.XCompose` file with the following content:

```
include "%L" # Don't discard the current locale setting.

<Multi_key> <o> <h> <m>      : "Ω"  U03A9
<Multi_key> <O> <underscore> : "Ω"  U03A9
<Multi_key> <underscore> <O> : "Ω"  U03A9

<Multi_key> <p> <h> <y> : "φ"  U03C6
<Multi_key> <o> <bar>   : "φ"  U03C6
<Multi_key> <bar> <o>   : "φ"  U03C6
```

This takes effect for each newly started application. Feel free to leave a comment if you know how to add such support on other systems.

## Day 17 – Perl 6 from 30,000 feet

December 17, 2012

Many people have heard of Perl 6, especially in the greater Perl community.  However, Perl 6 has a complicated ecosystem which can be a little daunting, so as a newcomer to the Perl 6 community myself, I thought I would share what I’ve learned.

## How do I install Perl 6?

It’s simple; you can just download one of the existing implementations of the language (as Perl 6 is a specification), build it, and install it! There are several implementations out there right now, in various states of completion. Rakudo is an implementation that targets Parrot, and is the implementation that I will discuss most in this post. Niecza is another implementation that targets the CLR (the .NET runtime). For more information on these implementations and on other implementations, please see Perl 6 Compilers. Perl 6 is an ever-evolving language, and any compiler that passes the official test suite can be considered a Perl 6 implementation.

## You mentioned “Parrot”; what’s that?

Parrot is a virtual machine that is designed to run dynamically typed languages. Along with the virtual machine, it includes tools for generating virtual machine code from intermediate languages (named PIR and PASM), as well as a suite of tools to make writing compilers easier.

## What is Rakudo written in?

Rakudo itself is written primarily in Perl 6, with some bits of C for lower-level operations, like binding method arguments and adding additional opcodes to the Parrot VM. It may seem strange to implement a Perl 6 compiler in Perl 6 itself; to pull this off, Rakudo uses NQP to build itself.

## What’s NQP?

NQP (or Not Quite Perl 6) is an implementation of a subset of Perl 6 that is focused on creating compilers with the Parrot Compiler Toolkit. It currently targets Parrot, but in the future it may support other compilation targets, so you will be able to use Rakudo to compile your Perl 6 programs to Parrot opcodes, a JVM class file, or perhaps JavaScript so you can run them in the browser. NQP is written in NQP, and uses a pre-compiled version of NQP to compile itself.

I hope that this information was useful to you, dear reader, and that it helps to clarify the different pieces of the Perl 6 ecosystem. As I learn more about each piece, I intend to write blog posts that will hopefully help others to get started contributing to Perl 6!

-Rob

## Day 16 – Operator precedence

December 16, 2012

## All the precedence men

As I was taking a walk today, I realized one of the reasons why I like Perl. Five as well as six. I often hear praise such as “Perl fits the way I think”. And I have that feeling too sometimes.

If I were the president (or prime minister, as I’m Swedish), and had a bunch of advisers, maybe some of them would be yes-men, trying to give me advice that they think I will want to hear, instead of advice that would be useful to me. Some languages are like that, presenting us with an incomplete subset of the necessary tools. The Perl languages, if they were advisers, wouldn’t be yes-men. They’d give me an accurate view of the world, even if that view would be a bit messy and hairy sometimes.

Which, I guess, is why Perl five and six are so often used in handling messy data and turning it into something useful.

To give a few specific examples:

• Perl 5 takes quotes and quoting very seriously. Not just strings but lists of strings, too. (See the `qw` keyword.) Perl 6 does the same, but takes quoting further. See the recent post on quoting.
• jnthn shows in yesterday’s advent post that Perl 6 takes compiler phases seriously, and allows us to bundle together code that belongs together conceptually but not temporally. We need to do this because the world is gnarly and running a program happens in phases.
• Grammars in Perl 6 are not just powerful, but in some sense honest, too. They don’t oversimplify the task for the programmer, because then they would also limit the expressibility. Even though grammars are complicated and intricate, they should be, because they describe a process (parsing) that is complicated and intricate.

## Operators

Perl is known for its many operators. Some would describe it as an “operator-oriented” language. Where many other languages try to guess how you want your operators to behave on your values, or perhaps demand that you pre-declare all your types so that there’ll be no doubt, Perl 6 carries much of the typing information in its operators:

```
my $a = 5;
my $b = 6;

say $a + $b;      # 11 (numeric addition)
say $a * $b;      # 30 (numeric multiplication)

say $a ~ $b;      # "56" (string concatenation)
say $a x $b;      # "555555" (string repetition)

say $a || $b;     # 5 (boolean disjunction)
say $a && $b;     # 6 (boolean conjunction)
```

Other languages will want to bunch together some of these for us, using the `+` operator for both numeric addition and string concatenation, for example. Not so Perl. You’re meant to choose yourself, because the choice matters. In return, Perl will care a little less about the types of the operands, and just deliver the appropriate result for you.

“The appropriate result” is most often a number if you used a numeric operator, and a string if you used a string operator. But sometimes it’s more subtle than that. Note that the boolean operators above actually preserved the numbers 5 and 6 for us, even though internally it treated them both as true values. In C, if we do the same, C will unhelpfully “flatten” these results down to the value 1, its spelling of the value `true`. Perl knows that truthiness comes in many flavors, and retains the particular flavor for you.

## Operator precedence

“All operators are equal, but some operators are more equal than others.” It is when we combine operators that we realize that the operators have different “tightness”.

```say 2 * 3 + 1;      # 7, because (2 * 3) + 1
say 1 + 2 * 3;      # 7, because 1 + (2 * 3), not 9
```

We can always be 100% explicit and surround enough of our operations with parentheses… but when we don’t, the operators seem to order themselves in some order, which is not just simple left-to-right evaluation. This ordering between operators is what we refer to as “precedence”.

No doubt you were taught in math class in school that multiplications should be evaluated before additions in the way we see above. It’s as if factors group together closer than terms do. The fact that this difference in precedence is useful is backed up by centuries of algebra notation. Most programming languages, Perl 6 included, incorporate this into the language.
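And when the default grouping isn’t what we want, explicit parentheses always win over precedence:

```
say (1 + 2) * 3;    # 9, parentheses beat precedence
say 1 + (2 * 3);    # 7, same as the default grouping
```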

By the way, this difference in precedence is found between other pairs of operators, even outside the realm of mathematics:

```
        Additive (loose)    Multiplicative (tight)
        ================    ======================
number         +                      *
string         ~                      x
bool           ||                     &&
```

It turns out that they make as much sense for other types as they do for numbers. And group theory bears this out: these other operators can be seen as a kind of addition and multiplication, if we squint.
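Indeed, the tight/loose pairing holds for strings and booleans just as it does for numbers:

```
say "a" ~ "b" x 3;              # "abbb", because "a" ~ ("b" x 3)
say True || False && False;     # True, because True || (False && False)
```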

## Operator precedence parser

Deep in the bowels of the Perl 6 parser sits a smaller parser which is very good at parsing expressions. The bigger parser which parses your Perl 6 program is a really good recursive-descent parser. It works great for creating syntax trees out of the larger program structure. It works less well on the level of expressions. Essentially, what trips up a recursive-descent parser is that it always has to create AST nodes for all the possible precedence levels, whether they’re present or not.

So this smaller parser is an operator-table parser. It knows what to do with each type of operator (prefix, infix, postfix…), and kind of weaves all the terms and operators into a syntax tree. Only the precedence levels actually used show up in the tree.

The optable parser works by comparing each new operator to the top operator on a stack of operators. So when it sees an expression like this:

`\$x ** 2 + 3 * \$x - 5`

it will first compare `**` against `+` and decide that the former is tighter, and thus `\$x ** 2` should be put together into a small tree. Later, it compares `+` against `*`, and decides to turn `3 * \$x` into a small tree. It goes on like this, eventually ending up with this tree structure:

```infix:<->
+-- infix:<+>
|    +-- infix:<**>
|    |    +-- term:<\$x>
|    |    +-- term:<2>
|    +-- infix:<*>
|         +-- term:<3>
|         +-- term:<\$x>
+-- term:<5>```

Because leaf nodes are evaluated first and the root node last, this tree structure determines the order of evaluation for the expression. The order ends up being the same as if the expression had these parentheses:

`((\$x ** 2) + (3 * \$x)) - 5`

Which, again, is what we’ve learned to expect.

## Associativity

Another factor also governs how these invisible parentheses are to be distributed: operator associativity. It’s the concern of how the operator combines with multiple copies of itself, or other sufficiently similar operators on the same precedence level.

Some examples serve to explain the difference:

```\$x = \$y = \$z;     # becomes \$x = (\$y = \$z)
\$x / \$y / \$z;     # becomes (\$x / \$y) / \$z```

In both of these cases, we may look at the way the parentheses are doled out, and say “well, of course”. Of course we must first assign to `\$y` and only then to `\$x`. And of course we first divide by `\$y` and only then by `\$z`. So operators naturally have different associativity.

The optable parser compares not just the precedence of two operators but also, when needed, their associativity. And it puts the parentheses in the right place, just as above.
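Right associativity shows up in Perl 6’s exponentiation operator, for example:

```
say 2 ** 3 ** 2;      # 512, because 2 ** (3 ** 2)
say (2 ** 3) ** 2;    # 64
```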

## User-defined operators

Now we come back to Perl not being a yes-man, and working hard to give you the appropriate tools for the job.

Perl 6 allows you to define operators. See my post from last year on the details of how. But it also allows you to specify precedence and associativity of each new operator.

As you specify a new operator, a new Perl 6 parser is automatically constructed for you behind the scenes, which contains your new operator. In this sense, the optable parser is open and extensible. And Perl 6 gives you exactly the same tools for talking about precedence and associativity as the compiler itself uses internally.
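As a small sketch (the operator and its body are made up for illustration), a user-defined infix operator can declare where it sits in the precedence table with traits like `is tighter`:

```
sub infix:<mean>(\$a, \$b) is tighter(&infix:<+>) { (\$a + \$b) / 2 }

say 1 + 4 mean 6;    # 6, because 1 + (4 mean 6)
```

Associativity can be declared the same way, with an `is assoc` trait on the operator definition.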

Perl treats you like a grown-up, and expects you to make good decisions based on a thorough understanding of the problem space. I like that.

## Day 15 – Phasers set to stun

December 15, 2012

When writing programs, it’s important not only to separate the concerns that need separating, but also to try and keep related things close to each other. This gives the program a sense of cohesion, and helps to avoid the inevitable problems that arise when updating one part of a program demands an update in another far-away part. One especially tricky problem can be when the things we want to do are distributed over time. This can cause us to move related things apart in order to get them to happen at the times we want.

Phasers in Perl 6 help you keep related concepts together in your code, while also indicating that certain aspects of them should happen at different points during the lifetime of the current program, invocation or loop construct. Let’s take a look at some of them.

### ENTER and LEAVE

One of the things I had most fun writing in Perl 6 recently was the debugger. There are various things that need a little care. For example, the debugger needs to look out for exceptions and, when they are thrown, give the user a prompt to let them debug why the exception was thrown. However, there is also a feature where, at the prompt, you can evaluate an expression. The debugger shouldn’t re-enter itself if this expression throws, so we need to keep track of whether we’re already showing the prompt. This meant setting and clearing a flag. Thing is, the prompt method is relatively lengthy; it has a given/when to identify the various different commands. I could, of course, have set the prompt flag at the start and cleared it at the end. But that would have spread out the concern of maintaining the flag. Here’s what I did instead:

```method issue_prompt(\$ctx, \$cur_file) {
    ENTER \$in_prompt = True;
    LEAVE \$in_prompt = False;

    # Lots of stuff here
}```

This ensures the flag is set when we enter the method, cleared when we leave the method – and lets me keep the two together.
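ENTER can also be used as an r-value, which gives a neat one-line way to time a routine (a sketch; `do-work` is a made-up name):

```
sub do-work() {
    LEAVE say "do-work took { now - ENTER now } seconds";
    # ... the actual work goes here ...
}
```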

### INIT and END

We’re writing a small utility and want to log what happens as we run it. Time wise, we want to:

• Open the log file at the start of the program, creating it if needed and overwriting an existing one otherwise
• Write log entries at various points during the program’s execution
• Close the log file at the end

Those three actions are fairly spread out in time, but we’d like to collect them together. This time, the INIT and END phasers come to the rescue.

```sub log(\$msg) {
    my \$fh = INIT open("logfile", :w);
    \$fh.say(\$msg);
    END \$fh.close;
}```

Here, we use INIT to perform an action at program start time. It turns out that INIT also keeps around the value produced by the expression following it, meaning it can be used as an r-value. This means we have the file handle available to us, and can write to it during the program. Then, at the END of the program, we close the file handle. All of these have block forms, should you wish to do something more involved:

```sub log(\$msg) {
    my \$fh = INIT open("logfile", :w);
    \$fh.say(\$msg);
    END {
        \$fh.say("Ran in {now - INIT now} seconds");
        \$fh.close;
    }
}```

Note the second use of INIT in this example, to compute and remember the program start time so we can use it in the subtraction later on.

### FIRST, NEXT and LAST

These phasers work with loops. They fire the first time the loop body executes, at the end of every loop body execution, and after the last loop body execution. FIRST and LAST are especially powerful in so far as they let us move code that wants to special-case the first and last time the loop body runs inside of the loop construct itself. This makes the relationship between these bits of code and the loop especially clear, and lessens the chance somebody moves or copies the loop and forgets the related bits it has.

As an example, let’s imagine we are rendering a table of scores from a game. We want to write a header row, and also do a little ASCII art to denote the start and end of the table. Furthermore, we’d like to keep track of the best score each time around the loop, and then at the end print out the best score. Here’s how we could write it.

```for %scores.kv -> \$player, \$score {
    FIRST say "Score\tPlayer";
    FIRST say "-----\t------";
    LAST  say "-----\t------";

    NEXT (state \$best_score) max= \$score;
    LAST say "BEST SCORE: \$best_score";

    say "\$score\t\$player";
}```

Notice how we keep the header/footer code together, as well as being able to keep the best score tracking code together. It’s also all inside the loop, making its relationship to the loop clear. Note how the state variable also comes in useful here. It too is a construct that lets us keep a variable scoped inside a block even if its usage spans multiple invocations of the block.

### KEEP and UNDO

These are variants of LEAVE that trigger conditional on the block being successful (KEEP) or not (UNDO). A successful block completes without unhandled exceptions and returns a defined value. An unsuccessful block exits due to an exception or because it returns an undefined value. Say we were processing a bunch of files and want to build up arrays of successful files and failed files. We could write something like:

```sub process(\$file) {
    KEEP push @success, \$file;
    UNDO push @failure, \$file;

    my \$fh = open(\$file);
    # ...
}```

There are probably a bunch of transaction-like constructs that can also be very neatly implemented with these two.
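As a hypothetical example of such a construct (`@log` and `validate` are invented for illustration), UNDO can roll back a change made earlier in the block:

```
sub add-entry(\$entry) {
    push @log, \$entry;
    UNDO @log.pop;    # undo the push if the block fails

    die "invalid entry" unless validate(\$entry);
    \$entry;           # a defined return value counts as success
}
```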

### And there’s more!

While I’ve covered a bunch of the phasers here, there are some others. For example, there’s also BEGIN, which lets you do some computation at compile time. Hopefully, though, this set of examples gives you some inspiration in how phasers can be used effectively, as well as a better grasp of the motivation for them. Bringing related things together and setting unrelated things apart is something we need to think carefully about every day as developers, and phasers help us keep related concerns together, even if they should take place at different phases of our program’s execution.

## Day 14 – Primal Needs

December 14, 2012

Our brains are hard-wired to look for patterns, even where none exist. So, it’s no surprise that as soon as mankind started counting things, it started looking for patterns in numbers. One group of numbers that has resisted the pattern-matching capabilities of the human brain is the so-called “prime numbers”. These are numbers that can only be evenly divided by 1 or themselves: they have no other factors.

But you knew that already, so why am I talking about prime numbers instead of Perl 6? Because, just like our ancestors, the people that created Perl 6 and continue to shape it to be around for the next 100 years or more find prime numbers interesting. So interesting, in fact, that the language specification was modified to include a routine for determining whether or not a number is prime.

## Alpha

At first, implementations of this prime-number finder were pure Perl 6 and took advantage of other features of the language such as ranges and junctions. An example implementation is shown below:

`    sub is-prime(\$n) { \$n %% none 2..sqrt \$n }`

This implementation checks that none of the numbers from 2 to the square root of `\$n` will evenly divide `\$n`. If that is the case, then the number is prime.

While the above implementation works fine, it is a little slow and it does suffer a little redundancy in the numbers it checks. For instance, if you know a number isn’t evenly divisible by 2, there’s no need to check if it’s evenly divisible by 4, yet the above algorithm does so anyway.

## Beta

An improvement on the algorithm is to only check whether the primes between 2 and the square root of the number evenly divide the number. But … but … that’s like defining a word in terms of itself. Thanks to ubiquitous lazy evaluation in Perl 6, that’s entirely possible. Here’s an implementation:

```    my @primes := 2, 3, 5, -> \$p { (\$p+2, \$p+4 ... &is-prime)[*-1] } ... *;
    sub is-prime(\$n) { \$n %% none @primes ...^ * > sqrt \$n }```

The array `@primes` is an infinite, lazily evaluated sequence of numbers starting with 2, 3, and 5. The next number in the sequence is generated by creating a new sequence of odd numbers that starts from the last odd number and continues until we reach a prime. That prime is the next number in the sequence. But how do we know if it’s a prime? We check with our handy `is-prime` subroutine, which uses the lazy list of primes up to the square root of the number we’re testing to see if any of them are factors.

There’s a kind of mutual recursion going on here where the `@primes` array effectively memoizes the primes we’ve seen so far. But … then there’s the problem that `@primes` will continue to grow as you check bigger and bigger numbers for prime-ness. Can we do better?
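With those two definitions in scope we can simply index into the lazy list; only as many primes as needed are ever computed:

```
say @primes[^10].join(" ");    # 2 3 5 7 11 13 17 19 23 29
```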

Indeed we can.

## Gamma: Rabin-Miller test

Well … maybe we can. It depends on your idea of “better”. The Rabin-Miller primality test is probabilistic in nature. It doesn’t require storing an ever increasing cache of prime numbers to test if they are factors of the potential prime, but there is a chance that it will tell you that a number is prime when it actually isn’t. The good news is that we can adjust the odds so that we are reasonably confident that the number is prime. Here’s an implementation (taken from http://rosettacode.org/wiki/Miller-Rabin_primality_test#Perl_6):

```sub expmod(Int \$a is copy, Int \$b is copy, \$n) {
    my \$c = 1;
    repeat while \$b div= 2 {
        (\$c *= \$a) %= \$n if \$b % 2;
        (\$a *= \$a) %= \$n;
    }
    \$c;
}

subset PrimeCandidate of Int where { \$_ > 2 and \$_ % 2 };

my Bool multi sub is-prime(Int \$n, Int \$k)            { return False; }
my Bool multi sub is-prime(2, Int \$k)                 { return True; }
my Bool multi sub is-prime(PrimeCandidate \$n, Int \$k) {
    my Int \$d = \$n - 1;
    my Int \$s = 0;

    while \$d %% 2 {
        \$d div= 2;
        \$s++;
    }

    for (2 ..^ \$n).pick(\$k) -> \$a {
        my \$x = expmod(\$a, \$d, \$n);

        next if \$x == 1 or \$x == \$n - 1;

        for 1 ..^ \$s {
            \$x = \$x ** 2 mod \$n;
            return False if \$x == 1;
            last if \$x == \$n - 1;
        }
        return False if \$x !== \$n - 1;
    }

    return True;
}```

The third multi variant of `is-prime` with the signature `(PrimeCandidate \$n, Int \$k)` is where all of the magic happens. This multi is only triggered when the prime candidate (`\$n`) is an odd number because of the definition of the `PrimeCandidate` type.

First, we factor out the powers of 2 from `\$n - 1`. Since `\$n` is an odd number, `\$n - 1` is even and so has at least one factor of 2. What we end up with is an odd number and some power-of-2 factors of `\$n - 1`. We then use those factors to see if a random sample of `\$k` numbers less than `\$n` are congruent to the square roots of unity modulo `\$n` (`expmod` handles the modular exponentiation). We repeat this for all of the powers of 2 we factored out of the original number. Fermat’s little theorem says that if we find any number where the congruence does not hold, then the number can not be prime.

The probability that this method will select a composite number as prime is based on how many numbers less than `\$n` we choose to sample. If we select `\$k` numbers to try, the probability is `4 ** -\$k`. By choosing to sample more numbers, we can quickly decrease the odds of a false positive to a negligible amount.
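The falloff in `4 ** -\$k` is rapid; even a modest sample size makes a false positive vanishingly unlikely:

```
for 1, 5, 10, 20 -> \$k {
    say "\$k samples: error probability { 4e0 ** -\$k }";
}
```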

## Wrap up

But … most people don’t really have to worry about the implementation details of `is-prime`. Not only have `is-prime` and `expmod` been added to the Perl 6 specification, but actual implementations (à la Rabin-Miller) have been added to the Rakudo and Niecza Perl 6 compilers. So, if you want to test your new cryptographic algorithm and need some large prime numbers, or if you’re developing a new random number generator and need some candidates for the modulus, or maybe you’re developing a new hashing algorithm, Perl 6 has a built-in `is-prime` that can help.

## Day 13 – Bags and Sets

December 13, 2012 by

Over the years, I’ve written many variations on this code:

```my %words;
for slurp.comb(/\w+/).map(*.lc) -> \$word {
    %words{\$word}++;
}
```

(Aside: `slurp.comb(/\w+/).map(*.lc)` does the standard Perl trick of reading files specified on the command line or standard in, goes through the data for words, and makes them lowercase.)

Perl 6 introduces two new Associative types for dealing with this sort of functionality. `KeyBag` is a drop-in replacement for `Hash` in this sort of case:

```my %words := KeyBag.new;
for slurp.comb(/\w+/).map(*.lc) -> \$word {
    %words{\$word}++;
}
```

Why would you prefer `KeyBag` over `Hash` in this case, considering that it’s a bit more code? Well, it does a better job of saying what you mean, if what you want is a positive `Int`-valued `Hash`. It actually enforces this as well:

```> %words{"the"} = "green";
Unhandled exception: Cannot parse number: green
```

That’s Niecza’s error; Rakudo’s is less clear, but the important point is you get an error; Perl 6 detects that you’ve violated your contract and complains.

And `KeyBag` has a couple more tricks up its sleeve. First, four lines to initialize your `KeyBag` isn’t terribly verbose, but Perl 6 has no trouble getting it down to one line:

```my %words := KeyBag.new(slurp.comb(/\w+/).map(*.lc));
```

`KeyBag.new` does its best to turn whatever it is given into the contents of a `KeyBag`. Given a `List`, each of the elements is added to the `KeyBag`, with the exact same result of our earlier block of code.

If you don’t need to modify the bag after its creation, then you can use `Bag` instead of `KeyBag`. The difference is `Bag` is immutable; if `%words` is a `Bag`, then `%words{\$word}++` is illegal. If immutability is okay for your application, then you can make the code even more compact:

```my %words := bag slurp.comb(/\w+/).map(*.lc);
```

`bag` is a helper sub that just calls `Bag.new` on whatever you give it. (I’m not sure why there is no equivalent `keybag` sub.)

`Bag` and `KeyBag` have a couple more tricks up their sleeve. They have their own versions of `.roll` and `.pick` which weigh their results according to the given values:

```> my \$bag = bag "red" => 2, "blue" => 10;
> say \$bag.roll(10);
> say \$bag.pick(*).join(" ");
blue blue blue blue blue blue red blue red blue
blue red blue blue red blue blue blue blue blue blue blue
```

This wouldn’t be too hard to emulate using a normal `Array`, but this version would be:

```> \$bag = bag "red" => 20000000000000000001, "blue" => 100000000000000000000;
> say \$bag.roll(10);
> say \$bag.pick(10).join(" ");
blue blue blue blue red blue red blue blue blue
blue blue blue red blue blue blue red blue blue
```

They also work with all the standard `Set` operators, and have a few of their own as well. Here’s a simple demonstration:

```sub MAIN(\$file1, \$file2) {
    my \$words1 = bag slurp(\$file1).comb(/\w+/).map(*.lc);
    my \$words2 = set slurp(\$file2).comb(/\w+/).map(*.lc);
    my \$unique = (\$words1 (-) \$words2);
    for \$unique.list.sort({ -\$words1{\$_} })[^10] -> \$word {
        say "\$word: { \$words1{\$word} }";
    }
}
```

Passed two filenames, this makes a `Bag` from the words in the first file, a `Set` from the words in the second file, uses the set difference operator `(-)` to compute the set of words which are only in the first file, sorts those words by their frequency of appearance, and then prints out the top ten.

This is the perfect point to introduce `Set`. As you might guess from the above, it works much like `Bag`. Where `Bag` is a `Hash` from `Any` to positive `Int`, `Set` is a `Hash` from `Any` to `Bool::True`. `Set` is immutable, and there is also a mutable `KeySet`.

Between `Set` and `Bag` we have a very rich collection of operators:

```
Operation                   Unicode    "Texas"    Result Type
========================    =======    =======    ===========
is an element of            ∈          (elem)     Bool
is not an element of        ∉          !(elem)    Bool
contains                    ∋          (cont)     Bool
does not contain            ∌          !(cont)    Bool
union                       ∪          (|)        Set or Bag
intersection                ∩          (&)        Set or Bag
set difference                         (-)        Set
set symmetric difference               (^)        Set
subset                      ⊆          (<=)       Bool
not a subset                ⊈          !(<=)      Bool
proper subset               ⊂          (<)        Bool
not a proper subset         ⊄          !(<)       Bool
superset                    ⊇          (>=)       Bool
not a superset              ⊉          !(>=)      Bool
proper superset             ⊃          (>)        Bool
not a proper superset       ⊅          !(>)       Bool
bag multiplication          ⊍          (.)        Bag
bag addition                ⊎          (+)        Bag
```

Most of these are self-explanatory. Operators that return `Set` promote their arguments to `Set` before doing the operation. Operators that return `Bag` promote their arguments to `Bag` before doing the operation. Operators that return `Set` or `Bag` promote their arguments to `Bag` if at least one of them is a `Bag` or `KeyBag`, and to `Set` otherwise; in either case they return the type promoted to.
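So, following the promotion rules, mixing a `Set` and a `Bag` yields a `Bag`; union takes the larger quantity on each side:

```
my \$s = set <a b>;
my \$b = bag <a a b>;
say \$s (|) \$b;    # bag("a" => 2, "b" => 1)
```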

Please note that while the set operators have been in Niecza for some time, they were only added to Rakudo yesterday, and only in the Texas variations.

A bit of a word may be needed for the different varieties of unions and intersections of `Bag`. The normal union operator takes the max of the quantities in either bag. The intersection operator takes the min of the quantities in either bag. Bag addition adds the quantities from either bag. Bag multiplication multiplies the quantities from either bag. (There is some question if the last operation is actually useful for anything — if you know of a use for it, please let us know!)

```> my \$a = bag <a a a b b c>;
> my \$b = bag <a b b b>;

> \$a (|) \$b;
bag("a" => 3, "b" => 3, "c" => 1)

> \$a (&) \$b;
bag("a" => 1, "b" => 2)

> \$a (+) \$b;
bag("a" => 4, "b" => 5, "c" => 1)

> \$a (.) \$b;
bag("a" => 3, "b" => 6)
```

I’ve placed my full set of examples for this article and several data files to play with on Github. All the sample files should work on the very latest Rakudo from Github; I think all but `most-common-unique.pl` and `bag-union-demo.pl` should work with the latest proper Rakudo releases. Meanwhile those two scripts will work on Niecza, and with any luck I’ll have the bug stopping the rest of the scripts from working there fixed in the next few hours.

```> perl6 bin/most-common-unique.pl data/Hamlet.txt data/Much_Ado_About_Nothing.txt
ham: 358
queen: 119
hamlet: 118
hor: 111
pol: 86
laer: 62
oph: 58
ros: 53
horatio: 48
clown: 47
```

## Day 12 – Exceptions

December 12, 2012 by

Sometimes things go horribly wrong, and the only thing you can do is not to go on. Then you throw an exception.

But of course the story doesn’t end there. The caller (or the caller’s caller) must somehow deal with the exception. To do that in a sensible manner, the caller needs to have as much information as possible.

In Perl 6, exceptions should inherit from the type `Exception`, and by convention they go into the `X::` namespace.

So for example if you write a HTTP client library, and you decide that an exception should be thrown when the server returns a status code starting with 4 or 5, you could declare your exception class as

```class X::HTTP is Exception {
    has \$.request-method;
    has \$.url;
    has \$.status;
    has \$.error-string;

    method message() {
        "Error during \$.request-method request"
            ~ " to \$.url: \$.status \$.error-string";
    }
}```

And throw an exception as

```die X::HTTP.new(
    request-method  => 'GET',
    url             => 'http://example.com/no-such-file',
    status          => 404,
    error-string    => 'Not found',
);```

The error message then looks like this:

```Error during GET request to
http://example.com/no-such-file: 404 Not found```

(line wrapped for the benefit of small browser windows).

If the exception is not caught, the program aborts and prints the error message, as well as a backtrace.

There are two ways to catch exceptions. The simple Pokemon style “gotta catch ‘em all” method catches exceptions of any type with `try`:

```my \$result = try do-operation-that-might-die();
if (\$!) {
    note "There was an error: \$!";
    note "But I'm going to go on anyway";
}```

Or you can selectively catch some exception types and handle only them, and rethrow all other exceptions to the caller:

```my \$result = do-operation-that-might-die();
CATCH {
    when X::HTTP {
        note "Got an HTTP error for URL \$_.url()";
        # do some proper error handling
    }
    # exceptions not of type X::HTTP are rethrown
}```

Note that the CATCH block is inside the same scope as the one where the error might occur, so that by default you have access to all the interesting variables from that scope, which makes it easy to generate better error messages.

Inside the CATCH block, the exception is available as `\$_`, and is matched against all `when` blocks.

Even if you don’t need to selectively catch your exceptions, it still makes sense to declare specific classes, because that makes it very easy to write tests that check for proper error reporting. You can check the type and the payload of the exceptions, without having to resort to checking the exact error message (which is always brittle).
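For instance, a test can smart-match the caught exception against the type and inspect its payload fields (a sketch; `do-request` is a made-up routine):

```
try do-request('http://example.com/no-such-file');
if \$! ~~ X::HTTP {
    say \$!.status;    # 404, no message parsing needed
}
```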

But Perl 6 being Perl, it doesn’t force you to write your own exception types. If you pass a non-`Exception` object to `die()`, it simply wraps it in an object of type `X::AdHoc` (which in turn inherits from `Exception`), and makes the argument available with the `payload` method:

```sub I-am-fatal() {
    die "Neat error message";
}
try I-am-fatal();
say \$!;             # Neat error message
```