Bonus Xmas – Concurrent HTTP Server implementation and the scripter’s approach

First of all, I want to highlight Jonathan Worthington‘s work with Rakudo Perl6 and IO::Socket::Async. Thanks Jon!

***

I like to make scripts; write well-organized sequences of actions, get results and do things with them.

When I began with Perl6 I discovered a spectacular ecosystem, where I could put my ideas into practice in the way that I like: in script form. One of these ideas was to implement a small HTTP server to play with. Looking at other projects and modules related to Perl6, HTTP and sockets, I discovered that the authors behind them were programmers with deep experience in Object-Oriented programming.

Continue reading “Bonus Xmas – Concurrent HTTP Server implementation and the scripter’s approach”

Day 24 – Solving a Rubik’s Cube

Intro

I have a speed cube on my wish list for Christmas, and I'm really excited about it. :) I wanted to share that enthusiasm with some Perl 6 code.

I graduated from high school in '89, so I'm just the right age to have had a Rubik's cube through my formative teen years. I remember trying to show off on the bus and getting my time down to just under a minute. I got a booklet from a local toy store back in the 80s that showed an algorithm on how to solve the cube, which I memorized. I don't have the booklet anymore. I've kept at it over the years, but never at a competitive level.

In the past few months, YouTube has suggested a few cube videos to me based on my interest in the standupmaths channel; seeing the world record come in under 5 seconds makes my old time of a minute seem ridiculously slow.

Everyone I've spoken to who can solve the cube has been using a different algorithm than I learned, and the one discussed on standupmaths is yet a different one. The advanced version of this one seems to be commonly used by those who are regularly setting world records, though.

Picking up this algorithm was not too hard; I found several videos, especially one describing how to solve the last layer. After doing this for a few days, I transcribed the steps to a few notes showing the list of steps, and the crucial parts for each step: desired orientation, followed by the individual turns for that step. I was then able to refer to a single page of my notebook instead of a 30-minute video, and after a few more days, had memorized the steps: being able to go from the notation to just doing the moves is a big speed up.

After a week, I was able to solve it reliably using the new method in under two minutes; a step back, but not bad for a week's effort in my off hours. Since then (a few weeks now), I've gotten down to under 1:20 pretty consistently. Again, this is the beginner method, without any advanced techniques, and I'm at the point where I can do the individual algorithm steps without looking at the cube. (I still have a long way to go to be competitive though.)

Notation

A quick note about the notation for moves – given that you're holding the cube with one side on top and one side facing you, the relative sides are:

L (Left) R (Right) U (Up) D (Down) F (Front) B (Back)

If you see a lone letter in the steps, like B, that means to turn that face clockwise (relative to the center of the cube, not you). If you add a ʼ to the letter, that means counterclockwise, so an Rʼ would have the top piece coming down, while an R would have the bottom piece coming up.

Additionally, you might have to turn a slice twice, which is written as U2. (It doesn't matter whether that's clockwise or not, since it's 180º from the starting point.)

Algorithm

The beginner's algorithm I'm working with has the following basic steps:

1. White cross
2. White corners
3. Second layer
4. Yellow cross
5. Yellow edges
6. Yellow corners
7. Orient yellow corners

If you're curious as to what the individual steps are in each, you'll be able to dig through the Rubik's wiki or the YouTube video linked above. More advanced versions of this algorithm (CFOP by Jessica Fridrich) allow you to combine steps, have specific "shortcuts" to deal with certain cube states, or solve any color as the first side, not just white.

Designing a Module

As I began working on the module, I knew I wanted to get to a point where I could show the required positions for each step in a way that was natural to someone familiar with the algorithm, and to have the individual steps also be natural, something like:

 F.R.U.Rʼ.Uʼ.Fʼ 

I also wanted to be able to dump the existing state of the cube; for now as text, but eventually being able to tie it into a visual representation as well.

We need to be able to tell if the cube is solved; we need to be able to inspect pieces relative to the current orientation, and be able to change our orientation.

Since I was going to start with the ability to render the state of the cube, and then quickly add the ability to turn sides, I picked an internal structure that made that fairly easy.

The Code

The latest version of the module is available on github. The code presented here is from the initial version.

Perl 6 lets you create Enumerations so you can use actual words in your code instead of lookup values, so let's start with some we'll need:

enum Side «:Up('U') :Down('D') :Front('F') :Back('B') :Left('L') :Right('R')»;
enum Colors «:Red('R') :Green('G') :Blue('B') :Yellow('Y') :White('W') :Orange('O')»;

With this syntax, we can use Up directly in our code, and its associated value is U.
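A quick check of how these enums behave (re-declaring Side so the snippet stands alone):

```raku
enum Side «:Up('U') :Down('D') :Front('F') :Back('B') :Left('L') :Right('R')»;

say Up;        # prints "Up"
say Up.value;  # prints "U"

# Enum values also work as hash keys, which the class below relies on:
my %h;
%h{Up} = 'white';
say %h{Up};    # prints "white"
```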

We want a class so we can store attributes and have methods, so our class definition has:

class Cube::Three {
    has %!Sides;
    ...
    submethod BUILD() {
        %!Sides{Up} = [White xx 9];
        %!Sides{Front} = [Red xx 9];
        ...
    }
}

We have a single attribute, a Hash named %!Sides; each key corresponds to one of the Side enums. The value is a 9-element array of Colors. Each element of the array corresponds to a position on the cube. With white on top and red in front as the default, the colors and cell positions are shown here with the numbers & colors. (White is Up, Red is Front)

         W0 W1 W2
         W3 W4 W5
         W6 W7 W8
G2 G5 G8 R2 R5 R8 B2 B5 B8 O2 O5 O8
G1 G4 G7 R1 R4 R7 B1 B4 B7 O1 O4 O7
G0 G3 G6 R0 R3 R6 B0 B3 B6 O0 O3 O6
         Y0 Y1 Y2
         Y3 Y4 Y5
         Y6 Y7 Y8

The first methods I added were to do clockwise turns of each face.

method F {
    self!rotate-clockwise(Front);
    self!fixup-sides([
        Pair.new(Up,    [6,7,8]),
        Pair.new(Right, [2,1,0]),
        Pair.new(Down,  [2,1,0]),
        Pair.new(Left,  [6,7,8]),
    ]);
    self;
}

This public method calls two private methods (denoted with the !); one rotates a single Side clockwise, and the second takes a list of Pairs, where the key is a Side, and the value is a list of positions. If you imagine rotating the top of the cube clockwise, you can see that the positions are being swapped from one to the next.
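The module's actual fixup-sides isn't shown here, but its job can be sketched: walk the list of Pairs and cycle each side's listed stickers on to the next side in the list. Everything below (class name, helper methods) is illustrative, not the module's source:

```raku
class Demo {
    has %!Sides;
    submethod BUILD() {
        # Fill each demo side with nine copies of its own letter
        %!Sides{$_} = [$_ xx 9] for <U R D L>;
    }
    method !fixup-sides(@pairs) {
        # Save the stickers of the last side, then let each side take
        # the stickers of the side that precedes it in the list.
        my @prev = %!Sides{@pairs[*-1].key}[@pairs[*-1].value.list];
        for @pairs -> $p {
            my @tmp = %!Sides{$p.key}[$p.value.list];
            %!Sides{$p.key}[$p.value.list] = @prev;
            @prev = @tmp;
        }
    }
    method demo-turn {
        self!fixup-sides([U => [0,1,2], R => [0,1,2], D => [0,1,2], L => [0,1,2]]);
        self;
    }
    method row($side) { %!Sides{$side}[0,1,2].join }
}

my $c = Demo.new.demo-turn;
say $c.row('R');  # UUU, Right took Up's stickers
say $c.row('U');  # LLL, Up took Left's stickers
```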

Note that we return self from the method; this allows us to chain the method calls as we wanted in the original design.

The clockwise rotation of a single side shows a raw Side being passed, and uses array slicing to change the order of the pieces in place.

# 0 1 2      6 3 0
# 3 4 5  ->  7 4 1
# 6 7 8      8 5 2
method !rotate-clockwise(Side \side) {
    %!Sides{side}[0,1,2,3,5,6,7,8] = %!Sides{side}[6,3,0,7,1,8,5,2];
}

To add the rest of the notation for the moves, we add some simple wrapper methods:

method F2 { self.F.F; }
method Fʼ { self.F.F.F; }

F2 just calls the move twice; Fʼ cheats: 3 rights make a left.

At this point, I had to make sure that my turns were doing what they were supposed to, so I added a gist method (which is called when an object is output with say).

say Cube::Three.new.U2.D2.F2.B2.R2.L2;
      W Y W
      Y W Y
      W Y W
G B G R O R B G B O R O
B G B O R O G B G R O R
G B G R O R B G B O R O
      Y W Y
      W Y W
      Y W Y

The source for the gist is:

method gist {
    my $result;
    $result = %!Sides{Up}.rotor(3).join("\n").indent(6);
    $result ~= "\n";
    for 2,1,0 -> $row {
        for (Left, Front, Right, Back) -> $side {
            my @slice = (0,3,6) >>+>> $row;
            $result ~= ~%!Sides{$side}[@slice].join(' ') ~ ' ';
        }
        $result ~= "\n";
    }
    $result ~= %!Sides{Down}.rotor(3).join("\n").indent(6);
    $result;
}

A few things to note:

  • use of .rotor(3) to break up the 9-cell array into three 3-element lists.

  • .indent(6) to prepend whitespace on the Up and Down sides.

  • (0,3,6) >>+>> $row, which increments each value in the list by $row.
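The last of these can be seen in isolation; the base indices (0,3,6) pick a column, and the hyperop shifts them by the row number:

```raku
say (0,3,6) >>+>> 0;  # (0 3 6)
say (0,3,6) >>+>> 1;  # (1 4 7)
say (0,3,6) >>+>> 2;  # (2 5 8)
```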

The gist is great for stepwise inspection, but for debugging, we need something a little more compact:

method dump {
    gather for (Up, Front, Right, Back, Left, Down) -> $side {
        take %!Sides{$side}.join('');
    }.join('|');
}

This iterates over the sides in a specific order, and then uses the gather take syntax to collect string representations of each side, then joining them all together with a |. Now we can write tests like:

use Test;
use Cube::Three;
my $a = Cube::Three.new();
is $a.R.U2.Rʼ.Uʼ.R.Uʼ.Rʼ.Lʼ.U2.L.U.Lʼ.U.L.dump,
    'WWBWWWWWB|RRRRRRRRW|BBRBBBBBO|OOWOOOOOO|GGGGGGGGG|YYYYYYYYY',
    'corners rotation';

This is actually the method used in the final step of the algorithm. With this debug output, I can take a pristine cube, do the moves myself, and then quickly transcribe the resulting cube state into a string for testing.

While the computer doesn't necessarily need to rotate the cube, it will make it easier to follow the algorithm directly if we can rotate the cube, so we add one for each of the six possible turns, e.g.:

method rotate-F-U {
    self!rotate-clockwise(Right);
    self!rotate-counter-clockwise(Left);
    # In addition to moving the side data, have to
    # re-orient the indices to match the new side.
    my $temp = %!Sides{Up};
    %!Sides{Up} = %!Sides{Front};
    self!rotate-counter-clockwise(Up);
    %!Sides{Front} = %!Sides{Down};
    self!rotate-clockwise(Front);
    %!Sides{Down} = %!Sides{Back};
    self!rotate-clockwise(Down);
    %!Sides{Back} = $temp;
    self!rotate-counter-clockwise(Back);
    self;
}

As we turn the cube from Front to Up, we rotate the Left and Right sides in place. Because the orientation of the cells changes as we change faces, as we copy the cells from face to face we may also have to rotate them to ensure they end up facing the correct direction. As before, we return self to allow for method chaining.

As we start testing, we need to make sure that we can tell when the cube is solved; we don't care about the orientation of the cube, so we verify that the center color matches all the other colors on the face:

method solved {
    for (Up, Down, Left, Right, Back, Front) -> $side {
        return False unless
            %!Sides{$side}.all eq %!Sides{$side}[4];
    }
    return True;
}

For every side, we use a Junction of all the colors on a side to compare to the center cell (always position 4). We fail early, and then succeed only if we made it through all the sides.
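The junction check can be tried on its own:

```raku
my @side = 'W' xx 9;
say so @side.all eq @side[4];  # True: every cell matches the center

@side[7] = 'Y';
say so @side.all eq @side[4];  # False: one sticker differs
```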

Next I added a way to scramble the cube, so we can consider implementing a solve method.

method scramble {
    my @random = <U D F R B L>.roll(100).squish[^10];
    for @random -> $method {
        my $actual = $method ~ ("", "2", "ʼ").pick(1);
        self."$actual"();
    }
}

This takes the six base method names, picks a bunch of random values, then squishes them (ensuring that there are no dupes in a row), and then picks the first 10 values. We then potentially add on a 2 or a ʼ. Finally, we use the indirect method syntax to call the individual methods by name.
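The pieces used by scramble can be tried in isolation; squish only removes adjacent duplicates (unlike unique), and a method can be called indirectly through a string:

```raku
say <a a b b a>.squish;  # (a b a), only adjacent dupes removed
say <a a b b a>.unique;  # (a b), all dupes removed, for contrast

# Indirect method call by name:
my $name = 'uc';
say 'perl'."$name"();    # PERL
```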

Finally, I'm ready to start solving! And this is where things got complicated. The first steps of the beginner method are often described as intuitive. Which means it's easy to explain… but not so easy to code. So, spoiler alert, as of the publish time of this article, only the first step of the solve is complete. For the full algorithm for the first step, check out the linked github site.

method solve {
    self.solve-top-cross;
}

method solve-top-cross {
    sub completed {
        %!Sides{Up}[1,3,5,7].all eq 'W' &&
        %!Sides{Front}[5] eq 'R' &&
        %!Sides{Right}[5] eq 'B' &&
        %!Sides{Back}[5]  eq 'O' &&
        %!Sides{Left}[5]  eq 'G';
    }
    ...
    MAIN:
    while !completed() {
        # Move white-edged pieces in second row up to top
        # Move incorrectly placed pieces in the top row to the middle
        # Move pieces from the bottom to the top
    }
}

Note the very specific checks to see if we're done; we use a lexical sub to wrap up the complexity – and while we have a fairly internal check here, we see that we might want to abstract this to a point where we can say "is this edge piece in the right orientation". To start with, however, we'll stick with the individual cells.

The guts of solve-top-cross are 100+ lines long at the moment, so I won't go through all the steps. Here's the "easy" section:

my @middle-edges =
    [Front, Right],
    [Right, Back],
    [Back, Left],
    [Left, Front],
;
for @middle-edges -> $edge {
    my $side7 = $edge[0];
    my $side1 = $edge[1];
    my $color7 = %!Sides{$side7}[7];
    my $color1 = %!Sides{$side1}[1];
    if $color7 eq 'W' {
        # find number of times we need to rotate the top:
        my $turns = (
            @ordered-sides.first($side1, :k) -
            @ordered-sides.first(%expected-sides{~$color1}, :k)
        ) % 4;
        self.U for 1..$turns;
        self."$side1"();
        self.Uʼ for 1..$turns;
        next MAIN;
    } elsif $color1 eq 'W' {
        my $turns = (
            @ordered-sides.first($side7, :k) -
            @ordered-sides.first(%expected-sides{~$color7}, :k)
        ) % 4;
        self.Uʼ for 1..$turns;
        self."$side1"();
        self.U for 1..$turns;
        next MAIN;
    }
}

When doing this section on a real cube, you'd rotate the cube without regard to the side pieces, and just get the cross in place. To make the algorithm a little more "friendly", we keep the centers in position for this; we rotate the Up side into place, then rotate the individual side into place on the top, then rotate the Up side back into the original place.

One of the interesting bits of code here is the .first(..., :k) syntax, which says to find the first element that matches, and then return the position of the match. We can then look things up in an ordered list so we can calculate the relative positions of two sides.
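In isolation (the side names mirror the @ordered-sides list the method uses):

```raku
my @ordered = <Front Right Back Left>;
say @ordered.first('Back');      # Back, the matching value
say @ordered.first('Back', :k);  # 2, its position instead

# Relative distance between two sides, wrapping around the four faces:
say (@ordered.first('Front', :k) - @ordered.first('Left', :k)) % 4;  # 1
```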

Note that the solving method only calls the public methods to turn the cube; while we use raw introspection to get the cube state, we only use "legal" moves to do the solving.

With the full version of this method, we now solve the white cross with this program:

#!/usr/bin/env perl6
use Cube::Three;
my $cube = Cube::Three.new();
$cube.scramble;
say $cube;
say '';
$cube.solve;
say $cube;

which generates this output given this set of moves (Fʼ L2 B2 L Rʼ Uʼ R Fʼ D2 B2). First is the scramble, and then the version with the white cross solved.

      W G G
      Y W W
      Y Y Y
O O B R R R G B O Y Y B
R G O B R R G B G W O B
Y B B R O W G G G W W O
      W W O
      Y Y O
      B R R

      Y W W
      W W W
      G W R
O G W O R Y B B G R O G
Y G G R R B R B Y R O G
O O R Y O W O O R W Y B
      G G B
      B Y Y
      Y B B

This sample prints out the moves used to do the scramble, shows the scrambled cube, "solves" the puzzle (which, as of this writing, is just the white cross), and then prints out the new state of the cube.

Note that as we get further along, the steps become less "intuitive" and, in my estimation, much easier to code. For example, the last step requires checking the orientation of four pieces, rotating the cube if necessary, and then doing a 14-step set of moves (shown in the test above).

Hopefully my love of cubing and Perl 6 have you looking forward to your next project!

I'll note in the comments when the module's solve is finished, for future readers.

Day 23 – The Wonders of Perl 6 Golf

Ah, Christmas! What could possibly be better than sitting around the table with your friends and family and playing code golf! … Wait, what?

Oh, right, it’s not Christmas yet. But you probably want to prepare yourself for it anyway!

If you haven’t noticed already, there’s a great website for playing code golf: https://code-golf.io/. The cool thing about it is that it’s not just for perl 6! At the time of writing, 6 other langs are supported. Hmmm…

Anyway, as I’ve got some nice scores there, I thought I’d share some of the nicest bits from my solutions. All the trickety-hackety, unicode-cheatery and mind-blowety. While we are at it, maybe we’ll even see that perl 6 is quite concise and readable even in code golf. That is, if you have a hard time putting your Christmas wishes on a card, maybe a line of perl 6 code will do.

I won’t give full solutions to not spoil your Christmas fun, but I’ll give enough hints for you to come up with competitive solutions.

All I want for Christmas is for you to have some fun. So get yourself rakudo to make sure you can follow along. Later we’ll have some pumpkin pie and we’ll do some caroling. If you have any problems running perl 6, perhaps join #perl6 channel on freenode to get some help. That being said, https://code-golf.io/ itself gives you a nice editor to write and eval your code, so there should be no problem.

Some basic examples

Let’s take Pascal’s Triangle task as an example. I hear ya, I hear! Math before Christmas, that’s cruel. Cruel, but necessary.

There’s just one basic trick you have to know. If you take any row from the Pascal’s Triangle, shift it by one element and zip-sum the result with the original row, you’ll get the next row!

So if you had a row like:

1 3 3 1

All you do is just shift it to the right:

0 1 3 3 1

And sum it with the original row:

1 3 3 1
+ + + +
0 1 3 3 1
=
1 4 6 4 1

As simple as that! So let’s write that in code:

for ^16 { put (+combinations($^row,$_) for 0..$row) }

You see! Easy!

… oh… Wait, that’s a completely different solution. OK, let’s see:

.put for 1, { |$_,0 Z+ 0,|$_ } … 16

Output:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
1 8 28 56 70 56 28 8 1
1 9 36 84 126 126 84 36 9 1
1 10 45 120 210 252 210 120 45 10 1
1 11 55 165 330 462 462 330 165 55 11 1
1 12 66 220 495 792 924 792 495 220 66 12 1
1 13 78 286 715 1287 1716 1716 1287 715 286 78 13 1
1 14 91 364 1001 2002 3003 3432 3003 2002 1001 364 91 14 1
1 15 105 455 1365 3003 5005 6435 6435 5005 3003 1365 455 105 15 1

Ah-ha! There we go. So what happened there? Well, in perl 6 you can create sequences with a very simple syntax: 2, 4, 8 … ∞. Normally you’ll let it figure out the sequence by itself, but you can also provide a code block to calculate the values. This is awesome! In other languages you’d often need to have a loop with a state variable, and here it does all that for you! This feature alone probably needs an article or 𝍪.

The rest is just a for loop and a put call. The only trick here is to understand that it is working with lists, so when you specify the endpoint for the sequence, it is actually checking for the number of elements. Also, you need to flatten the list with |.
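Both behaviors show up in a tiny example; note how the endpoint 5 below stops the sequence at the first row that has 5 elements:

```raku
# Deduced geometric sequence:
say (2, 4, 8 … ∞)[^5];  # (2 4 8 16 32)

# Generator block; the endpoint matches on element count for lists:
my @rows = 1, { |$_, 0 Z+ 0, |$_ } … 5;
say @rows.elems;  # 5
say @rows[4];     # (1 4 6 4 1)
```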

If you remove whitespace and apply all tricks mentioned in this article, this should get you to 26 characters. That’s rather competitive.

Similarly, other tasks often have rather straightforward solutions. For example, for Evil Numbers you can write something like this:

.base(2).comb(~1) %% 2 && .say for ^50

Remove some whitespace, apply some tricks, and you’ll be almost there.

Let’s take another example: Pangram Grep. Here we can use set operators:

'a'..'z' ⊆ .lc.comb && .say for @*ARGS

Basically, almost all perl 6 solutions look like real code. It’s the extra -1 character oomph that demands extra eye pain, but you didn’t come here to listen about conciseness, right? It’s time to get dirty.

Numbers

Let’s talk numbers! 1 ² ③ ٤ ⅴ ߆… *cough*. You see, in perl 6 any numeric character (that has a corresponding numeric value property) can be used in the source code. The feature was intended to allow us to have some goodies like ½ and other neat things, but this means that instead of writing 50 you can write ㊿. Some golfing platforms will count the number of bytes when encoded in UTF-8, so it may seem like you’re not winning anything. But what about 1000000000000 and 𖭡? In any case, code-golf.io is unicode-aware, so the length of any of these characters will be 1.

So you may wonder, which numbers can you write in that manner? There you go:

-0.5 0.00625 0.025 0.0375 0.05 0.0625 0.083333 0.1
0.111111 0.125 0.142857 0.15 0.166667 0.1875 0.2
0.25 0.333333 0.375 0.4 0.416667 0.5 0.583333 0.6
0.625 0.666667 0.75 0.8 0.833333 0.875 0.916667 1
1.5 2 2.5 3 3.5 4 4.5 5 5.5 6 6.5 7 7.5 8 8.5 9 10
11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44
45 46 47 48 49 50 60 70 80 90 100 200 300 400 500
600 700 800 900 1000 2000 3000 4000 5000 6000 7000
8000 9000 10000 20000 30000 40000 50000 60000 70000
80000 90000 100000 200000 216000 300000 400000
432000 500000 600000 700000 800000 900000 1000000
100000000 10000000000 1000000000000

This means, for example, that in some cases you can save 1 character when you need to negate the result. There are many ways you can use this, and I’ll only mention one particular case. The rest you figure out yourself, as well as how to find the actual character that can be used for any particular value (hint: loop all 0x10FFFF characters and check their .univals).

For example, when golfing you want to get rid of unnecessary whitespace, so maybe you’ll want to write something like:

say 5max3 # ERROR

It does not work, of course, and we can’t really blame the compiler for not untangling that mess. However, check this out:

say ⑤max③ # OUTPUT: «5␤»

Woohoo! This will work in many other cases.

Conditionals

If there is a good golfing language, that’s not Perl 6. I mean, just look at this:

puts 10<30?1:2 # ruby
say 10 <30??1!!2 # perl 6

Not only TWO more characters are needed for the ternary, but also some obligatory whitespace around < operator! What’s wrong with them, right? How dare they design a language with no code golf in mind⁉

Well, there are some ways we can work around it. One of them is operator chaining. For example:

say 5>3>say(42)

If 5 is ≤ than 3, then there’s no need to do the other comparison, so it won’t run it. This way we can save at least one character. On a slightly related note, remember that junctions may also come in handy:

say "yes!" if 5==3|5

And of course, don’t forget about unicode operators: ≤, ≥, ≠.

Typing is hard, let’s use some of the predefined strings!

You wouldn’t believe how useful this is sometimes. Want to print the names of all chess pieces? OK:

say (♔…♙)».uniname».words»[2]
# KING QUEEN ROOK BISHOP KNIGHT PAWN

This saves just a few characters, but there are cases when it can halve the size of your solution. But don’t stop there, think of error messages, method names, etc. What else can you salvage?

Base 16? Base 36? Nah, Base 0x10FFFF!

One of the tasks tells us to print φ to the first 1000 decimal places. Well, that’s very easy!

say 1.6180339887498948482045868343656381177203091798057628621354486227052604628189024497072072041893911374847540880753868917521266338622235369317931800607667263544333890865959395829056383226613199282902678806752087668925017116962070322210432162695486262963136144381497587012203408058879544547492461856953648644492410443207713449470495658467885098743394422125448770664780915884607499887124007652170575179788341662562494075890697040002812104276217711177780531531714101170466659914669798731761356006708748071013179523689427521948435305678300228785699782977834784587822891109762500302696156170025046433824377648610283831268330372429267526311653392473167111211588186385133162038400522216579128667529465490681131715993432359734949850904094762132229810172610705961164562990981629055520852479035240602017279974717534277759277862561943208275051312181562855122248093947123414517022373580577278616008688382952304592647878017889921990270776903895321968198615143780314997411069260886742962267575605231727775203536139362

Yes!!!

Okay, that takes a bit more than 1000 characters… Of course, we can try to calculate it, but that is not exactly in the Christmas spirit. We want to cheat.

If we look at the docs about polymod, there’s a little hint:

my @digits-in-base37 = 9123607.polymod(37 xx *); # Base conversion

Hmmm… so that gives us digits for any arbitrary base. How high can we go? Well, it depends on what form we would like to store the number in. Given that code-golf.io counts codepoints, we can use base 0x10FFFF (i.e. using all available codepoints). Or, in this case we will go with base 0x10FFFE, because:

☠☠☠⚠⚠⚠ WARNING! WARNING! WARNING! ⚠⚠⚠☠☠☠
THIS WILL MAKE YOUR COMPUTER IMPLODE!
UNICODE STRINGS ARE SUBJECT TO NORMALIZATION SO YOUR
DATA WILL NOT BE PRESERVED. HIDE YOUR KIDS, HIDE YOUR
WIFE. HIDE YOUR KIDS, HIDE YOUR WIFE. HIDE YOUR KIDS,
HIDE YOUR WIFE. AND HIDE YOUR HUSBAND.
☠☠☠⚠⚠⚠ WARNING! WARNING! WARNING! ⚠⚠⚠☠☠☠

When applied to our constant, it should give something like this:

󻁾񤍠򷒋󜹕󘶸񙦅񨚑򙯬񗈼𢍟𪱷򡀋𢕍򌠐񘦵𔇆򅳒򑠒󌋩򯫞򶝠򚘣򣥨񫵗𿞸􋻩񱷳󟝐󮃞󵹱񿢖𛒕𺬛󊹛󲝂򺗝𭙪񰕺𝧒򊕆𘝞뎛􆃂򊥍񲽤򩻛󂛕磪󡯮끝򰯬󢽈󼿶󘓥򮀓񽑖򗔝󃢖񶡁􁇘󶪼񌍌񛕄񻊺򔴩寡񿜾񿸶򌰘񡇈򦬽𥵑󧨑򕩃򳴪񾖾򌯎󿥐񱛦𱫞𵪶򁇐󑓮򄨠򾎹𛰑𗋨䨀򡒶𰌡򶟫񦲋𧮁􍰍񲍚񰃦𦅂󎓜󸾧󉦩󣲦򄉼񿒣𸖉񿡥󬯞嗟𧽘񿷦򠍍🼟򇋹񖾷𖏕񟡥󜋝􋯱񤄓򭀢򌝓𱀉𫍡󬥝򈘏򞏡񄙍𪏸࿹𺐅񢻳򘮇𐂇񘚡ந򾩴󜆵𰑕򰏷񛉿򢑬򭕴𨬎󴈂􋵔򆀍񖨸􂳚󽡂󎖪񡉽񕧣񎗎򝤉򡔙񆔈󖾩󅾜񋩟򝼤񯓦󐚉񟯶򄠔𦔏򲔐o

How do we reverse the operation? During one of the squashathons I found a ticket about a feature that I didn’t know about previously. Basically, the ticket says that Rakudo is doing stuff that it shouldn’t, which is of course something we will abuse next time. But for now we’re within the limits of relative sanity:

say 1, '.', :1114110['o򲔐𦔏򄠔񟯶󐚉񯓦򝼤񋩟󅾜󖾩񆔈򡔙򝤉񎗎񕧣񡉽󎖪󽡂􂳚񖨸򆀍􋵔󴈂𨬎򭕴򢑬񛉿򰏷𰑕󜆵򾩴ந񘚡𐂇򘮇񢻳𺐅࿹𪏸񄙍򞏡򈘏󬥝𫍡𱀉򌝓򭀢񤄓􋯱󜋝񟡥𖏕񖾷򇋹🼟򠍍񿷦𧽘嗟󬯞񿡥𸖉񿒣򄉼󣲦󉦩󸾧󎓜𦅂񰃦񲍚􍰍𧮁񦲋򶟫𰌡򡒶䨀𗋨𛰑򾎹򄨠󑓮򁇐𵪶𱫞񱛦󿥐򌯎񾖾򳴪򕩃󧨑𥵑򦬽񡇈򌰘񿸶񿜾寡򔴩񻊺񛕄񌍌󶪼􁇘񶡁󃢖򗔝񽑖򮀓󘓥󼿶󢽈򰯬끝󡯮磪󂛕򩻛񲽤򊥍􆃂뎛𘝞򊕆𝧒񰕺𭙪򺗝󲝂󊹛𺬛𛒕񿢖󵹱󮃞󟝐񱷳􋻩𿞸񫵗򣥨򚘣򶝠򯫞󌋩򑠒򅳒𔇆񘦵򌠐𢕍򡀋𪱷𢍟񗈼򙯬񨚑񙦅󘶸󜹕򷒋񤍠󻁾'.ords]

Note that the string has to be in reverse. Other than that it looks very nice. 192 characters including the decoder.

This isn’t a great idea for printing constants that are otherwise computable, but given the length of the decoder and relatively dense packing rate of the data, this comes handy in other tasks.
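The round trip can be checked on a smaller number; here I stay with digit lists rather than .chrs/.ords strings, to sidestep the normalization hazard from the warning above:

```raku
my $n = 2 ** 200;

# polymod yields digits least-significant first:
my @digits = $n.polymod(1114110 xx *);
say @digits.elems;  # 10, versus 61 decimal digits

# The radix form takes digits most-significant first, hence the reverse:
say :1114110[@digits.reverse] == $n;  # True
```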

All good things must come to an end; horrible things – more so

That’s about it for the article. For more code golf tips I’ve started this repository: https://github.com/AlexDaniel/6lang-golf-cheatsheet

Hoping to see you around on https://code-golf.io/! Whether using perl 6 or not, I’d love to see all of my submissions beaten.

🥧♫

Day 22 – Features of Perl 6.d

So there we are. Two years after the first official release of Rakudo Perl 6. Or 6.c to be more precise. Since Matt Oates already touched on the performance improvements since then, Santa thought to counterpoint this with a description of the new features for 6.d that have been implemented since then. Because there have been many, Santa had to make a selection.

Tweaking objects at creation

Any class that you create can now have a TWEAK method. This method will be called after all other initializations of a new instance of the class have been done, just before it is returned by .new. A simple, somewhat contrived example, in which a class A has one attribute whose default value is 42, but which should change the value if that default is explicitly specified at object creation:

class A {
    has $.value = 42;
    method TWEAK(:$value = 0) { # default prevents warning
        # change the attribute if the default value is specified
        $!value = 666 if $value == $!value;
    }
}
# no value specified, it gets the default attribute value
dd A.new;              # A.new(value => 42)

# value specified, but it is not the default
dd A.new(value => 77); # A.new(value => 77)

# value specified, and it is the default 
dd A.new(value => 42); # A.new(value => 666)

Concurrency Improvements

The concurrency features of Rakudo Perl 6 saw many improvements under the hood. Some of these were exposed as new features. Most prominent are Lock::Async (a non-blocking lock that returns a Promise) and atomic operators.

In most cases, you will not need to use these directly, but it is probably good that you know about atomic operators if you’re engaged in writing programs that use concurrency features. An often occurring logic error, especially if you’ve been using threads in Pumpking Perl 5, is that there is no implicit locking on shared variables in Rakudo Perl 6. For example:

my int $a;
await (^5).map: {
    start { ++$a for ^100000 }
}
say $a; # something like 419318

So why doesn’t that show 500000? The reason is that we had 5 threads incrementing the same variable at the same time. And since incrementing consists of a read step, an increment step and a write step, it became very easy for one thread to do its read step while another thread had not yet finished its write step, thus losing an increment. Before we had atomic operators, the correct way of writing the above code would be:

my int $a;
my $l = Lock.new;
await (^5).map: {
    start {
        for ^100000 {
            $l.protect( { ++$a } )
        }
    }
}
say $a; # 500000

This would give you the correct answer, but would be at least 20x as slow.

Now that we have atomic variables, the above code becomes:

my atomicint $a;
await (^5).map: {
    start { ++⚛$a for ^100000 }
}
say $a; # 500000

Which is very much like the original (incorrect) code. And this is at least 6x as fast as the correct code using Lock.protect.

Unicode goodies

So many, so many. For instance, you can now use ≤, ≥, ≠ as Unicode versions of <=, >= and != (complete list).

You can now also create a grapheme by specifying the Unicode name of the grapheme, e.g.:

say "BUTTERFLY".parse-names; # 🦋

or create the Unicode name string at runtime:

my $t = "THUMBS UP SIGN, EMOJI MODIFIER FITZPATRICK TYPE";
print "$t-$_".parse-names for 3..6; # 👍🏼👍🏽👍🏾👍🏿

Or collate instead of just sort:

# sort by codepoint value
say <ä a o ö>.sort; # (a o ä ö)
# sort using Unicode Collation Algorithm
say <ä a o ö>.collate; # (a ä o ö)

Or use unicmp instead of cmp:

say "a" cmp "Z";    # More
say "a" unicmp "Z"; # Less

Or that you can now use any Unicode digits in Match variables ($١ for $1), negative numbers (−١ for -1), and radix bases (:۳("22") for :3("22")).

It’s not for nothing that Santa considers Rakudo Perl 6 to have the best Unicode support of any programming language in the world!

Skipping values

You can now call .skip on Seq and Supply to skip a number of values that were being produced. Together with .head and .tail this gives you ample manipulexity with Iterables and Supplies.

By the way, .head now also takes a WhateverCode so you can indicate you want all values except the last N (e.g. .head(*-3) would give you all values except the last three). The same goes for .tail (e.g. .tail(*-3) would give you all values except the first three).
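For example, on a ten-element range:

```raku
my @v = 1..10;
say @v.head(3);       # (1 2 3)
say @v.tail(3);       # (8 9 10)
say @v.head(*-3);     # (1 2 3 4 5 6 7)
say @v.tail(*-3);     # (4 5 6 7 8 9 10)
say @v.skip(3).head;  # 4
```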

Some additions to the Iterator role make it possible for iterators to support the .skip functionality even better. If an iterator can be more efficient in skipping a value than to actually produce it, it should implement the skip-one method. Derived from this are the skip-at-least and skip-at-least-pull-one methods that can be provided by an iterator.
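A minimal sketch of a custom iterator that implements skip-one (the class is illustrative, not from Rakudo's source; pull-one is the only required method of the role, skip-one is the optimization hook):

```raku
class Counting does Iterator {
    has Int $!n = 0;
    method pull-one { $!n++ }        # produce the next value
    method skip-one { $!n++; True }  # skip without building anything
}

my $it = Counting.new;
$it.skip-one for ^5;   # skip the first five values cheaply
say $it.pull-one;      # 5
```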

An example of the usage of .skip to find out the 1000th prime number:

say (^Inf).grep(*.is-prime)[999]; # 7919

Versus:

say (^Inf).grep(*.is-prime).skip(999).head; # 7919

The latter is slightly more CPU efficient, but more importantly much more memory efficient, as it doesn’t need to keep the first 999 prime numbers in memory.

Of Bufs and Blobs

Buf has become much more like an Array, as it now supports .push, .append, .pop, .unshift, .prepend, .shift and .splice. It has also become more like Str with the addition of .subbuf-rw (analogous to .substr-rw), e.g.:

my $b = Buf.new(100..105);
$b.subbuf-rw(2,3) = Blob.new(^5);
say $b.perl; # Buf.new(100,101,0,1,2,3,4,105)

You can now also .allocate a Buf or Blob with a given number of elements and a pattern. Or change the size of a Buf with .reallocate:

my $b = Buf.allocate(10,(1,2,3));
say $b.perl; # Buf.new(1,2,3,1,2,3,1,2,3,1)
$b.reallocate(5);
say $b.perl; # Buf.new(1,2,3,1,2)

Testing, Testing, Testing!

The plan subroutine of Test.pm now also takes an optional :skip-all parameter to indicate that all tests in the file should be skipped. Or you can call bail-out to abort the test run marking it as failed. Or set the PERL6_TEST_DIE_ON_FAIL environment variable to a true value to indicate you want the test to end as soon as the first test has failed.
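A sketch of how these fit together in a test file (the environment-variable condition and messages are illustrative):

```raku
use Test;

# Skip the whole file when a precondition fails:
plan :skip-all('demo tests disabled') if %*ENV<SKIP_DEMO_TESTS>;

plan 2;

ok 1 + 1 == 2, 'arithmetic sanity';

# bail-out aborts the entire run, marking it failed:
bail-out 'lost a required resource' if False;

is 'abc'.chars, 3, 'chars counts graphemes';
```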

What’s Going On

You can now introspect the number of CPU cores in your computer by calling Kernel.cpu-cores. The amount of CPU used since the start of the program is available in Kernel.cpu-usage, while you can easily check the name of the Operating System with VM.osname.
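For example (the actual values will of course differ per machine):

```perl6
say Kernel.cpu-cores;  # e.g. 8
say Kernel.cpu-usage;  # microseconds of CPU used so far, e.g. 280670
say VM.osname;         # e.g. linux, darwin, win32
```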

And as if that is not enough, there is a new Telemetry module which you need to load when needed, just like the Test module. The Telemetry module provides a number of primitives that you can use directly, such as:

use Telemetry;
say T<wallclock cpu max-rss>; # (138771 280670 82360)

This shows the number of microseconds since the start of the program, the number of microseconds of CPU used, and the number of Kilobytes of memory that were in use at the time of call.

If you want to get a report of what has been going on in your program, you can use snap and have a report appear when your program is done. For instance:

use Telemetry;
snap;
Nil for ^10000000;  # something that takes a bit of time

The result will appear on STDERR:

Telemetry Report of Process #60076
Number of Snapshots: 2
Initial/Final Size: 82596 / 83832 Kbytes
Total Time:           0.55 seconds
Total CPU Usage:      0.56 seconds
No supervisor thread has been running

wallclock  util%  max-rss
   549639  12.72     1236
--------- ------ --------
   549639  12.72     1236

Legend:
wallclock  Number of microseconds elapsed
    util%  Percentage of CPU utilization (0..100%)
  max-rss  Maximum resident set size (in Kbytes)

If you want a snapshot of the state of your program every 0.1 seconds, you can use the snapper:

use Telemetry;
snapper;
Nil for ^10000000;  # something that takes a bit of time

The result:

Telemetry Report of Process #60722
Number of Snapshots: 7
Initial/Final Size: 87324 / 87484 Kbytes
Total Time:           0.56 seconds
Total CPU Usage:      0.57 seconds
No supervisor thread has been running

wallclock  util%  max-rss
   103969  13.21      152
   101175  12.48
   101155  12.48
   104097  12.51
   105242  12.51
    44225  12.51        8
--------- ------ --------
   559863  12.63      160

Legend:
wallclock  Number of microseconds elapsed
    util%  Percentage of CPU utilization (0..100%)
  max-rss  Maximum resident set size (in Kbytes)

And many more options are available here, such as getting the output in .csv format.

The MAIN thing

You can now modify the way MAIN parameters are handled by setting options in %*SUB-MAIN-OPTS. The default USAGE message is now available inside the MAIN as the $*USAGE dynamic variable, so you can change it if you want to.
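A small sketch of both (named-anywhere is one of the supported option keys; the script itself is hypothetical):

```perl6
# Allow named arguments to appear anywhere on the command line,
# not just before the positionals.
my %*SUB-MAIN-OPTS = :named-anywhere;

sub MAIN( $file, Bool :$verbose ) {
    say "processing $file" ~ ( $verbose ?? ' (verbosely)' !! '' );
}

sub USAGE() {
    say $*USAGE;            # the default generated usage text
    say "\nReport bugs to the issue tracker.";
}
```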

Embedding Perl 6

Two new features make embedding Rakudo Perl 6 easier to handle:
the &*EXIT dynamic variable can now be set to specify the action to be taken when exit() is called.
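A minimal sketch of what an embedder could do with this (the handler body is illustrative; exact semantics of the handler's return value are up to the embedding code):

```perl6
# Intercept exit() instead of letting it terminate the process.
my &*EXIT = -> $status {
    note "embedded code requested exit with status $status";
    $status;  # hand the status back to the host
};

exit 42;  # runs the handler above
```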

Setting the environment variable RAKUDO_EXCEPTIONS_HANDLER to "JSON" will throw Exceptions in JSON, rather than text, e.g.:

$ RAKUDO_EXCEPTIONS_HANDLER=JSON perl6 -e '42 = 666'
{
  "X::Assignment::RO" : {
    "value" : 42,
    "message" : "Cannot modify an immutable Int (42)"
  }
}

Bottom of the Gift Bag

While rummaging through the still quite full gift bag, Santa found the following smaller prezzies:

  • Native string arrays are now implemented (my str @a)
  • IO::CatHandle allows you to abstract multiple data sources into a single virtual IO::Handle
  • parse-base() performs the opposite action of base()
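The last of those pairs nicely with base:

```perl6
say 255.base(16);          # FF
say 'FF'.parse-base(16);   # 255
say 'zz'.parse-base(36);   # 1295
```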

Time to catch a Sleigh

Santa would like to stay around to tell you more about what’s been added, but there simply is not enough time to do that. If you really want to keep up-to-date on new features, you should check out the Additions sections in the ChangeLog that is updated with each Rakudo compiler release.

So, catch you again next year!

Best wishes from

🎅🏾

Day 21 – Sudoku with Junctions and Sets

There are a number of core elements in Perl6 that let you do complex things in a concise and powerful way. Two of these are Junctions and Sets, which share a number of characteristics but are also wildly different. In order to demonstrate the power of these I’m going to look at how they can be used with a simple problem: Sudoku puzzles.

Sudoku : A refresher

So, for those of you who don’t know, a Sudoku puzzle is a 9 by 9 grid that comes supplied with some cells filled in with numbers between 1 and 9. The goal is to fill in all the cells with numbers between 1 and 9 so that no row, column or sub-square contains more than one of any number.

There are a few ways to represent a Sudoku puzzle, my personal favourite being a 9 by 9 nested array, for example:

my @game = [
    [4,0,0,0,0,0,0,0,0],
    [0,9,0,3,4,6,0,5,0],
    [5,8,0,0,9,0,0,0,6],
    [0,4,0,8,1,3,0,0,9],
    [0,0,0,5,0,4,0,0,0],
    [8,0,0,6,2,9,0,4,0],
    [3,0,0,0,5,0,0,6,2],
    [0,5,0,9,3,2,0,8,0],
    [0,0,0,0,0,0,0,0,1]
];

In this situation the cells with no value assigned are given a 0, this way all the cells have an Integer value assigned to them. The main thing to bear in mind with this format is you need to reference cells using @game[$y][$x] rather than @game[$x][$y].

Junctions : Quantum Logic Testing

One of the simplest ways to use Junctions in Perl6 is in a logical test. The Junction can represent a selection of values you are wanting to test against. For example :

if ( 5 < 1|10 < 2 ) { say "Spooky" } else { say "Boo" }
Spooky

So, not only does this demonstrate operator chaining (something that experienced programmers may already be looking confused about) but the any Junction ( 1|10 ) evaluates to True for both 5 < 10 and 1 < 2. In this way Junctions can be extremely powerful already, it’s when you assign a variable container to them that it gets really interesting.

One of the tests we’d like to be able to make on our Sudoku puzzle is to see if it’s full. By which I mean every cell has been assigned a value greater than 0. A full puzzle may not be completed correctly but there’s a guess in each cell. Another way of putting that would be that none of the cells has a value of 0. Thus we can define a Junction, store it in a scalar variable and test it at any point to see if the puzzle is full.

my $full-test = none( (^9 X ^9).map(-> ($x,$y) { 
    @game[$y][$x];
} ) );
say so $full-test == 0;
False

In this case the game has a number of 0’s still in it, so seeing if $full-test equals 0 evaluates to False. Note that without the so to cast the result to a Boolean you’ll get a breakdown of the cells that are equal to 0; only if all of these are False will the Junction evaluate to True.

Note also the use of the ^9 and X operators to generate two Ranges from 0 to 8 and then the cross product of these two lists of 9 values to make a list of all the possible X,Y co-ordinates of the puzzle. It’s this kind of powerful simplicity that is one of the reasons I love Perl6. But I digress.

The strength of this method is that once you’ve defined the Junction you don’t need to modify it. If you change the values stored in the Array the Junction will look at the new values instead (note this only holds true for updating individual cells; if you swap out a whole sub-array with a new one you’ll break the Junction).

So that’s a simple use of a Junction to store a multi-variable test you can reuse. But it gets more interesting when you realise that the values in a Junction can themselves be Junctions.

Let’s look at a more complex test: a puzzle is complete if for every row, column and square in the puzzle there is only one of each number. In order to make this test we’re going to need three helper functions.

subset Index of Int where 0 <= * <= 8; 
sub row( Index $y ) {
    return (^9).map( { ( $_, $y ) } ); 
} 
sub col( Index $x ) {
     return (^9).map( { ( $x, $_ ) } ); 
} 
multi sub square( Index $sq ) {
    my $x = $sq % 3 * 3;
    my $y = $sq div 3 * 3;
    return square( $x, $y );
} 
multi sub square( Index $x, Index $y ) {
     my $tx = $x div 3 * 3;
     my $ty = $y div 3 * 3;
     return ( (0,1,2) X (0,1,2) ).map( -> ( $dx, $dy ) { 
        ( $tx + $dx, $ty + $dy ) 
    } );
}

So here we define an Index as a value between 0 and 8 and then define our subs to return a List of Lists, with the sub-lists being pairs of X and Y indices. Note that our square function can accept one or two positional arguments. In the single argument version we define the sub-squares with 0 being the top left, going left to right, with 8 being the bottom right. The two argument version gives us the list of cells in the square for a given cell (including itself).

So with these in place we can define our one() lists for each row, column and square. Once we have them we can then put them into an all() Junction.

my $complete-all = all(
     (^9).map(
        {
            |(
                one( row( $_ ).map( -> ( $x, $y ) { 
                    @game[$y][$x] 
                } ) ),
                one( col( $_ ).map( -> ( $x, $y ) { 
                    @game[$y][$x] 
                } ) ),
                one( square( $_ ).map( -> ( $x, $y ) { 
                    @game[$y][$x] 
                } ) )
            )
        }
    )
);

Once we have that, testing to see if the puzzle is complete is quite simple.

say [&&] (1..9).map( { so $complete-all == $_ } );
False

Here we test each possible cell value of 1 through 9 against the Junction; in each case this will be True if all the one() Junctions contain exactly one of the value. Then we use the [] reduction meta-operator to chain these results to give a final True / False value (True if all the results are True and False otherwise). Again this test can be reused as you add values to the cells and will only return True when the puzzle has been completed and is correct.

Once again we’ve got a complex test boiled down to a single line of code. Our $complete-all variable needs to be defined once and is then valid for the rest of the session.

This sort of nested Junction test can reach many levels. A final example is if we want to test whether a current puzzle is valid. By which I mean it’s not complete but it doesn’t have any duplicate numbers in any row, column or square. Once again we can make a Junction for this: for each row, column or square it’s valid if one or none of the cells is set to each of the possible values. Thus our creation of the Junction is similar to the $complete-all one.

my $valid-all = all(
    (^9).map(
        {
            |(
                one( 
                    none( row( $_ ).map( -> ( $x, $y ) {
                        @game[$y][$x]
                    } ) ),
                    one( row( $_ ).map( -> ( $x, $y ) {
                        @game[$y][$x]
                    } ) ) 
                ), 
                one( 
                    none( col( $_ ).map( -> ( $x, $y ) {
                        @game[$y][$x] 
                    } ) ),
                    one( col( $_ ).map( -> ( $x, $y ) { 
                        @game[$y][$x]
                    } ) ) 
                ),
                one( 
                    none( square( $_ ).map( -> ( $x, $y ) {
                        @game[$y][$x]
                    } ) ),
                    one( square( $_ ).map( -> ( $x, $y ) {
                        @game[$y][$x]
                    } ) ) 
                )
            )
        }
    )
);

The test for validity is basically the same as the test for completeness.

say [&&] (1..9).map( { so $valid-all == $_ } );
True

Except in this case our puzzle is valid and so we get a True result.

Sets : Collections of Objects

Whilst Junctions are useful for testing values, they aren’t as useful if we want to try solving the puzzle. But Perl6 has another type of collection that can come in very handy. Sets (and their related types Bags and Mixes) let you collect items and then apply mathematical set operations to them to find how different Sets interact with each other.

As an example we’ll define a possible function that returns the values that are possible for a given cell. If the cell has a value set we will return the empty list.

sub possible( Index $x, Index $y, @game ) {
    return () if @game[$y][$x] > 0;

    ( 
        (1..9) 
            (-)
        set(
            ( row($y).map( -> ( $x, $y ) { 
                @game[$y][$x] 
            } ).grep( * > 0 ) ),
            ( col($x).map( -> ( $x, $y ) { 
                @game[$y][$x] 
            } ).grep( * > 0 ) ),
            ( square($x,$y).map( -> ( $x, $y ) { 
                @game[$y][$x] 
            } ).grep( * > 0 ) )
        )
    ).keys.sort;
 }

Here we find the difference between the numbers 1 through 9 and the Set made up of the values of the row, column and square the given cell is in. We ignore cells with a 0 value using grep. As Sets store their details as unordered key / value pairs we get the keys and then sort them for consistency. Note that here we’re using the ASCII (-) version of the set difference operator; we could also use the Unicode version instead.

We could define the set as the union of each of the results from row, col and square and the result would be the same. Also we’re using the two argument version of square in this case.

It should be noted that this is the simplest definition of possible values; there’s no additional logic going on, but even this simple result lets us use the simplest of solving algorithms. In this case we loop through every cell in the grid and, if it has exactly one possible value, set the cell to that value. We loop round, get a list of cells to set, then loop through the list and set the values. If the list of cells to set is empty or the puzzle is complete then we stop.

my @updates;
repeat {
    @updates = (^9 X ^9).map( -> ($x,$y) { 
        ($x,$y) => possible($x,$y,@game) 
    } ).grep( *.value.elems == 1 );
    for @updates -> $pair { 
        my ( $x, $y ) = $pair.key; 
        @game[$y][$x] = $pair.value[0];
    }
} while ( @updates.elems > 0 &&
          ! [&&] (1..9).map( { so $complete-all == $_ } ) );

So we make a list of Pairs where the key is the x,y coordinates and the value is the list of possible values. Then we keep only those with exactly one possible value. This is continued until there are no cells found with a single possible value or the puzzle is complete.

Another way of finding solutions is to get values that only appear in one cell’s set of possibilities in a given row, column or square. For example if we have the following possibilities:

(1,2,3),(2,3,4),(),(),(4,5),(),(),(2,3,4),()

1 and 5 only appear in the row once each. We can make use of the symmetric set difference operator and operator chaining to get this.

say (1,2,3) (^) (2,3,4) (^) () (^) () (^) (4,5) (^) () (^) () (^) (2,3,4) (^) ()
set(1 5)

Of course in that case we can use the reduction meta-operator on the list instead.

say [(^)] (1,2,3),(2,3,4),(),(),(4,5),(),(),(2,3,4),()
set(1 5)

The algorithm for this is simple (I’ll just cover rows here; the column and square code is basically the same).

my @updates;
for ^9 -> $idx {
    my $only = [(^)] row($idx).map( -> ( $x,$y ) { 
        possible($x,$y,@game) 
    } );
    for $only.keys -> $val {
        for row($idx) -> ($x,$y) {
            if $val (elem) possible($x,$y,@game) {
                @updates.push( ($x,$y) => $val );
            }
        }
    }
}

We can then loop through the updates array as above. Combining these two algorithms can solve a large number of Sudoku puzzles by themselves and simplify others.

Note we have to make two passes: first we get the numbers we’re looking for, and then we look through each row to find where each number appears. For this we use the (elem) operator. Sets can also be subscripted associatively, for example:

say set(1,5){1}
True

A note on Objects

So for all the examples so far I’ve used basic integers. But there’s nothing stopping you using Objects in your Junctions and Sets. There are a few things to bear in mind though: Sets use the === identity operator for their tests. Two distinct objects will fail an identity check unless you have defined the WHICH method in a way that will allow them to be compared.

For the Sudoku puzzle you may want to create a CellValue class that stores whether the number was one of the initial values in the puzzle. If you do this though you’ll need to override WHICH and make it return the Integer value of the Cell. As long as you are fine with an identity check being technically invalid in this case (two different CellValues may have the same value but they won’t be the same object) then you can put them in Sets.
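A minimal sketch of such a class (the WHICH shown here is one possible implementation; ValueObjAt requires a reasonably recent Rakudo):

```perl6
class CellValue {
    has Int  $.value;
    has Bool $.initial = False;

    # Base identity on the numeric value only, so Sets treat two
    # CellValues holding the same number as the same element.
    method WHICH { ValueObjAt.new("CellValue|$!value") }
}

my $s = set CellValue.new(:value(5), :initial),
            CellValue.new(:value(5));
say $s.elems;  # 1 - both fives collapse into one element
```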

I hope you’ve found this interesting. Junctions and Sets are two of the many different parts of Perl6 that give you the power to do complex tasks simply. If you’re interested in the code, there’s an object-based version available that you can install with:

zef install Game::Sudoku

Day 20: Advancements in Macrotechnologies

Hi there!

Please allow me, on this day of the advent calendar, a small tangent. I’m not going to be talking directly about a cool Perl 6 feature or idiom. Instead, I’ll open a small window on what could be – and hopefully will be at some point!

If you, like me, have been following the progress on Rakudo for a few years, you have seen this bit regularly in the release announcements:

> Some of the not-quite-there features include:
> – advanced macros

Well, what does that mean exactly? Perl 6 does have macros, but they are currently limited compared to what people usually want to do. It’s not that they’re currently useless – they are still useful, from other posts in previous years of the advent calendar showcasing them, to OO::Monitor using macros to report typos earlier.

Enter 007. 007 is a “Small experimental language with a license to macro”. What does this mean?! It’s a language that’s been created to tinker and experiment with macros, so that their design is ready and battle-tested when it gets integrated into Perl 6.

So, what’s in it? 007 tries to mimic the “powerful” parts of Perl 6, so that we don’t design macros for a completely different language. That means phasers, infix operators, (the gist of) a MOP and regexes are in.

What does it look like? At its core, 007 is meant to look like Perl 6. It does, however, shed some parts of it. Let’s take a look at the most important snippet you’d want to write: FizzBuzz. Note: all code snippets in this blogpost are executable 007 code, not Perl 6 code.

my n = 1;
while n <= 100 {
    if n %% 15 {
        say("FizzBuzz");
    }
    else if n %% 3 {
        say("Fizz");
    }
    else if n %% 5 {
        say("Buzz");
    }
    else {
        say(n);
    }
    n = n + 1;
}

What? You don’t care about that? Well, obviously. I did promise you macros. We’re going to take a look at a simple macro, “name”, that returns the name of either the identifier or the last property access.

macro name(expr) {
    if expr ~~ Q::Postfix::Property {
        expr = expr.property;
    }
    if expr !~~ Q::Identifier {
        throw new Exception {
            message: "Cannot turn a " ~ type(expr) ~ " into a name"
        };
    }
    return quasi { expr.name };
}
my info = {
    foo: "Bond",
    bar: {
        baz: "James Bond"
    },
};
say(name(info)); # info
say(name(info.foo)); # foo
say(name(info.bar.baz)); # baz

So, you’re probably going “WAT” right here. You’re right – this gist is missing some explanations. One of the most important features of macros is access to the AST (Abstract Syntax Tree). Macros need to be able to mess with the code’s structure (like in Lisp), not the code text (like in C). The Q:: types are standardized types that represent the program’s shape. They do not necessarily need to represent how the compiler/interpreter thinks about the code, but they need to be stable, because we are writing our code – our macros – against this introspection API.

In this code sample, we use two Q types: Q::Postfix::Property, which represents the dot access, and Q::Identifier, which represents an identifier. First, we check if we have a property. If that’s the case, we extract what’s on the right side of the dot (remember, a.b.c is (a.b).c). We then check if we do end up with an identifier (and not, say, a number), and print that. This is e.g. how we could implement C#’s nameof operator, without having to add anything to the language!

A few days ago, masak++ published a blog post titled Has it been three years?, which marked the 3rd birthday of 007. While some areas are still (very) rough, it looks more and more like a usable language, and the possibilities progress day after day.

The next thing we are looking at implementing is the is parsed trait. Here’s how it could look (this example works in a PR, but uses special-casing right now):

macro statement:<whoa>() is parsed(/"whoa!"/) {
    return quasi @ Q::Statement {
        say("whoa!");
    }
};
whoa!;

This is also probably how we’d want them to look in Perl 6… Or not? The discussion is still open! You are encouraged to join the bikesh… discussion :-). The language is still young and needs a lot of fleshing out, as much for its advanced features as for its simpler features.

Before I leave you with your appropriate amount of meta-fun, here’s a milestone 007 wants to reach that’s currently sitting in a branch: implement infix:<ff> as part of the library (if you’re unsure, the Perl 6 docs on ff apply here), and not as part of the language. Here goes the code!

# our infix macro takes a lhs (left hand side) and a rhs (right hand side).
macro infix:<ff>(lhs, rhs) is tighter(infix:<=>) {
    my active = False; # our current value when starting
    return quasi {
        if {{{lhs}}} {
            active = True; # if the bit on the left is true, we switch to active mode
        }
        my result = active; # the result we are returning
        if {{{rhs}}} {
            active = False; # if the bit on the right is true, we switch to inactive mode
        }
        result; # return the result stored *before* the rhs ran.
    };
}
my values = ["A", "B", "A", "B", "A"];
for values -> v {
    if v == "B" ff v == "B" {
        say(v);
    }
    else {
        say("x");
    }
}

That’s it for today! If you want more, feel free to check out the Tutorial, or the examples folder. If you want to learn about the will-be, we also have a Roadmap.

Day 19: Language Independent Validation Rules (LIVR) for Perl6

I’ve just ported LIVR to Perl6. It was really fun to code in Perl6. Moreover, LIVR’s test suite allowed me to find a bug in Perl6 Email::Valid, and another one in Rakudo itself. It was even more fun because I not only implemented a module, but also helped other developers do some testing :)

What is LIVR? LIVR stands for “Language Independent Validation Rules”. It is like “Mustache”, but in the world of validation. LIVR consists of the following parts:

There is LIVR for:

I will give you a short intro to LIVR here but for details, I strongly recommend reading this post: “LIVR – Data Validation Without Any Issues”.

LIVR Intro

Data validation is a very common task. I am sure that every developer faces it again and again. It is especially important when you develop a Web application. It is a common rule – never trust user input. It seems that if the task is so common, there should be tons of libraries. There are, but it is very difficult to find an ideal one. Some of the libraries do too many things (like HTML form generation etc), other libraries are difficult to extend, some do not have hierarchical data support etc.

Moreover, if you are a web developer, you could need the same validation on the server and on the client.

In WebbyLab, mainly we use 3 programming languages – Perl, JavaScript, PHP. So, for us, it was ideal to reuse similar validation approach across languages.

Therefore, it was decided to create a universal validator that could work across different languages.

Validator Requirements

After trying tons of validation libraries, we had some vision in our heads about the issues we want to solve. Here are the requirements for the validator:

  • Rules are declarative and language independent. So, rules for validation are just a data structure, not method calls etc. You can transform them and change them as you would any other data structure.

  • Any number of rules for each field.

  • The validator should return errors for all fields together. For example, we want to highlight all errors in a form.

  • Cut out all fields that do not have validation rules described (otherwise you cannot rely on your validation; someday you will have a security issue if the validator does not meet this property).

  • Possibility to validate complex hierarchical structures. Especially useful for JSON APIs.

  • Easy to describe and understand validation.

  • Returns understandable error codes (neither error messages nor numeric codes)

  • Easy to implement your own rules (usually you will have several in every project)

  • Rules should be able to change results output (“trim”, “nested_object”, for example)

  • Multipurpose (user input validation, configs validation etc)

  • Unicode support.

LIVR Specification

Since the task was set to create a validator independent of a programming language (some kind of a mustache/handlebars stuff) but within the data validation sphere, we started with the composition of specifications.

The specification’s objectives are:

  • To standardize the data description format.

  • To describe a minimal set of the validation rules that must be supported by every implementation.

  • To standardize error codes.

  • To be a single basic documentation for all the implementations.

  • To feature a set of testing data that allows checking if the implementation fits the specifications.

The basic idea was that the description of the validation rules must look like a data scheme and be as similar to the data as possible, but with rules instead of values.

The specification is available here http://livr-spec.org/.

This is the basic intro. More details are in the post I’ve mentioned above.

LIVR and Perl6

Let’s have some fun and play with code. I will go through several examples, providing some internal details after each. The source code of all examples is available on GitHub.

First, install the LIVR module for Perl6 from CPAN:

zef install LIVR

Example 1: registration data validation

use LIVR;

# Automatically trim all values before validation
LIVR::Validator.default-auto-trim(True);

my $validator = LIVR::Validator.new(livr-rules => {
    name      => 'required',
    email     => [ 'required', 'email' ],
    gender    => { one_of => ['male', 'female'] },
    phone     => { max_length => 10 },
    password  => [ 'required', {min_length => 10} ],
    password2 => { equal_to_field => 'password' }
});

my $user-data = {
    name      => 'Viktor',
    email     => 'viktor@mail.com',
    gender    => 'male',
    password  => 'mypassword123',
    password2 => 'mypassword123'
};

if my $valid-data = $validator.validate($user-data) {
    # $valid-data is clean and contains only fields
    # which have validation rules and have passed them
    $valid-data.say;
} else {
    my $errors = $validator.errors();
    $errors.say;
}

So, how to understand the rules?

The idea is very simple. Each rule is a hash: the key is the name of the validation rule, the value is an array of arguments.

For example:

{
    name  => { required => [] },
    phone => { max_length => [10] }
}

but if there is only one argument, you can use a shorter form:

{
    phone => { max_length => 10 }
}

if there are no arguments, you can just pass the name of the rule as a string:

{
    name => 'required'
}

you can pass a list of rules for a field in an array:

{
    name => [ 'required', { max_length => 10 } ]
}

In this case, rules will be applied one after another. So, in this example, the “required” rule will be applied first, and “max_length” after that, and only if “required” passed successfully.

Here are the details in the LIVR spec.

You can find the list of standard rules here.

Example 2: validation of hierarchical data structure

use LIVR;
my $validator = LIVR::Validator.new(livr-rules => {
name => 'required',
phone => {max_length => 10},
address => {'nested_object' => {
city => 'required',
zip => ['required', 'positive_integer']
}}
});
my $user-data = {
name => "Michael",
phone => "0441234567",
address => {
city => "Kiev",
zip => "30552"
}
}
if my $valid-data = $validator.validate($user-data) {
# $valid-data is clean and does contain only fields
# which have validation and have passed it
$valid-data.say;
} else {
my $errors = $validator.errors();
$errors.say;
}

What is interesting in this example?

  • The schema (validation rules) shape looks very similar to the data shape. It is much easier to read than JSON Schema, for example.

  • It seems that “nested_object” is a special syntax but it is not. The validator does not make any difference between “required”, “nested_object” and “max_length”. So, the core is very tiny and you can introduce new features easily with custom rules.

  • Often you want to reuse complex validation rules like ‘address’ and it can be done with aliasing.

  • You will receive a hierarchical error message. For example, if you omit city and name, the error object will look like {name => 'REQUIRED', address => {city => 'REQUIRED'} }.

Aliases

use LIVR;

LIVR::Validator.register-aliased-default-rule({
    name  => 'short_address', # name of the rule
    rules => {'nested_object' => {
        city => 'required',
        zip  => ['required', 'positive_integer']
    }},
    error => 'WRONG_ADDRESS' # custom error (optional)
});

my $validator = LIVR::Validator.new(livr-rules => {
    name    => 'required',
    phone   => {max_length => 10},
    address => 'short_address'
});

my $user-data = {
    name    => "Michael",
    phone   => "0441234567",
    address => {
        city => "Kiev",
        zip  => "30552"
    }
};

if my $valid-data = $validator.validate($user-data) {
    # $valid-data is clean and contains only fields
    # which have validation rules and have passed them
    $valid-data.say;
} else {
    my $errors = $validator.errors();
    $errors.say;
}

If you want, you can register aliases only for your validator instance:

use LIVR;

my $validator = LIVR::Validator.new(livr-rules => {
    password => ['required', 'strong_password']
});

$validator.register-aliased-rule({
    name  => 'strong_password',
    rules => {min_length => 6},
    error => 'WEAK_PASSWORD'
});

Example 3: data modification, pipelining

There are rules that can do data modification. Here is the list of them:

  • trim

  • to_lc

  • to_uc

  • remove

  • leave_only

  • default

You can read details here.

With such an approach, you can create some sort of pipe.

use LIVR;

my $validator = LIVR::Validator.new(livr-rules => {
    email => [ 'trim', 'required', 'email', 'to_lc' ]
});

my $input-data  = { email => ' EMail@Gmail.COM ' };
my $output-data = $validator.validate($input-data);

$output-data.say;

What is important here?

  • As I mentioned before, for the validator there is no difference between any of the rules. It treats “trim”, “default”, “required”, “nested_object” the same way.

  • Rules are applied one after another. The output of a rule will be passed to the input of the next rule. It is like a bash pipe echo ' EMail@Gmail.COM ' | trim | required | email | to_lc

  • $input-data will NEVER be changed. $output-data is the data you use after the validation.

Example 4: custom rules

You can use aliases as custom rules but sometimes that is not enough. It is absolutely fine to write your own custom rule. You can do almost everything with custom rules.

Usually, we have 1-5 custom rules in almost every project. Moreover, you can organize custom rules as a separate reusable module (and even upload it to CPAN).

So, how to write a custom rule for LIVR?

Here is the example of ‘strong_password’:

use LIVR;

my $validator = LIVR::Validator.new(livr-rules => {
    password => ['required', 'strong_password']
});

$validator.register-rules( 'strong_password' => sub (@rule-args, %builders) {
    # %builders are the rules from the original validator,
    # allowing you to create a new validator with all supported rules:
    # my $validator = LIVR::Validator.new(livr-rules => $livr).register-rules(%builders).prepare();
    # See the "nested_object" rule implementation for an example:
    # https://github.com/koorchik/perl6-livr/blob/master/lib/LIVR/Rules/Meta.pm6#L5

    # Return a closure that will take a value and return an error
    return sub ($value, $all-values, $output is rw) {
        # We already have the "required" rule to check that the value is present,
        # so we skip empty values
        return if LIVR::Utils::is-no-value($value);

        # The value must be a string or a number
        return 'FORMAT_ERROR' if $value !~~ Str && $value !~~ Numeric;

        # Return an error in case of failed validation
        return 'WEAK_PASSWORD' if $value.chars < 6;

        # Change the output value. We always want the returned value to be a string
        $output = $value.Str;
        return;
    };
});

Look at existing rules implementation for more examples:

Example 5: Web application

LIVR works great for REST APIs. A lot of REST APIs have a problem with returning understandable errors. If a user of your API receives an HTTP 500 error, it will not help them. It is much better when they get errors like

{
    "name": "REQUIRED",
    "phone": "TOO_LONG",
    "address": {
        "city": "REQUIRED",
        "zip": "NOT_POSITIVE_INTEGER"
    }
}

than just “Server error”.

So, let's try to build a small web service with 2 endpoints:

  • GET /notes -> get list of notes

  • POST /notes -> create a note

You will need to install Bailador for it:

zef install Bailador

Let’s create some services. I prefer the “Command” pattern for services, with a template method “run”.

We will have 2 services:

  • Service::Notes::Create

  • Service::Notes::List

Service usage example:

my %CONTEXT = (storage => my @STORAGE);
my %note = title => 'Note1', text => 'Note text';

my $new-note = Service::Notes::Create.new(
    context => %CONTEXT
).run(%note);

my $list = Service::Notes::List.new(
    context => %CONTEXT
).run({});

With the context you can inject any dependencies. The “run” method accepts the data passed by the user.

Here is what the source code of the note-creation service looks like:

use Service::Base;

my $LAST_ID = 0;

class Service::Notes::Create is Service::Base {
    has %.validation-rules = (
        title => ['required', {max_length => 20} ],
        text  => ['required', {max_length => 255} ]
    );

    method execute(%note) {
        %note<id> = $LAST_ID++;
        $.context<storage>.push(%note);
        return %note;
    }
}

and the Service::Base class:

use LIVR;

LIVR::Validator.default-auto-trim(True);

class Service::Base {
    has $.context = {};

    method run(%params) {
        my %clean-data = self!validate(%params);
        return self.execute(%clean-data);
    }

    method !validate($params) {
        return $params unless %.validation-rules.elems;

        my $validator = LIVR::Validator.new(
            livr-rules => %.validation-rules
        );

        if my $valid-data = $validator.validate($params) {
            return $valid-data;
        } else {
            die $validator.errors();
        }
    }
}

The “run” method guarantees that the whole procedure is followed:

  • Data was validated.

  • “execute” will be called only after validation.

  • “execute” will receive only clean data.

  • Throws an exception in case of validation errors.

  • Can check permissions before calling “execute”.

  • Can do extra work like caching validator objects, etc.

Here is the full working example.

Run the app:

perl6 app.pl6

Create a note:

curl -H "Content-Type: application/json" -X POST -d '{"title":"New Note","text":"Some text here"}' http://localhost:3000/notes

Check validation:

curl -H "Content-Type: application/json" -X POST -d '{"title":"","text":""}' http://localhost:3000/notes

Get the list of notes:

curl http://localhost:3000/notes

LIVR links

I hope you will like LIVR. I will appreciate any feedback.

Day 18: Perl 6 powered work flow

Staying in flow while coding can be a challenge. Distractions and pesky syntactic bugs are potential flow stoppers.

Then there is the 7±2 short-term memory limit that we all have to juggle. Unlike computers, we can’t just add more hardware to increase the size of the brain’s working memory buffer – at least not yet. Keeping in flow requires managing this buffer to avoid blowouts. Fortunately we have computers to help.

The idea of using a computer to extend your memory has been around since the dawn of computing. Way back in 1945 Vannevar Bush envisioned a Memex (MEMory EXtender), an “enlarged intimate supplement to one’s memory”.

In 2017, the humble text file can act like a poor man’s memex. The text file contains a timeline with three sections: Past, Now and Next. It’s kind of like a changelog but with a future too. The past section fills up over time and contains completed tasks and information for later recall. The now section helps you focus on the task at hand and the next section queues up tasks to do in the future.

Tasks move through three states: do (+next), doing (!now) and done (-past).

To stay in flow you sometimes need to quickly recall something, log a task to do in the future and focus on making progress in the now. Keeping a 123.do file helps you to offload cognitive overhead while coding.

The format of a 123.do file is simple so you can hack on it directly with your text editor and it’s described by this Perl 6 grammar.
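The module's actual grammar is linked above; purely as an illustration, a much-simplified, hypothetical grammar for a file of three-state task lines might look like this (not the module's real code):

```perl6
# Hypothetical, much-simplified sketch of a grammar for a task file
grammar TaskFile {
    token TOP    { <task>* }
    token task   { <marker> \h* <text> \n }
    token marker { '+' | '!' | '-' }   # +next (do), !now (doing), -past (done)
    token text   { \N+ }
}

my $m = TaskFile.parse("+ buy presents\n! write advent post\n- fix bug\n");
say $m<task>.elems;   # 3
```

The real grammar handles the Past/Now/Next sections and day offsets like +7; this sketch only shows the general shape of parsing marker-prefixed lines.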

Here is the Perl 6 command line module that drives it.


To install it just:

shell> zef install Do123
shell> 123 +7 Merry Christmas
shell> 123 +13 Happy New Year

Day 17: Something about messaging (but I couldn’t think of a snappier title.)

Why messaging?

When I first started thinking about writing an Advent article this year I reflected that I hadn’t really written a great deal of Perl 6 in the past twelve months, in comparison to previous years when I appear to have written a large number of modules. What I have been doing (in my day job at least,) is thinking about and implementing applications that make heavy use of some messaging system. So I thought it would be interesting to bring some of those ideas to Perl 6.

Perl has always had a strong reputation as a “glue language” and Perl 6 has features that take off and run with that, most prominently the reactive and concurrent features, making it ideally suited to creating message based integration services.

What messaging?

At my feet right now is the excellent Enterprise Integration Patterns which I’d recommend to anyone who has an interest in (or works in,) the field, despite it being nearly 15 years old now. However it is a weighty tome (literally, it weighs in at nearly one and a half kilograms in hardback,) so I’m using it as a reminder to myself not to attempt to be exhaustive on the subject, lest this turn into a book itself.

There are quite a large number of managed messaging systems both free and commercial, using a range of protocols both open and proprietary, but I am going to limit myself to RabbitMQ which I know quite well and is supported in Perl 6 by Net::AMQP.

If you want to try the examples yourself you will need to have access to a RabbitMQ broker (which is available as a package for most operating system distributions,) but you can use the Docker Image, which appears to work quite well.

You will also need to install Net::AMQP which can be done with:

zef install Net::AMQP

In the examples I will be using the default connection details for the RabbitMQ server (that is the broker is running on localhost and the default guest is active,) if you need to supply differing details then you can alter the constructor for Net::AMQP to reflect the appropriate values:

my $n = Net::AMQP.new(
  host => 'localhost',
  port => 5672,
  login => 'guest',
  password => 'guest',
  vhost => '/'
);

A couple of the examples may require other modules but I’ll introduce them as I go along.

Obligatory Hello, World

RabbitMQ implements the rich broker architecture that is described by the AMQP v0.9 specification, the more recent v1.0 specification as implemented by ActiveMQ does away with much of the prescribed broker semantics to the extent that it is basically a different protocol that shares a similar wire format.

Possibly the simplest example of sending a message (a producer,) would be:

use Net::AMQP;

my $n = Net::AMQP.new;

await $n.connect;
my $channel = $n.open-channel(1).result;
my $exchange = $channel.exchange.result;
$exchange.publish(routing-key => "hello", body => "Hello, World".encode);
await $n.close("", "");

This demonstrates most of the core features of RabbitMQ and Net::AMQP.

Firstly you will notice that many of the methods return a Promise that will be mostly kept with the actual returned value; this reflects the asynchronous nature of the broker, which sends (in most cases but not all,) a confirmation message (method in AMQP parlance,) when the operation has been completed on the server.

The connect here establishes the network connection to the broker and negotiates certain parameters, returning a Promise which will be kept with a true value if successful, or broken if the network connection fails, the supplied credentials are incorrect or the server declines the connection for some other reason.

The open-channel opens a logical broker communication channel in which the messages are exchanged, you may use more than one channel in an application. The returned Promise will be kept with an initialised Net::AMQP::Channel object when confirmed by the server.

The exchange method on the channel object returns a Net::AMQP::Exchange object, in the AMQP model all messages are published to an exchange from which the broker may route the message to one or more queues depending on the definition of the exchange from whence the message may be consumed by another client. In this simple example we are going to use the default exchange (named amq.default.)

The publish method is called on the exchange object; it has no return value as it is simply fire and forget, the broker doesn’t confirm the receipt, and the delivery or otherwise to a queue is decoupled from the act of publishing the message. The routing-key parameter is, as the name suggests, used by the broker to determine which queue (or queues,) to route the message to. In the case of the default exchange used in this example the type of the exchange is direct which basically means that the message is delivered to exactly one consumer of a queue with a matching name to the routing key.

The body is always a Buf and can be of an arbitrary length, in this case we are using an encoded string but it could equally be encoded JSON, MessagePack or a BSON blob, whatever suits the consuming application. You can in fact supply content-type and content-encoding parameters which will be passed on with the message delivered to a consumer if the design of your application requires it, but the broker itself is totally agnostic to the content of the payload. There are other optional parameters but none are required in this example.
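For instance, publishing a JSON payload with the optional content metadata described above might look like this (a sketch; the JSON::Fast module providing to-json is an assumption, not something the article itself uses):

```perl6
use Net::AMQP;
use JSON::Fast;   # assumed here for to-json

my $n = Net::AMQP.new;
await $n.connect;
my $channel  = $n.open-channel(1).result;
my $exchange = $channel.exchange.result;

# The broker ignores the payload entirely; content-type and
# content-encoding are just metadata passed through to the consumer.
$exchange.publish(
    routing-key      => "hello",
    body             => to-json({ greeting => "Hello, World" }).encode,
    content-type     => "application/json",
    content-encoding => "utf-8",
);
await $n.close("", "");
```

The consuming side can then use the passed-through metadata to decide how to decode the body.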

Of course we also need something to read the messages that we are publishing (a consumer,) :

use Net::AMQP;

my $n = Net::AMQP.new;

my $connection = $n.connect.result;

react {
    whenever $n.open-channel(1) -> $channel {
        whenever $channel.declare-queue("hello") -> $queue {
            $queue.consume;
            whenever $queue.message-supply.map( -> $v { $v.body.decode }) -> $message {
                say $message;
                $n.close("", "");
                done();
            }
        }
    }
}

Here, rather than operating on an exchange as we did in the producer, we are using a named queue; declare-queue will cause the queue to be created if it doesn’t already exist and the broker will, by default, bind this queue to the default exchange, “binding” essentially means that messages sent to the exchange can be routed to the queue depending on the exchange type, the routing key of the messages and possibly other metadata from the message. In this case the “direct” type of the default exchange will cause the messages to be routed to a queue that matches the routing key (if one exists, the message will be silently dropped if it doesn’t.)

The consume method is called when you are ready to start receiving messages, it returns a Promise that will be kept with the “consumer tag” that uniquely identifies the consumer to the server but, as we don’t need it here, we can ignore it.

Once we have called consume (and the broker has sent the confirmation,) the messages that are routed to our queue will be emitted to the Supply returned by message-supply as Net::AMQP::Queue::Message objects, however as we aren’t interested in the message metadata in this example map is used to create a new Supply with the decoded bodies of the messages; this is safe where, as in this case, you can guarantee that you will be receiving utf-8 encoded data, but in a real world application you may want to be somewhat more robust about handling the body if you aren’t in control of the sender (which is often the case when integrating with third party applications.) The content-type and content-encoding as supplied when publishing the message are available in the headers attribute (a Hash,) of the Message object, but they aren’t required to be set, so you may want to consider an alternative scheme as suitable for your application.
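One way to be more defensive is a small helper that picks a decoder from the headers attribute mentioned above (a hypothetical sketch; decode-body is my name, and JSON::Fast is an assumed dependency):

```perl6
use JSON::Fast;   # assumed, for from-json

# Hypothetical helper: choose a decoder based on the optional
# content-type header, falling back to the raw Buf if the body
# turns out not to be valid utf-8.
sub decode-body($message) {
    given $message.headers<content-type> // 'text/plain' {
        when 'application/json' { from-json $message.body.decode }
        default                 { try { $message.body.decode } // $message.body }
    }
}
```

You would then map the message-supply through decode-body instead of calling .body.decode directly.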

In this example the connection is closed and the react exited after the first message is received, but in reality you may want to remove the lines:

$n.close("", "");
done();

from the inner whenever and if you want to exit on a signal for example add:

whenever signal(SIGINT) {
    $n.close("", "");
    done();
}

within the top level of the react block. However you choose to exit your program, you should always call close on the connection object, as failing to do so will cause a warning message in the broker logs that might upset the person administering the server.
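One tidy way to make sure the close always happens is a LEAVE phaser (a sketch; LEAVE runs when the enclosing block is left, whether normally or via an exception):

```perl6
use Net::AMQP;

my $n = Net::AMQP.new;
await $n.connect;

# LEAVE fires however the enclosing block is left - on normal exit
# or after an exception - so the broker always sees a clean close.
LEAVE await $n.close("", "");

my $channel = $n.open-channel(1).result;
$channel.exchange.result.publish(
    routing-key => "hello",
    body        => "Goodbye, World".encode,
);
```

This keeps the close logic in one place no matter how many exit paths the program grows.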

We could of course have used the react syntax in the producer example in a similar way, but it would have added verbosity for little benefit; however in a larger program, where you may for instance be publishing from, say, a Supply, it can work quite nicely:

    use Net::AMQP;
      
    my $supply = Supply.from-list("Hello, World", "Bonjour le monde", "Hola Mundo");
    my $n = Net::AMQP.new;

    react {
        whenever $n.connect {
            whenever $n.open-channel(1) -> $channel {
                whenever $channel.exchange -> $exchange {
                    whenever $supply.map(-> $v { $v.encode }) -> $body {
                        $exchange.publish(routing-key => "hello", :$body );
                        LAST {
                            $n.close("", "");
                            done();
                        }
                    }
                }
            }
        }
    }

Something a bit more useful

You’re probably thinking “that’s all very well, but that’s nothing I couldn’t do with, say, an HTTP client and a small web-server”. Well, you’re getting reliable queuing, persistence of unread messages and so forth, but yes, it could be overkill for a simple application – until you add a requirement to send the messages to multiple, possibly unknown, consumers for example. This kind of pattern is a use of the “fanout” exchange type, which will deliver a message to all queues that are bound to the exchange.

In this example we will need to declare our own exchange, in order that we can specify its type, but the producer doesn’t become much more complicated:

use Net::AMQP;

my $n = Net::AMQP.new;
my $con =  await $n.connect;
my $channel = $n.open-channel(1).result;
my $exchange = $channel.declare-exchange('logs', 'fanout').result;
$exchange.publish(body => 'Hello, World'.encode);
await $n.close("", "");

The only major difference here is that we use declare-exchange rather than exchange on the channel to obtain the exchange to which we send the message, this has the advantage of causing the exchange to be created on the broker with the specified type if it doesn’t already exist which is useful here as we don’t need to rely on the exchange being created beforehand (with the command line tool rabbitmqctl or via the web management interface,) but it similarly returns a Promise that will be kept with the exchange object. You probably also noticed that here the routing-key is not being passed to the publish method, this is because for a fanout exchange the routing key is ignored and the messages are delivered to all the consuming queues that are bound to the exchange.

The consumer code is likewise not dissimilar to our original consumer:

use Net::AMQP;

my $n = Net::AMQP.new;

my $connection = $n.connect.result;

react {
    whenever $n.open-channel(1) -> $channel {
        whenever $channel.declare-exchange('logs', 'fanout') -> $exchange {
            whenever $channel.declare-queue() -> $queue {
                whenever $queue.bind('logs') {
                    $queue.consume;
                    whenever $queue.message-supply.map( -> $v { $v.body.decode }) -> $message {
                        say $*PID ~ " : " ~ $message;
                    }
                }
                whenever signal(SIGINT) {
                    say $*PID ~ " exiting";
                    $n.close("", "");
                    done();
                }

            }
        }
    }
}

The exchange is declared in the same way that it was declared in the producer example; this is really a convenience so you don’t have to worry about which order to start the programs, as the first one run will create the exchange. However if you run the producer before the consumer is started the messages sent will be dropped, as there is nowhere by default to route them. Here we are also declaring a queue without providing a name; this creates an “anonymous” queue (the name is made up by the broker,) because the name of the queue doesn’t play a part in the routing of the messages in this case.

You could provide a queue name but if there are duplicate names then the messages will be routed to the queues with the same names on a “first come, first served” basis, which is possibly not the expected behaviour (though it is possible and may have a use.)

Also in this case the queue has to be explicitly bound to the exchange we have declared; in the first example the binding to the default exchange was performed by the broker automatically, but in most other cases you will have to use bind on the queue with the name of the exchange. bind, like many of the methods, returns a Promise that will be kept when the broker confirms that the operation has been completed (though in this case the value isn’t important.)

You should be able to start as many of the consumers as you want and they will all receive all the messages in the same order that they are sent. Of course in a real world application the consumers may be completely different programs written in a variety of different languages.

Keeping on Topic

A common pattern is a set of consumers that are only interested in some of the messages published to a particular exchange, a classic example of this might be a logging system where there are consumers specialised to different log levels for instance. AMQP provides a topic exchange type that allows for the routing of the messages to a particular queue by pattern matching on the producer supplied routing key.

The simplest producer might be:

	use Net::AMQP;

	multi sub MAIN(Str $message = 'Hello, World', Str $level = 'application.info') {
		my $n = Net::AMQP.new;
		my $con =  await $n.connect;
		my $channel = $n.open-channel(1).result;
		my $exchange = $channel.declare-exchange('topic-logs', 'topic').result;
		$exchange.publish(routing-key => $level, body => $message.encode);
		await $n.close("", "");
	}

This should now be fairly clear from the previous examples, except in this case we declare the exchange as the topic type and also provide the routing key that will be used by the broker to match the consuming queues.

The consumer code itself is again fairly similar to the previous examples, except it will take a list of patterns on the command line that will be used to match the routing key sent to the exchange:

use Net::AMQP;

multi sub MAIN(*@topics ) {
    my $n = Net::AMQP.new(:debug);
    unless @topics.elems {
        say "will be displaying all the messages";
        @topics.push: '#';
    }
    my $connection = $n.connect.result;
    react {
        whenever $n.open-channel(1) -> $channel {
            whenever $channel.declare-exchange('topic-logs', 'topic') -> $exchange {
                whenever $channel.declare-queue() -> $queue {
                    for @topics -> $topic {
                        await $queue.bind('topic-logs', $topic);
                    }
                    $queue.consume;
                    my $body-supply = $queue.message-supply.map( -> $v { [ $v.routing-key, $v.body.decode ] }).share;
                    whenever $body-supply -> ( $topic , $message ) {
                            say $*PID ~ " : [$topic]  $message";
                    }
                }
            }
        }
    }
}

Here essentially the only difference from the previous consumer example (aside from the type supplied to the exchange declaration,) is that the topic is supplied to the bind method. The topic can be a simple pattern where a # will match any supplied routing key and the behaviour will be the same as a fanout exchange; otherwise a * can be used in any part of the binding topic as a wildcard which will match exactly one dot-separated word of the routing key, so in this example application.* will match messages sent with the routing key application.info or application.debug for instance.

If there is more than one queue bound with the same pattern, they too will behave as if they were bound to a fanout exchange. If the bound pattern contains neither a hash nor an asterisk character then the queue will behave as if it was bound to a direct exchange as a queue with that name (that is to say it will have the messages delivered on a first come, first served basis.)

But there’s more to life than just AMQP

Of course. The beauty of the Perl 6 reactive model is that various sources feeding Supplies can be integrated into your producer code as touched on above and similarly a consumer can push a message to another transport mechanism.

I was delighted to discover while I was thinking about the examples for this that the following just works:

	use EventSource::Server;
	use Net::AMQP;
	use Cro::HTTP::Router;
	use Cro::HTTP::Server;

	my $supply = supply { 
		my $n = Net::AMQP.new;
		my $connection = $n.connect.result;
		whenever $n.open-channel(1) -> $channel {
			whenever $channel.declare-queue("hello") -> $queue {
				$queue.consume;
				whenever $queue.message-supply.map( -> $v { $v.body.decode }) -> $data {
					emit EventSource::Server::Event.new(type => 'hello', :$data);
				}
			}
		}
	};

	my $es = EventSource::Server.new(:$supply);

	my $application = route {
		get -> 'greet', $name {
			content 'text/event-stream; charset=utf-8', $es.out-supply;
		}
	}
	my Cro::Service $hello = Cro::HTTP::Server.new:
		:host<127.0.0.1>, :port(10000), :$application;
	$hello.start;

	react whenever signal(SIGINT) { $hello.stop; exit; }

This is a variation of the example in the EventSource::Server distribution; you could of course alter it to use any of the exchange types as discussed above. It should work fine with the producer code from the first example. And (if you were so persuaded,) you could consume the events with a small piece of node.js code (or in some browser oriented javascript):

	var EventSource = require('eventsource');

	var event = process.argv[2] || 'message';

	console.info(event);
	var v = new EventSource('http://127.0.0.1:10000');

	v.addEventListener(event, function(e) {
		console.info(e);

	}, false);

Wrapping it up

I concluded after typing the first paragraph of this that I would never be able to do this subject justice in a short article, so I hope you consider this an appetizer; I don’t think I’ll ever find the time to write the book that it probably deserves. But I do have all the examples based on the RabbitMQ tutorials so check that out and feel free to contribute.


Day 16 – 🎶 Deck The Halls With Perf Improvements 🎶

In the UK our lack of Thanksgiving leaves us with Christmas as a period of giving thanks and reflection up to the New Year. To that end I wanted to put together several bits and pieces I’ve been sat on for a while around the state of Perl 6 performance, which highlight just how much effort is going into this. I’m not sure the wider programming community appreciates the pace and volume of effort that’s happening.

I’m not a core dev, but I have been a humble user of Perl 6 since just before the 2010 introduction of Rakudo*. Frequently the effort that’s already gone into Rakudo is overshadowed by the perceived effort yet to come. This is especially true for people taking a fresh look at Rakudo Perl 6, who might imagine from a fly-by look that this is what next Christmas will be like too. But Perl 6 has historically proven things always improve by next Christmas, for any Christmas you choose.

All the way back in Christmas 2014 I wrote an advent post about why I thought Perl 6 was great for doing Bioinformatics work. What was left out of that post was why the implementation of Perl 6 on Rakudo was not at all ready for doing any serious Bioinformatics. The performance was really not there at all! My first attempts in Perl 6 (when the Parrot VM was in full force) left me with simple operations taking tens of minutes to execute where I’d expect millisecond level perf. This is unfortunately anecdotal because I didn’t keep good track of timings then. But it was certainly not a great starting place.

However, fast forwarding to 2014 and MoarVM, I felt comfortable writing the advent post because I was cognisant of how much things had improved in my 4 years of being a user – though back then all development was focused on finishing the language definition and a correct implementation. I am however a user that has been waiting for perf to get there. The time I think has mostly arrived. For this I have to give thanks to the tremendous daily hard work put in by all the core devs. It’s been incredible and motivating to watch it unfold. For me this Christmas is the goal Christmas, it’s arrived. 👏🏻🎊

I have been running and timing the tests for my BioInfo module, which does some basic manipulations of biological sequence data, for many years now. It does this in a really terrible way: lots of mistakes in allocation, dropping of hashes in tight loops, etc. But I’ve left this code alone, by now, for the better part of half a decade. Quietly benchmarking in private, and occasionally applauding efforts on the IRC channel when a quantum leap in perf was visible. Sub 10s was a big one! It happened suddenly from 30/40s. That jump came after I hinted on IRC at a place profiling showed my code was especially slow!

This is a bit of a long term view, if I zoom in on just this last year you can see that performance is still improving by integer factors if not large quantities of time.

Keep in mind that all of these profiles are not from released versions of the Rakudo compiler but HEAD from that day. So occasionally there is the odd performance regression as you can see above that usually isn’t left for a release.

So what’s going on? How are things getting better? There are several reasons. Many of the algorithmic choices and core built in functions in Perl 6 have been progressively and aggressively optimised at a source level (more later). But the MoarVM virtual machine backing Rakudo has also increased in its ability to optimise and JIT down to native code and inline specialised versions of code. This is in part thanks to the --profile option available with Rakudo Perl 6 since 2014 that provides all of this info.

In the above plot of how MoarVM has treated the code frames of my compiled Perl 6 tests it should hopefully be clear that since this summer there are considerably more frames being JIT compiled, fewer interpreted, and almost all of the specialised frames (orange) end up as native JIT (green). If you want to know more about the recent work on the “spesh” MoarVM code specializer you can read about it in Jonathan Worthington’s 4-part posting on his blog. Bart Wiegmans also has a blog outlining his work on the JIT compiler, and recently presented a nice talk about lots of new functionality that’s yet to land, which should hopefully let many new developers pile on and help improve the JIT. So if that feels interesting as a challenge to you I recommend checking out the above links.

So that’s my benchmark and my goals, most of which revolve around data structure creation and parsing. However, what about other stuff like numeric work? Has that kept up too, without anyone pushing the way I pushed my view of where things could be improved? The answer is yes!

Once upon a time, back in 2013 a gentleman by the name of Tim King took an interest in finding prime numbers in Perl 6. Tim was fairly upset with the performance he discovered. Rightly so. He started out with the following pretty code:

sub primes-any (Int $max) {
    my Int @primes;
    @primes.push($_) unless $_ %% any(@primes) for 2 .. $max;
    return @primes;
}

Find any prime via a junction on the definition of a prime, really a nice elegant solution! But Tim was aghast that Junctions were slow, with the above code taking him 11s to see the first 1000 primes. Today that super high level code takes 0.96s.

Being unhappy with how slow the junction based code was, Tim went on to try the more standard iterative approaches. Tim vanished from online shortly after these posts. But he left a legacy that I continued. His code for the prime benchmarks and my adaptation with results through time can be found in this gist. Below is the punchline with another graph showing the average time taken for finding the first 1000 primes over 100 trials each. The vertical lines in 2015 indicate a higher standard deviation.
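For comparison, a typical iterative trial-division version (a sketch of the general approach, not Tim's actual code from the gist) looks something like:

```perl6
# A sketch of a standard iterative approach (not Tim's actual code)
sub primes-iterative (Int $max) {
    my Int @primes;
    for 2 .. $max -> $n {
        # $n is prime if no previously found prime divides it
        @primes.push($n) unless @primes.first(-> $p { $n %% $p });
    }
    return @primes;
}

say primes-iterative(30);   # [2 3 5 7 11 13 17 19 23 29]
```

Lower level than the junction one-liner, but the same shape of work, which makes the timing comparison between the two styles meaningful.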

Again with a zoomed in view of more recently (with the latest data point worrying me a little that I screwed up somehow…)

The above convergence to a point is the overhead of starting and stopping the Rakudo runtime and MoarVM. Finding primes isn’t the effort it once was, being only marginally slower than just starting Rakudo: at least an order of magnitude faster regardless of how high level and elegant a code solution you choose.

OK, so we’ve seen MoarVM got some shiny new moving parts. But huge effort has also been put in by developers like Liz, jnthn, Zoffix and, more recently in the world of strings, Samcv to improve what MoarVM and Rakudo are actually doing algorithmically under the hood.

Sidenote: I’m sure I am not doing most other devs justice at all, especially by ignoring JVM efforts in this post. I would recommend everyone goes and checks out the commit log to see just how many people are now involved in making Rakudo faster, better, stronger. I’m sure they would like to see your thanks at the bottom of this article too!

So, saving you the job of checking out the commit log, I’ve done some mining of my own, looking at commits since last Christmas related to perf gains: things that are N% or Nx faster, like the following:

3c6277c77 Have .codes use nqp::codes op. 350% faster for short strings

ee4593601 Make Baggy (^) Baggy about 150x faster

Those two commits on their own would be an impressive boost to a programming project in the timescale of a year’s core development. But they are just two of hundreds of commits this year alone.

git log --since 2016-12-25 --oneline | perl6 -e 'say lines().grep(/(\d+\.?\d*?)\%(" "|$)/).map({/(\d+\.?\d*?)\%(" "|$)/; ~$/[0]}).sort.join("\n")' > %_increase.txt
git log --since 2016-12-25 --oneline | perl6 -e 'say lines().grep(/(\d+\.?\d*?)x(" "|$)/).map({/(\d+\.?\d*?)x(" "|$)/; ~$/[0]}).sort.join("\n")' > x_increase.txt

Below are some histograms of the numbers of commits and the % and x multiplier increase in performance they mentioned. You can grep the logs yourself with the code above. There are some even more exciting gains during 2016 worth checking out.

These really are the perf improvement commits for 2017 alone, with more landing almost daily. This doesn’t even include many of the I/O perf gains from Zoffix’ grant, as they were not always benchmarked before/after. 2016 is equally dense, with some crazy >1000x improvements. Around ten commits this year alone with a 40x improvement! This is really impressive to see. At least to me. I think it’s also not obvious to many on the project how much they’re accomplishing. Remember these are singular commits. Some are even compounded improvements over the year!

I will leave it here. But really thank you core devs, all of you. It’s been a great experience watching and waiting. But now it’s time for me to get on with some Perl 6 code in 2018! It’s finally Christmas.