Day 14 – A Perl 6 Timer: Ready, Set, Go!

Christmas is coming!

With Perl 6 promising to be ready for production next year – now is the time to stop lurking and start learning. It’s exciting! But what to try first?

OK – so I’ve downloaded Rakudo and tinkered with the REPL. Time to try translating a simple module that I often use in Perl 5 projects. It’s a simple timer that records events and renders a timed report:

my $timer = Timer->new;
$timer->record('first event');
$timer->record('second event');
say $timer->report;

Here’s the output:

[0.000] first event
[0.007] second event

It helps trace code and identify bottlenecks. Here’s a Perl 5.20 version:

package Timer;

use Time::HiRes qw(gettimeofday tv_interval);
use Mojo::Base -base;
use experimental 'signatures', 'postderef';

has '_event_log'  => sub { [] };
has '_start_time' => sub { [ gettimeofday ] };

sub record ($self, $event_description) {
    # record the event and timestamp
    my $event = {
        timestamp   => sprintf("%.3f", tv_interval($self->_start_time)),
        description => $event_description,
    };
    push($self->_event_log->@*, $event);
}

sub report ($self) {
    # render a full report
    my $report = '';
    foreach my $event ($self->_event_log->@*) {
        $report .= '[' . $event->{timestamp} . '] ' . $event->{description} . "\n";
    }
    return $report;
}

1;

Now how would this look in Perl 6?

Perl 6 does full flavour, duck quacking OO so we can replace Perl 5’s package and declare a Timer class:

class Timer {

No need for that spurious 1; at the end of the file any more either. We need to store an initial start time and a list of event records. In Perl 6, object attributes are defined with the help of ‘twigils’ – that is, secondary sigils …

class Timer {
    has $!start-time;  # private scalar storing the starting time
    has @!event-log;   # private list of events 
                       # dashes-in-identifiers save keystrokes too

The exclamation mark here in $!start-time is a secondary sigil and declares the attribute as private. The @!event-log list is also marked as private.

You can imagine the ! character is like a portcullis dropping down at the entrance of a castle – blocking external access to the attribute behind. The . twigil is used for public attributes (e.g., $.this-attribute-is-public-no-portcullis-here).

The twigil tells you at first glance what an attribute’s contract is with the class. No more context-switching and scrolling back to the top of the file to find out. Thanks Larry for saving some brain-space!
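To see the contract in action, here is a small illustrative class of my own (the names are made up, not from the original Timer):

```raku
class Castle {
    has $!drawbridge = 'up';   # private: the portcullis is down, no accessor
    has $.banner     = 'red';  # public: a read-only accessor is generated

    method peek { $!drawbridge }   # methods inside the class can still reach it
}

my $c = Castle.new;
say $c.banner;   # red
say $c.peek;     # up
# say $c.drawbridge;  # would fail: no public accessor for a private attribute
```

The `$.banner` accessor comes for free; `$!drawbridge` stays behind the portcullis unless the class itself hands it out.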

So the attributes are defined, what about the methods, record() and report()?

The record() method starts the internal timer the first time it’s called so it needs an equivalent to Perl 5’s Time::HiRes. A quick Google search brings up an example on Rosetta Code. Like the ancient Rosetta Stone, this site is useful for translating between languages and is a hangout for early-adopting polyglot programmers. Perl 5 and Perl 6 are well represented there, so it’s a great place for converting Perl 5-isms into Perl 6. It seems now is the new gettimeofday!

I had a little chat with the Perl6 REPL about this:

> now # returns an Instant object
> now.HOW # access the Higher Order Workings
> now.^methods # use ^ to access the HOW
new from-posix to-posix Bridge Num Int narrow Rat abs sign conj sqrt rand sin asin cos acos tan atan atan2 sec asec cosec acosec cotan acotan sinh asinh cosh acosh tanh atanh sech asech cosech acosech cotanh acotanh floor ceiling round unpolar cis Complex log exp truncate isNaN base Real log10 roots succ pred sleep Str perl Numeric ACCEPTS Bool gist DUMP <anon>
> now.perl # Data::Dumper in Perl 6 => Instant.new(<1127461994944/795>)
> now.gist # Human readable explanation
> my $time = now
> now - $time # can we get the difference in time?

So this will do the trick! The record() method works like a stopwatch snapshotting the difference since the start:

method record ($event_description) {
    # initialise the start-time if this is the first time through
    $!start-time //= now;

    @!event-log.push({
        timestamp   => now - $!start-time, # take the difference
        description => $event_description,
    });
}

The report() method pretty prints the @!event-log:

method report {
    my @report_lines;
    for @!event-log -> %event {
        @report_lines.push("[%event<timestamp>.fmt("%.3f")] %event<description>");
    }
    return @report_lines.join("\n");
}

Metaphorically speaking the for loop unrolls the @!event-log carpet (@) shooting values (->) into individual hash entries (%event)!

Sigils are invariant in Perl 6 so a %hash means %hash and the sigil doesn’t change while accessing a hash value: %hash<key>. Interpolation into a string just works. The full Perl 6 version looks like:

class Timer {
    has $!start-time;
    has @!event-log;

    method record ($event_description) {
        # initialise the start-time if this is the first time through
        $!start-time //= now;

        @!event-log.push({
            timestamp   => now - $!start-time,
            description => $event_description,
        });
    }

    method report {
        my @report_lines;
        for @!event-log -> %event {
            @report_lines.push("[%event<timestamp>.fmt("%.3f")] %event<description>");
        }
        return @report_lines.join("\n");
    }
}

I’m happy with that!

Now … what can I learn next?

How about converting the class into a Perl 6 role instead? There’s a learning challenge for you: can you turn it into a Perl 6 role?
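If you want a head start, here is one possible sketch of my own (not the only way to do it), reusing the method bodies from above:

```raku
role Timing {
    has $!start-time;
    has @!event-log;

    method record ($event_description) {
        # initialise the start-time if this is the first time through
        $!start-time //= now;

        @!event-log.push({
            timestamp   => now - $!start-time,
            description => $event_description,
        });
    }

    method report {
        my @report_lines;
        for @!event-log -> %event {
            @report_lines.push("[%event<timestamp>.fmt("%.3f")] %event<description>");
        }
        return @report_lines.join("\n");
    }
}

# any class can now mix the timing behaviour in
class Stopwatch does Timing { }

my $timer = Stopwatch.new;
$timer.record('first event');
say $timer.report;   # something like: [0.000] first event
```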

Let me start the timer … Ready, Set, Go!

Day 13 – String Interpolation and the Zen Slice

So you think you know all about string interpolation in Perl 6?

Well, especially coming from Perl 5, you may find some things that do not work exactly as you’re used to. The simple examples all work, as it seems:

my $a = 42;
say 'value = $a'; # value = $a
say "value = $a"; # value = 42

my @a = ^10;
say 'value = @a'; # value = @a
say "value = @a"; # value = @a HUH??

In earlier versions of Perl 5 (or was it Perl 4?), this gave the same result. At one point, it was decided that arrays would be interpolated in double quoted strings as well. However, this caused quite a few problems with double quoted texts with email addresses in them: you would need to escape each @ (if you were using strict, which you of course should have). And if you forgot, and there was an array that just happened to have the same name as the user part of an email address, you would be in for a surprise. Or if you didn’t use strict, you would suddenly have the text without the email address. But then again, you got what you asked for.

So how can we make this work in Perl 6?

Introducing the Zen slice

The Zen slice on an object returns the object. It’s like you specify nothing, and get everything. So what does that look like?

my @a = ^10;
say "value = @a[]"; # value = 0 1 2 3 4 5 6 7 8 9
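This also means the Perl 5 email-address annoyance described above is gone: a bare @ in a double quoted string is harmless, and only a postcircumfix (or a method call with parentheses) triggers interpolation. A quick sketch (my own variable names):

```raku
my @ent = <some values>;
say "mail me at adv@ent.example";  # mail me at adv@ent.example
say "value = @ent[]";              # value = some values
```

No escaping needed for the email address, even though an array named @ent is in scope.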

You will have to make sure that you use the right indexers for the type of variable that you’re interpolating.

my %h = a => 42, b => 666;
say "value = %h{}"; # value = a 42 b 666

Note that the Zen slice on a hash returns both keys and values, whereas the Zen slice on an array only returns the values. This seems inconsistent, until you realize that you can think of a hash as a list of Pairs.

The Zen slice only really exists at compile time. So you will not get everything if your slice specification is an empty list at runtime:

my @a;
my %h = a => 42, b => 666;
# a slice, but not a Zen slice:
say "value = %h{@a}"; # value =

So the only way you can specify a Zen slice, is if there is nothing (but whitespace) between the slice delimiters.

The Whatever slice

The * (Whatever) slice is different. The Whatever will just fill in all keys that exist in the object, and thus only return the values of a hash.

my %h = a => 42, b => 666;
say "value = %h{*}"; # value = 42 666

For arrays, there isn’t really any difference at the moment (although that may change in the future when multi-dimensional arrays are fleshed out more).

Interpolating results from subs and methods

In double quoted strings, you can also interpolate subroutine calls, as long as they start with an ‘&’ and have a set of parentheses (even if you don’t want to pass any arguments):

sub a { 42 }
say "value = &a()"; # value = 42

But it doesn’t stop at calling subs: you can also call a method on a variable as long as they have parentheses at the end:

my %h = a => 42, b => 666;
say "value = %h.keys()"; # value = b a

And it doesn’t stay limited to a single method call: you can have as many as you want, provided the last one has parentheses:

my %h = a => 42, b => 666;
say "value = %h.perl.EVAL.perl.EVAL.perl()"; # value = ("b" => 666, "a" => 42).hash

Interpolating expressions

If you want to interpolate an expression in a double quoted string, you can also do that by providing an executable block inside the string:

say "6 * 7 = { 6 * 7 }"; # 6 * 7 = 42

The result of the execution of the block is what will be interpolated in the string. Well, what really is interpolated in a string is the result of calling the .Str method on the value. This is different from just saying a value, in which case the .gist method is called. Suppose we have our own class with its own .gist and .Str methods:

class A {
    method Str { "foo" }
    method gist { "bar" }
}
say "value = { A }"; # value = foo
say "value = ", A;   # value = bar


String interpolation in Perl 6 is very powerful. As you can see, the Zen slice makes it easy to interpolate whole arrays and hashes in a string.

In this post I have only scratched the surface of string interpolation. Please check out Quoting on Steroids in a few days for more about quoting constructs.

Day 12 – Towards cleaner JVM-Interoperability

As some of our readers might remember, interoperability on Rakudo JVM has been described as “basic” by jnthn++ last year. Which is to say, calling methods on Java objects inside Rakudo on the JVM was possible, but it wasn’t always pretty or convenient. As a reminder:

    use java::util::zip::CRC32:from<java>;
    my $crc = CRC32.new();
    for 'Hello, Java'.encode('UTF-8').list {
        $crc.'method/update/(I)V'($_);
    }
    say $crc.getValue();

In this post I want to describe my journey towards figuring out why the method call looks like that and what we can do to improve it.

What’s wrong with that method call?

The reason behind the unusual method call is that there’s no multi-method dispatch (MMD) for overloaded methods on Java objects from the Perl 6 side. Sure enough, checking the javadoc for java.util.zip.CRC32 reveals that update() is overloaded with three candidates. Readers familiar with the JVM might notice that the method we call on $crc is a method descriptor, defined in the JVM specification as

A method descriptor represents the parameters 
that the method takes and the value that it returns.

In our case this means “a method called update which takes a primitive int and returns void” [1]. Clearly Rakudo should figure out on its own which candidate we want, considering Rakudo has MMD on Perl 6 objects and Java has compile-time dispatch on its own objects as well. Let’s see what it takes!

The nitty-gritty, if only some of it

Let’s start at the top of the code example. We’re starting with a use statement which by and large does the same in Perl 6 as it does in Perl 5, except we are supplying a longname consisting of the module name and the colonpair :from<java>. The colonpair with key from tells the ModuleLoader to not use the default Perl 6 module loader, but a different one, in our case the JavaModuleLoader.

In JavaModuleLoader::load_module we’re starting to tread towards vm-level code. After making sure we have an interop loader, we call out to Java code with typeForNameFromJar() or typeForName() respectively. This is where we’re leaving NQP code and entering Java code on our trail. Next stop: org.perl6.nqp.runtime.BootJavaInterop

typeForName() and typeForNameFromJar() both do some amount of path-munging to find the .class or .jar file, build a ClassLoader with the path where they found those files and pass the loaded class to getSTableForClass. A STable, or shared table, represents a pairing of a HOW and a REPR, that is, a pairing between a metaobject and the vm-level representation of an object of that type. Creation of a STable happens lazily, via a ClassValue, where the InteropInfo remembers the Java class it represents as well as the computed interop and STable. The important thing we’re looking for is set inside computeInterop, where the documentation rightfully states the gory details begin. The details in question concern themselves with bytecode generation via the framework org.objectweb.asm, although most of the aforementioned details are not particularly important at this stage. What is important though is the following bit:

    HashMap<String, SixModelObject> names = new HashMap<>();
    // ...
    for (int i = 0; i < adaptor.descriptors.size(); i++) {
        String desc = adaptor.descriptors.get(i);
        SixModelObject cr = adaptorUnit.lookupCodeRef(i);
        int s1 = desc.indexOf('/');
        int s2 = desc.indexOf('/', s1+1);
        // create shortname
        String shorten = desc.substring(s1+1, s2);
        // XXX throw shortname => coderef away if shortname is overloaded
        names.put(shorten, names.containsKey(shorten) ? null : cr);
        // but keep the descriptor
        names.put(desc, cr);
    }
    // ... = names;

The adaptor object is a dataholder for a multitude of things we need while generating the adaptor, for example the name of the adaptor class or a list of the constants it has to contain, or even the CompilationUnit we generated in bytecode. The adaptorUnit is a shorthand for the mentioned CompilationUnit. What happens here is that we construct the MethodCache (which is a HashMap) by extracting the shortname of the method out of the descriptor and adding the shortname as key with the CodeRef as value, as long as we only have one candidate. To be sure we aren’t forgetting anything, we also add the descriptor as key to the MethodCache with the same CodeRef. Thus we have figured out why the method call in the original example has to look the way it does: we don’t even know the method by its shortname.

Great. How do we fix it?

Dynamically. Which is a bit complicated on the JVM, because Java is statically typed. Luckily JSR 292 [2] has been approved some time ago which means we have an instruction available that “supports efficient and flexible execution of method invocations in the absence of static type information”. The basic mechanics of this are as follows:

    • While generating bytecode for the runtime of our language we insert an invokedynamic instruction in a place where we want to be able to decide at runtime which method we want to invoke.
    • This instruction references a slightly special method (usually called a bootstrap method) with a specific signature. Most importantly this method has to return a java.lang.invoke.CallSite.
    • When the invokedynamic instruction is encountered while executing the bytecode, the bootstrap method is called.
    • Afterwards the resulting CallSite is installed in place of the invokedynamic instruction, and on repeated calls the method installed with the CallSite will be invoked instead of the bootstrap method.

To be able to catch methods that we want to treat as multi on the Perl 6 side, we have to change how the adaptors are generated. Recall that we currently only know which methods are overloaded after we generated the adaptor, thus we’re too late to insert an invokedynamic as adaptor method. So we override BootJavaInterop.createAdaptor, and instead of walking through all methods and simply creating an adaptor method directly, we additionally memorize which methods would end up having the same shortname and generate an invokedynamic instruction for those as well.

There’s one more problem though. The fact that we have a shortname that should dispatch to different methods depending on the arguments means that we can’t take full advantage of installing a CallSite. This is because any given CallSite always dispatches to exactly one method, and method signatures in Java are statically typed. Luckily we can instead resolve to a CallSite which installs a fallback method, which does the actual resolution of methods. [3]

To summarize this briefly: via invokedynamic we install a CallSite that dispatches to a fallback method, which converts the Perl 6 arguments to Java objects and then looks among all methods with the same shortname for one that fits the argument types. I won’t paste the org.objectweb.asm instructions for generating bytecode here, but the fallback method looks approximately as follows:

Object fallback(Object intc, Object incf, Object incsd, Object... args) throws Throwable {
    // some bookkeeping omitted
    Object[] argVals = new Object[args.length];
    for (int i = 0; i < args.length; ++i) {
        // unboxing of Perl 6 types to Java objects happens here
    }

    int handlePos = -1;
    OUTER: for (int i = 0; i < hlist.length; ++i) {
        Type[] argTypes = Type.getArgumentTypes(((MethodHandle)hlist[i]).type().toMethodDescriptorString());
        if (argTypes.length != argVals.length) {
            // Different amount of parameters never matches
            continue OUTER;
        }
        INNER: for (int j = 1; j < argTypes.length; ++j) {
            if (!argTypes[j].equals(Type.getType(argVals[j].getClass()))) {
                switch (argTypes[j].getSort()) {
                    // comparison of types of the unboxed objects with
                    // the types of the method parameters happens here
                }
                // if we didn't match the current signature type we can skip this method
                continue OUTER;
            }
        }
        handlePos = i;
    }
    if (handlePos == -1) {
        // die here, we didn't find a matching method
    }

    // create a MethodHandle with a boxed return type
    MethodType objRetType = ((MethodHandle)hlist[handlePos]).type().changeReturnType(Object.class);
    // and convince our adaptor method to return that type instead
    MethodHandle resHandle = ((MethodHandle)hlist[handlePos]).asType(objRetType);

    MethodHandle rfh;

    try {
        // here's where we look for the method to box the return values
    } catch (NoSuchMethodException|IllegalAccessException nsme) {
        throw ExceptionHandling.dieInternal(tc,
            "Couldn't find the method for filtering return values from Java.");
    }

    MethodHandle rethandle = MethodHandles.filterReturnValue(resHandle, (MethodHandle)rfh);
    return ((MethodHandle)rethandle).invokeWithArguments(argVals);
}

The curious may check the whole file here to see the omitted parts (which includes heaps of debug output), although you’d also have to build NQP from this branch for the ability to load .class files as well as a few fixes needed for overriding some of the methods of classes contained in BootJavaInterop if you wanted to compile it.

The result of these changes can be demonstrated with the following classfile (which has to be compiled with javac):

public class Bar {

    public String whatsMyDesc(int in, String str) {
        String out = "called 'method/answer/(ILjava/lang/String;)Ljava/lang/String;";
        out += "\nparameters are " + in + " and " + str;
        return out;
    }

    public String whatsMyDesc(String str, int in) {
        String out = "called 'method/answer/(Ljava/lang/String;I)Ljava/lang/String";
        out += "\nparameters are " + str + " and " + in;
        return out;
    }
}

and this oneliner:

$ perl6-j -e'use Bar:from<java>;
my $b = Bar.new;
say $b.whatsMyDesc(1, "hi");
say $b.whatsMyDesc("bye", 2)'
called 'method/answer/(ILjava/lang/String;)Ljava/lang/String;
parameters are 1 and hi
called 'method/answer/(Ljava/lang/String;I)Ljava/lang/String;
parameters are bye and 2

Closing thoughts

While working on this I discovered a few old-looking bugs with marshalling reference type return values. This means that the current state can successfully dispatch among shortname-methods, but only value types and java.lang.String can be returned without causing problems, either while marshalling to Perl 6 objects or when calling Perl 6 methods on them. Additionally, there’s a few cases where we can’t sensibly decide which candidate to dispatch to, e.g. when two methods only differ in the byte size of a primitive type. For example one of

    public String foo(int in) {
        // ...
    }
    public String foo(short in) {
        // ...
    }

is called with a Perl 6 type that does not supply byte size as argument, let’s say Int. This is currently resolved by silently choosing the first (that is, declared first) candidate and not mentioning anything about this, but should eventually warn about ambiguity and give the method descriptors for all matching candidates, to facilitate an informed choice by the user. Another, probably the biggest problem that’s not quite resolved is invocation of overloaded constructors. Constructors in Java are a bit more than just static methods and handling of them doesn’t quite work properly yet, although it’s next on my list.

These shortcomings obviously need to be fixed, which means there’s still work to be done, but the thing I set out to do – improving how method calls to Java objects look on the surface in Perl 6 code – has definitely been achieved.

Addendum: As of today (2015-01-04) the works described in this advent post have been extended by a working mechanism for shortname constructors, the marshalling issues for reference types have been solved and the resulting code has been merged into rakudo/nom. Note that accessors for Java-side fields are not yet implemented on the Perl 6 side via shortname, so you’ll have to use quoted methods of the form “field/get_$fieldName/$descriptor”() and “field/set_$fieldName/$descriptor”($newValue) respectively, where $fieldName is the name of the field in the Java class and $descriptor the corresponding field descriptor. That’s next on my list, though.
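For a hypothetical Java class Counter with a field declared as public int counter (both names made up purely for illustration, and “I” being the JVM field descriptor for int), that would look something like:

```raku
use Counter:from<java>;   # hypothetical Java class with 'public int counter;'

my $obj = Counter.new;
say $obj."field/get_counter/I"();   # read the field via its descriptor
$obj."field/set_counter/I"(42);     # write it
```

The quoted-method-call syntax lets Perl 6 invoke a method whose name isn’t a valid identifier, which is exactly what these descriptor-shaped accessors are.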

Day 11 – So, what does MoarVM do with your Perl 6 code?

The first day of the advent calendar remarked that during 2014, MoarVM became the de facto standard virtual machine to run Perl 6 on. And, with Perl 6 on course to hit 6.0.0 in 2015, MoarVM is what most people adopting Perl 6 at this point will be working with. In this post, we’ll take a look at what MoarVM brings to the table, and where it’s heading from here.

But first – why a VM?

Many modern languages depend on a good deal of runtime support. Perl 6 is very much in this camp, and the same goes for Perl 5. A VM is in the business of providing that support. With Perl 5, the word “interpreter” tends to be used over VM, but the goal is the same: provide the things needed to execute programs in the language across disparate hardware and operating system environments. The difference is mostly that we tend to use VM when there’s a clean separation of the language compiler and the runtime support – meaning that the VM might also be used to execute other languages. This has happened with the JVM, for example. MoarVM is squarely aimed at being a good host for the Rakudo Perl 6 compiler, and the NQP subset of Perl 6 that Rakudo is mostly implemented in. If any other language finds it useful, that’s fine, but it’s not a primary design consideration.

So why care so much over a clean interface between compiler and runtime? Because, in a project the size of “implement Perl 6”, managing implementation complexity is crucial. Establishing a clean boundary between the compiler and the runtime, and knowing what each side is responsible for, is one strategy for doing that. (Rakudo’s compilation process is also broken up into a pipeline of stages that only communicate by passing well-defined data structures from one to the next, which is another form of architectural boundary.)

But why interpreters and VMs at all? Why not go straight down to the metal? For any language where we can reasonably figure out a lot about the program statically, that is a fine strategy. The information needed to eliminate many forms of late binding, checks, and duplicate work is to hand at compile time, and a decent optimizer will be able to use that information. Of course, this approach restricts us to what can be seen at compile time. Some compilers as a result support profile-guided optimization, where we run the program on typical workloads, collect information about where time is spent and what code-paths are common and uncommon, and then compile the program again taking that data into account.

Profile-guided optimization hints at a strategy for efficiently executing languages where we can know rather less at compile time: put off most optimization until we observe the program running, and dynamically replace parts of the program with optimized versions. This isn’t a new idea, and there has been a massive amount of work done in this area. At its heart is the general principle that most programs are, if I may pun on a term from the database field, “eventually static”. Even though a Perl 6 program may have a huge number of points of potential dynamism and genericity, in most programs just a handful of them actually ever exploit it. Of course, different runs of the program on different inputs may tickle different bits of that dynamism, not to mention that as a program grows and evolves, these flexible points will be exploited here and there to (hopefully cleanly) add new functionality.

A modern VM aiming to support a language like Perl 6 is not, therefore, just there to run the code and provide services like garbage collection, OS abstraction, and so on. Its job is to take a program in all its dynamic glory, watch what code paths it actually takes, see what types really show up – and then produce optimized versions of carefully selected parts of the program, with the unused dynamism ruthlessly torn out, and much of the remaining dynamism massaged to more restricted and faster forms.

Bytecode specialization

So what exactly does MoarVM do with a program? First, it needs to find code worth considering optimizing. This is done by seeking the hot parts of a program: routines that are called again and again, and loops that are doing a lot of iterations. Since time to optimize a routine is proportional to the size of the routine, the threshold for “it’s worth considering” goes with code size. A small routine – such as a simple operator or an accessor method – has rather little code and will become hot quickly. A larger routine heats up more slowly. It’s just like our kettle: a little water boils quickly, but fill it up and we’re going to be waiting a bit for that cuppa. This is in part about risk management: we would prefer to avoid investing time optimizing code that’s never going to give a positive return on investment. We can’t predict the future, but some cheap, simple heuristics are at least a good start.

So, we’ve got some hot code. What now? Well, if it’s a call, we start by looking at the callsites that are invoking it. The callsite tells us how many arguments are being passed, whether they are native values or objects, and – for named arguments – what their names are. We can then start to produce bytecode specialized by callsite. Know that the object parameter is passed an object argument? Then replace the op that checks if we need to box the incoming argument with a simpler op that just copies the pointer. Know that an optional parameter actually doesn’t get passed? Then strip out the code to look for it, toss the branch, and just always run the code that sets the default value. Know that the named parameter “config” is always the first passed named argument? Then retrieve it by indexed access (so a cheap pointer copy), rather than messing about with strings.

Next up: types. If we are passed arguments, what types are they? If we look up constants, what types do those have? Knowing these is relatively easy: we just peek at the args buffer to see what’s being passed, or look up the constant and see what type it is. However, knowing them also presents us with a huge range of opportunities. See that method call? It doesn’t need to look up the method in a table each call any more, since we know where it’s going. And that attribute access? Since we know the object layout, it can just become a pointer offset from the object’s memory address. That type check? We often know the answer statically now – and can eliminate entire branches that won’t be taken. Oh, and that multiple dispatch? Let’s just resolve it once now we know the types we’re dealing with, not every single call. The list goes on.

Specialization works at the bytecode level, because that’s what the VM gets. And a specialization of a given piece of bytecode comes with a set of guards: conditions that must be met for it to be applicable. Those can constrain it by callsite and argument types. And we can produce multiple specializations for a given piece of original bytecode.

Of course, choosing the appropriate specialization is a cost too. It would eat into our winnings if we had to do that on every call. Thankfully, there are ways to avoid that. If we are optimizing a call to something and know the argument types being passed, we can pick the specialization of the callee and optimize the caller to always call that specialization, rather than having to hunt for the right one each time. And, if the callee is rather small, we can often do away with the call entirely and simply inline the callee into the caller – eliminating the need to create a callframe and keeping the code tighter, which CPU caches like.

Optimization speculation

So, all this is nice, but we want MOAR MOAR MOAR. Because there are lots of other things that we can’t statically figure out the types of, but that may end up having very stable types come runtime. Some examples are lexical variables that we close over, object attributes, and return values of our callees. So how do we get hold of these types?

When code becomes hot, we don’t actually optimize it right away. Well, we do for the simple callsite-related transformations. But before going and doing the type-driven stuff, we take a closer look at what’s actually going on in the program. The specializer quickly produces an instrumented version of the program that records what types show up where. This runs for the next bunch of invocations, and logs what it sees. After a threshold is hit, we have a look at the types that showed up and do a little analysis. If a given logging site consistently logs the same type, we’re onto something. If we see more than one type show up there, we consider that part of the program as being typically dynamic and leave it be.

The thing is, we didn’t actually prove a property of the program here, just profiled it and figured out what tends to happen. So how can we use this information? The trick is to insert a guard into the specialized program that cheaply asserts that we got the expected type. Beyond that guard, we can optimize assuming we really did. (We also make sure that we actually do an optimization based upon what the guard checks, and don’t bother to insert it otherwise.)

Which is all well and good when the guard is met, but what about when the assertion fails? In fact, this is a more general problem. Even the far simpler type-based optimizations on incoming parameters assume that an object’s type will not change – and of course, thanks to mixins, they can. All of these situations trigger deoptimization: replacing the optimized code we’re in the midst of running with the unoptimized, naive, but safe version that is dynamic enough to handle all the cases.
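As a tiny Perl 6 illustration of the kind of runtime type change that can invalidate such assumptions (my own example, not from the post):

```raku
class Point { has $.x = 0 }
role Verbose { method describe { "x = $.x" } }

my $p = Point.new(x => 42);
# hot code may have been specialized assuming $p stays a plain Point ...
$p does Verbose;     # ... but a mixin changes its type at runtime
say $p.describe;     # x = 42
```

Any optimized code that guarded on $p being a plain Point has to deoptimize at the point of the mixin and continue on the safe, dynamic path.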

Deoptimization is quite a pain to get right. For one, if we were inside of inlined code, we have to go back and create all the callframes we skipped. Since inlining can be many levels deep, that can be a bunch of callframes to wire up. Then we have to make sure that our optimizations don’t throw away code that produces a value the deoptimized code relies on being there, but the optimized code would otherwise not. Of course, we still need to do dead code elimination in general; it’s just that it has to be aware of deoptimization.

Therefore, it’s not only important that MoarVM can optimize code. It’s also critical to its ability to do so that it can also deoptimize, falling back to slow paths when needed.

Down to machine code

Interpreting specialized bytecode can yield, in some programs, a significant improvement. The simple program:

my $i = 0; while ++$i <= 100000000 { }

got a factor of 2.5 improvement with what we had several months ago, and there are still a lot of opportunities left (some of which we’ve already explored, with others yet to go). However, interpreting – that is, looping over an instruction stream and choosing what to do for each instruction – has its overhead. Before the specialization process eliminates a lot of the dynamism, interpretation is only a modest share of the cost: many ops have checks to do anyway, and so they cost a bit in themselves. But specialized code tends to consist of a lot of very cheap operations that just play with pointers – and then the overhead of interpreting can quickly come to account for the majority of the execution time.
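To see where that overhead lives, here is a toy stack-machine interpreter in Python. This is not MoarVM's instruction set, just the shape of any dispatch loop: the fetch/decode machinery runs for every single instruction, which is exactly what compiling to machine code eliminates:

```python
def interpret(code):
    """Run a tiny stack-machine program; every op pays dispatch overhead."""
    stack, pc = [], 0
    while pc < len(code):          # fetch the next instruction...
        op, *args = code[pc]
        if op == "push":           # ...decode which op it is...
            stack.append(args[0])  # ...and finally do the actual work
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        pc += 1
    return stack.pop()

# (2 + 3) * 4
program = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
print(interpret(program))  # 20
```

For cheap ops like these, the loop and the branching dominate the useful work; machine code keeps only the useful work.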

Therefore, on x64, we can often take the further step of producing machine code to run, instead of specialized bytecode to interpret. Of course, we have to be ready to deoptimize from that too. I’ll save talking about the machine code part too much, since the guy who worked on it is, I believe, plotting an advent post about it in the near future. Needless to say, it makes a dramatic difference; for the benchmark above it more than halved the time again relative to the specialized bytecode.

We often referred to the “produce machine code” part of the work as “the JIT”. But really, the entire set of optimizations described in this article are part of the typical work a JIT compiler does. Producing machine code is just one small part, and it’s interesting to note that the win from eliminating dynamism was a bigger one than eliminating interpretation overhead.

What next?

The work so far has had a dramatic impact on performance. However, there are still plenty of areas for improvement.

For one, the number of specializations produced and how we choose them can certainly use some tweaking. It’s possible for a routine to saturate its number of allowed specializations with ones that, in the long run, are not entirely strategic. Or we may be able to reasonably argue that it should be allowed more. At the same time, specialized bytecode is a notable memory overhead at this point. So, we need to produce both more and fewer specializations. Of course, what that really means is we need a better strategy for picking what is worth doing.

Another clear issue is that some speculative optimizations turn out to be bad bets, and we have no way of recovering from that. We can deoptimize pretty fast (in fact, some trade-offs have been made to have it be quite affordable). But it’s still a cost, and it’s possible a gamble that looked good based on available data may lead to code that has to be deoptimized a majority of the time – and thus run slower than if we’d never bothered. Again, it’s about being smarter and keeping on learning as we run the program.

Then there’s a bunch of things that we don’t specialize or compile to machine code in a good way yet. Array access is an obvious one, and big integer operations – which are at the heart of Perl 6’s Int type – are another. The specializer is also not as aware as it really should be of closures. These are all areas where we could do a whole lot better – and they are nice, incremental advances on what we already have.

A further interesting area for exploration is doing escape analysis, and being able to eliminate a bunch of GC allocations in favor of allocating the memory as part of the call frame, because we know it can never end up being used beyond it. This is a fair amount of work, but also highly relevant to Perl 6: many scalar containers for lexical variables would be great candidates.

Then there’s the whole idea of trace JIT. MoarVM so far has a routine-level JIT, often called a method JIT. There, we optimize at routine level first, and may do some inlining to flatten away the call tree. This is a pretty decent strategy for Perl 6, in so far as we’re often very interested in inlining primitive operators that, naively interpreted in the original dynamic program, would be multiple dispatch calls. Trace JITs, by contrast, just follow the flow of instructions, over however many callframes, and every single conditional branch becomes a guard. Their strength is loops, which today we cope with using On Stack Replacement (detect we’re in a hot loop, compile an optimized version of it, and hot-swap it). While it’s easy to talk about trace JIT vs. method JIT as a “pick one”, I’m much more interested in the “pick both” track. They both, after all, share a lot of infrastructural needs – and they have their strengths and weaknesses. There’s more than one way to dynamically optimize it, and, this being Perl, we might as well just do all the ways – spotting all the nice unifications and re-use opportunities along the way.

Day 10 – Introspecting the Symbol Tables

Perl 6 is designed with extensibility in mind. And when you want to extend something, you often need to know as much as possible about the environment.

Today we’ll look at an aspect of finding out about the environment: introspecting symbol tables.

A symbol is something you refer to by name, so it could be a package, class, variable, routine, constant etc.

A symbol table is a data structure in which symbols can be looked up. Perl 6 has three main types of symbol tables, differentiated by scope: lexical symbols live in the lexical pad, package symbols live in a Stash, and methods in a method table.

Enough theory, time for action:

    $ perl6 -e 'my $x = 42; say MY::.keys'
    $x $! $/ $_ GLOBALish EXPORT $?PACKAGE ::?PACKAGE $=pod !UNIT_MARKER

MY is a pseudo package representing the current scope, and appending two colons gives us its symbol table. Which in turn roughly behaves like a Hash, so we can use a method like keys to find all symbols in that table. Or look up a string there:

    $ perl6 -e 'my $x = 42; say MY::<$x>'

A complicated way to say $x, but you can use it to look up symbols by name:

    $ perl6 -e 'my $x = 42; my $var = q[$x]; say MY::{$var}'

Or if you don’t care if it comes from the current lexical scope, just from somewhere:

    $ perl6 -e 'my $x = 42; my $var = q[$x]; say ::($var)'

No need to tell you that in general, this is a bad idea; just use it when you have exceptional circumstances.

But wait, what were all those other symbols from the first line? $!, $/ and $_ are the three “special” variables for errors, matches and the current topic; the rest doesn’t strictly need to be in lexical scope, but the compiler puts it there because that makes its life easier.

Outer lexicals are accessible through the OUTER:: pseudo package:

    $ perl6 -e 'my $x = 1; { my $x = 2; say OUTER::<$x>; say $x }'

Instead of OUTER::<$x>, you can also say $OUTER::x. In general, you can move the sigil to the front and omit the angle brackets if the symbol name is a literal.

Package declarations and explicitly our-scoped symbols generally go into a stash, with the top-most being GLOBAL::

    class A {
        class B { }
    }
    say GLOBAL::<A>;    # (A)
    say A;              # (A)
    say A::.keys;       # B

The double colon is not the only way to get at a symbol table; you can also use .WHO (which also works on non-literal expressions). So a long way to write A::B is

    say A::B === GLOBAL::<A>.WHO<B>;    # True

Finally, methods live in classes and roles, and you can use the meta object to introspect them; method lookups generally respect inheritance and role composition, so they are a bit more complicated than stashes.

    my $m = Str.^can('split')[0];
    say $m.^name;               # Method
    say $m('a1B', /\d/).perl;    # ("a", "B").list

Here .^can(name) returns a list of methods on that object with a given name. It’s a list, because it includes methods from superclasses, so there can be more than one. To get a list of all available methods, you can use .^methods and .^methods(:all):

    $ perl6-m -e 'say  Str.^methods(:all).grep: {.name ~~ /^m/ }'
    match match map min max minmax

.^methods(:all) includes methods from all superclasses; without the :all, methods from the classes Cool, Any and Mu are excluded.

This concludes our short tour through the land of the symbol table. There is more to see (for example the EXPORT package, and various pseudo packages), and the daring can go to the design documents or play with Rakudo’s implementation, which is fairly robust.

Day 9 – Data munging in Perl 6 vs Perl 5

One thing that Perl has traditionally been used for a lot is writing small scripts which read some data (usually from a file), put it into a data structure, transform said data structure, and print some output based on it.

I believe that Perl 6, too, has the potential to appeal to people with such data wrangling needs (who may not necessarily be professional programmers, e.g. powerusers/sysadmins or students/scientists). Perl 6 does not share Perl 5’s selling-point of coming pre-installed on most *nix systems, but it entices with convenience features that allow script writers to focus more on what they’re trying to accomplish, and less on low-level technical details.

Let me showcase and compare an idiomatic Perl 5 solution and an idiomatic Perl 6 solution for the same simple data munging use-case (which might be a little contrived, but practical enough to be transferable to more complex problems), so you can form your own opinion:

Continue reading “Day 9 – Data munging in Perl 6 vs Perl 5”

Day 8 – A HTTP User Agent with SSL/TLS


This year’s Google Summer of Code brings us some cool new stuff.
One part of it is web crawling, which comes with the HTTP::UserAgent and IO::Socket::SSL modules!

You all know what web crawling is, so I will describe it in just a few words:
Web crawling (limited to the purposes of this article) is everything we do with websites using scripts, sending and receiving requests.


We can write a simple Web Crawler with just a few lines of code.

 use HTTP::UserAgent;
 my $ua = HTTP::UserAgent.new(useragent => 'firefox_linux');
 my $response = $ua.get: "";

Simple as that.

What may draw your attention is the second line, where we use the magic phrase ‘firefox_linux’.
It automatically fills in the User-Agent header field for us. The rule is simple: write ‘browser_system’ and the corresponding user agent string will be used.
The list of predefined user agents can be found here; feel free to add more.

We want to decode the content.

 my $content = $response.decoded_content;

As we have the content decoded, we can do everything we want with it. It’s simple, isn’t it?

TLS support

The most fascinating thing that comes with this project is SSL/TLS support, built on the OpenSSL library.
To use SSL/TLS we must install IO::Socket::SSL.

 panda install IO::Socket::SSL

And again, that’s it, simple.

All of that opens up opportunities for modules like WWW::Mechanize, Net::GitHub and so on.

I really want these modules to be available in Perl 6 — do you want to help us write them? Join us. :)

Merry Christmas!

Day 7 – that’s how we .roll these days

This is a public service announcement.

Since Wolfman++ wrote his blog post about .pick back in the year two thousand nine (nine years after the distant future), we’ve dropped the :replace flag, and we’ve gained a .roll method. (Actually, the change happened in September 2010, but no-one has advent blogged about it.)

So this is the new way to think about the two methods:

.pick    like picking pebbles out of an urn, without putting them back
.roll    like rolling a die

I don't remember how I felt about it back in 2010, but nowadays I'm all for keeping these methods separate. Also, it turns out, I use .roll($N) a whole lot more. In that light, I'm glad it isn't spelled .pick($N, :replace).
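For readers coming from other languages, the distinction maps cleanly onto Python's random module. This is an analogy only, and has nothing to do with Rakudo's implementation:

```python
import random

urn = ["Rock", "Paper", "Scissor"]

# like .pick($N): without replacement -- no pebble comes out of the urn twice
picked = random.sample(urn, 2)

# like .roll($N): with replacement -- the same face can come up repeatedly
rolled = [random.choice(urn) for _ in range(5)]

print(picked)  # two distinct elements of urn
print(rolled)  # five elements, possibly with repeats
```

The separation into two functions mirrors the separation into two methods: sampling and rolling really are different operations.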

Just to give a concrete example:

> say <Rock Paper Scissor>.roll
Rock                    # good ol' Rock

.roll is equivalent to .roll(1). When we're only interested in one element, .roll and .pick work exactly the same since replacement doesn't enter into the picture. It's a matter of taste which one to use.

I just want to show two more tricks that .roll knows about. One is that you can do .roll on a Bag:

> my $bag = (white-ball => 9, black-ball => 1).Bag;
bag(white-ball(9), black-ball)
> say $bag.roll         # 90% chance I'll get white-ball...
white-ball              # ...told ya

Perfect for all your weighted-randomness needs.
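The Bag-based weighting has a direct counterpart in Python's random.choices, in case you want to check the 90/10 intuition outside of Perl 6 (again just an analogy, not how Bag.roll is implemented):

```python
import random

random.seed(42)  # make the example repeatable
bag = {"white-ball": 9, "black-ball": 1}

# draw 10,000 weighted samples with replacement, like repeated $bag.roll
draws = random.choices(list(bag), weights=list(bag.values()), k=10_000)

white_ratio = draws.count("white-ball") / len(draws)
print(white_ratio)  # typically close to 0.9
```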

The other trick is that you can .roll on an enumeration:

> enum Dwarfs <Dipsy Grumpy Happy Sleepy Snoopy Sneezy Dopey>
> Dwarfs.roll
Happy                   # me too!
> enum Die <⚀ ⚁ ⚂ ⚃ ⚄ ⚅>
> Die.roll
> Die.roll(2)
⚀ ⚀                     # snake eyes!
> Die.roll(5).sort
⚀ ⚀ ⚁ ⚂ ⚃

The most frequent use I make of this is the idiom Bool.roll for randomly tossing a coin in my code.

if Bool.roll {
    # heads, I win
}
else {
    # tails, you lose
}
Bool is an enum by spec, even though it isn't one in Rakudo (yet) for circularity saw reasons.

> say Die.HOW
> say Bool.HOW    # audience goes "awwww!"

This has been a public service announcement. Enjoy your holidays, and have a merry Perl 6.

Day 6 – Running External Programs from Perl 6

There are several ways to run an external program in Perl 6, let’s take a look at each way in turn.


Shell

The shell function passes a string expression to the system shell for execution. Its return value is the exit code of the program:

    my $exit_code = shell 'echo "Hello, World!"';
    say $exit_code; # 0 == success

If you want redirects, pipes, globs and other shell features to work in your command lines, this is the way to go.


Run

The run function executes a program directly, without involving a shell. It accepts a program name and an optional list of arguments to pass to the external program. Like shell, the return value is the exit code of the program:

    my $exit_code = run 'echo', 'Hello, World!';
    say $exit_code; # 0 == success

Capturing Output

What if you wanted to capture the output of a command, rather than just know whether it was executed successfully? Use the quoting operators q and qq with the :x adverb to execute an expression and capture its output:

    my $message = q:x/echo "Hello, World!"/;
    say $message; # Hello, World!

If you want to interpolate the expression, use qq:

    my $audience = 'Everyone';
    my $message = qq:x/echo "Hello, $audience!"/;
    say $message; # Hello, Everyone!

With quoting mechanisms in Perl 6, the colon is optional for the first adverb so they can be slightly shortened as “qx” and “qqx”.

You’re not limited to slurping results into a single scalar either. Split strings into a list:

    # get a list of paths (also see %*ENV)
    my @env_path = qx/echo $PATH/.split(':'); # Unix-based systems 
    my @env_path = qx/echo %PATH%/.split(';'); # Windows version

Or read a program’s output into an array of lines:

    my @commit_results = qqx/git commit -am "$message"/.lines;

Or extract a particular value from tabular data:

    # requires "free" program
    my $free_memory_mb = qx/free -m/.lines[1].words[3]

These examples just scratch the surface of what’s possible, but they should give you enough to get started any time you need to run an external program. Give them a shot!

More Info

Day 5 – Act with great Responsibility

The past year in Rakudo-land has been very exciting. It has brought us a new, full-fledged backend in the form of MoarVM. And with that came extended support for asynchronous execution of code. This means you can now more easily try out all these asynchronous features, without having to suffer from the long startup time of the JVM backend.

The “Supply” is still the main mechanism to use the Reactive Programming paradigm. Here is a simple (silly) example of using a Supply:

1: my $s = Supply.new;
2: $s.act: { print " Fizz" if $_ %% 3 }
3: $s.act: { print " Buzz" if $_ %% 5 }
4: $s.act: { print " $_" unless $_ %% 3 || $_ %% 5 }
5: $s.emit($_) for 1..20;
1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 Fizz Buzz 16 17 Fizz 19 Buzz

Let’s take this example apart. Line 1 just creates a new Supply. By itself, it doesn’t do anything at all, except for creating the object, of course. Lines 2, 3 and 4 set up code (closures) to be executed every time a new value arrives in the Supply $s.

Line 5 sends 20 consecutive values to the Supply. This in turn causes the closures to be executed (in turn for each value emitted). The result is a nice FizzBuzz.

But but…

Careful watchers of this code may wonder about two things. First of all: what’s emit ? Is that something new? No, it is not (really). It used to be called more. At the Austrian Perl Workshop last October it was decided to rename this method to emit (as nobody was really happy with any more). So what happens if you have code that uses more? Well, it will still work, but you will get a deprecation warning after your process finishes:

Saw 1 call to deprecated code during execution.
Method more (from Supply) called at:
t/spec/integration/advent2014-day05.t, line 13
Deprecated since v2014.10, will be removed with release v2015.10!
Please use emit instead.

Note that even though the same method was called 20 times, it is only mentioned once. And that’s because all the calls happened at the same location in the code. If you have seen deprecation messages before, you might notice that it now also mentions when it was deprecated, and when the feature will most likely be removed.

The second thing that a careful watcher might wonder about is: why the use of act? Is that also a rename (from tap)? No, it is not. The act method was added last April. It ensures that your closure will be the only one running on that Supply, even if the emits are coming from different threads. That is a much safer default than tap, where multiple threads can be executing the closure you provided at the same time.

Sharing everything comes with great power and great responsibility

So why is this important? It is important because all variables visible in a specific scope are readable and changeable by all threads simultaneously. Reading a variable from different threads at the same time is not an issue in Perl 6 (apart from some minor known issues to be resolved after the Great List Refactor). Changing a variable from multiple threads simultaneously, however, is an issue that may cause data loss, memory corruption and segfaults. To give an example:

1: my $times = 100000;
2: my $a = 0;
3: $*SCHEDULER.cue: { $a++ } for ^$times; # this may segfault because of unguarded changes
4: sleep 5; # poor man’s way to wait for all code to finish
5: say "Missed {$times - $a} updates";
Missed 9 updates

Line 1 sets up the number of iterations we’re going to do. Line 2 initializes the variable that’s going to be incremented. Although ++ is magic, we initialize the variable explicitly, so that each increment will follow the same internal code path. Line 3 cues the increment to be executed that many times (in a very low level way using the underlying scheduler). Line 4 waits for the execution to be finished. And line 5 shows the result (if we didn’t segfault, which we might).
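The same lost-update hazard is easy to reproduce in other languages. Here is a deliberately staged Python sketch; the sleep widens the read-modify-write window so the unlucky interleaving that real code only hits occasionally happens almost every time:

```python
import threading
import time

counter = 0

def unsafe_increment():
    global counter
    tmp = counter        # read
    time.sleep(0.05)     # window in which the other threads also read
    counter = tmp + 1    # write back a now-stale value

threads = [threading.Thread(target=unsafe_increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"Missed {10 - counter} updates")  # most increments are usually lost

# The fix mirrors act vs tap: serialize access to the shared variable.
counter, lock = 0, threading.Lock()

def safe_increment():
    global counter
    with lock:           # only one thread in the critical section at a time
        counter = counter + 1

threads = [threading.Thread(target=safe_increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"Missed {10 - counter} updates")  # always 0
```

Python's runtime won't segfault on this the way unguarded native code can, but the lost updates are the same phenomenon.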

So please use act rather than tap, unless you are sure that the closures will not be updating any shared variables simultaneously; that can be hard to tell if you’re using external modules inside the closure.

FizzBuzzing some more

Coming back to the first example: should we have used act there, or could we have used tap? We could have used tap there, because print is threadsafe. It would be a different story if we had used a (shared) array instead of print:

1: my @seen;
2: my $s = Supply.new;
3: $s.act: { @seen.push: "Fizz" if $_ %% 3 }
4: $s.act: { @seen.push: "Buzz" if $_ %% 5 }
5: $s.act: { @seen.push: $_ unless $_%%3 || $_%%5 }
6: await do for 1..20 {
7:     start { rand.sleep; $s.emit($_) }
8: }
9: say @seen;
11 14 4 Fizz 17 19 Fizz Buzz 16 13 Buzz Fizz Buzz 7 Fizz 8 2 Buzz 1 Fizz Fizz

Hmmm… that has the right number of Fizz’s and Buzz’s, but clearly the order is wrong. But because we have used act rather than tap, we are at least sure that no push was executed simultaneously on the @seen array. So we didn’t lose any values, nor did we segfault.

You might ask: what are lines 6-8 about? Well, they make sure the emits are done in a random order, spread out in time (because of the 0 to 1 second random sleep from rand.sleep, and because start runs each emit in a separate thread).

So, how could you make this work in a parallel way? Well generally, if you have some kind of identifier for each value emitted, then you can use that to store the result in a shared data-structure. In this case, $_ is exactly that. So with a slight modification:

1: my @seen;
2: my $s = Supply.new;
3: $s.act: { @seen[$_] = "Fizz" if $_ %% 3 }
4: $s.act: { @seen[$_] ~= "Buzz" if $_ %% 5 }
5: $s.act: { @seen[$_] //= $_ }
6: await do for 1..20 {start{rand.sleep;$s.emit($_)}}
7: say @seen[1..20];
1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz 16 17 Fizz 19 Buzz

By assigning directly to an element in the array (lines 3, 4, 5), we automatically put the information in the right place. And because we are (implicitly) guaranteed that acts on the same item are always executed in the order they were created, we can simplify the non-FizzBuzz case in line 5.
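The identifier-keyed technique carries over directly to other languages. Here is a Python sketch using a thread pool, indexing results by the emitted value so that completion order stops mattering (the random sleep plays the role of rand.sleep above):

```python
import concurrent.futures
import random
import time

def fizzbuzz(n):
    time.sleep(random.random() / 100)  # finish in a scrambled order
    return "Fizz" * (n % 3 == 0) + "Buzz" * (n % 5 == 0) or str(n)

seen = [None] * 21  # slot per identifier, like @seen[$_] above
with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = {pool.submit(fizzbuzz, n): n for n in range(1, 21)}
    for fut in concurrent.futures.as_completed(futures):
        seen[futures[fut]] = fut.result()  # keyed by n, not by arrival order

print(" ".join(seen[1:]))
```

However the threads interleave, each result lands in its own slot, so the printed sequence always comes out in order.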


When using Supplies, please use act on the Supply if you’re not sure whether the closure is going to change shared variables. Only use tap if you’re really 100% sure.

Further reading

Last year saw two posts related to asynchronous execution, the latter showing uses of tap. Please read up on them if the above did not tell you all you needed to know:


The above code can also be found in roast at t/spec/integration/advent2014-day05.t