Day 18 – An SVG Christmas Tree

Christmas trees are a traditional symbol dating back more than four hundred years in Europe, so what could be better for an advent article than something about creating Christmas tree images?

The typical, simplified, representation of the tree is several triangles of decreasing size stacked on top of each other with a small overlap, so it is fairly easy to create with a computer program.

Here I’ll use Scalable Vector Graphics (SVG) to draw the image as, given the description above, it seems perfectly suited to the task.

Day 17: Something about messaging (but I couldn’t think of a snappier title.)

Why messaging?

When I first started thinking about writing an Advent article this year I reflected that I hadn’t really written a great deal of Perl 6 in the past twelve months, in comparison to the previous years when I appear to have written a large number of modules. What I have been doing (in my day job at least,) is thinking about and implementing applications that make heavy use of some messaging system. So I thought it would be interesting to bring some of those ideas to Perl 6.

Perl has always had a strong reputation as a “glue language” and Perl 6 has features that take off and run with that, most prominently the reactive and concurrent features, making it ideally suited to creating message based integration services.

What messaging?

At my feet right now is the excellent Enterprise Integration Patterns, which I’d recommend to anyone who has an interest in (or works in,) the field, despite it being nearly 15 years old now. However it is a weighty tome (literally: it weighs in at nearly one and a half kilograms in hardback,) so I’m using it as a reminder to myself not to attempt to be exhaustive on the subject, lest this turn into a book itself.

There are quite a large number of managed messaging systems, both free and commercial, using a range of protocols both open and proprietary, but I am going to limit myself to RabbitMQ, which I know quite well and which is supported in Perl 6 by Net::AMQP.

If you want to try the examples yourself you will need to have access to a RabbitMQ broker (it is available as a package for most operating system distributions,) or you can use the Docker image, which appears to work quite well.
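
If you have Docker available, something like the following should get a broker running locally with the default settings (the image name rabbitmq is the official one; the exact invocation is illustrative):

docker run -d --name rabbitmq -p 5672:5672 rabbitmq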

You will also need to install Net::AMQP which can be done with:

zef install Net::AMQP

In the examples I will be using the default connection details for the RabbitMQ server (that is, the broker is running on localhost and the default guest user is active;) if you need to supply different details then you can alter the constructor for Net::AMQP to reflect the appropriate values:

my $n = Net::AMQP.new(
  host => 'localhost',
  port => 5672,
  login => 'guest',
  password => 'guest',
  vhost => '/'
);

A couple of the examples may require other modules but I’ll introduce them as I go along.

Obligatory Hello, World

RabbitMQ implements the rich broker architecture that is described by the AMQP v0.9 specification; the more recent v1.0 specification, as implemented by ActiveMQ, does away with much of the prescribed broker semantics, to the extent that it is basically a different protocol that shares a similar wire format.

Possibly the simplest example of sending a message (a producer,) would be:

use Net::AMQP;

my $n = Net::AMQP.new;

await $n.connect;
my $channel = $n.open-channel(1).result;
my $exchange = $channel.exchange.result;
$exchange.publish(routing-key => "hello", body => "Hello, World".encode);
await $n.close("", "");

This demonstrates most of the core features of RabbitMQ and Net::AMQP.

Firstly you will notice that many of the methods return a Promise that will mostly be kept with the actual returned value; this reflects the asynchronous nature of the broker, which sends (in most cases but not all,) a confirmation message (a “method” in AMQP parlance,) when the operation has been completed on the server.

The connect here establishes the network connection to the broker and negotiates certain parameters, returning a Promise which will be kept with a true value if successful, or broken if the network connection fails, the supplied credentials are incorrect, or the server declines the connection for some other reason.

The open-channel method opens a logical broker communication channel in which the messages are exchanged; you may use more than one channel in an application. The returned Promise will be kept with an initialised Net::AMQP::Channel object when confirmed by the server.

The exchange method on the channel object returns a Net::AMQP::Exchange object. In the AMQP model all messages are published to an exchange, from which the broker may route the message to one or more queues, depending on the definition of the exchange, from whence the message may be consumed by another client. In this simple example we are going to use the default exchange (named amq.default.)

The publish method is called on the exchange object. It has no return value as it is simply fire and forget: the broker doesn’t confirm the receipt, and the delivery (or otherwise,) to a queue is decoupled from the act of publishing the message. The routing-key parameter is, as the name suggests, used by the broker to determine which queue (or queues,) to route the message to. In the case of the default exchange used in this example the type of the exchange is direct, which basically means that the message is delivered to exactly one consumer of a queue with a name matching the routing key. The body is always a Buf and can be of an arbitrary length; in this case we are using an encoded string, but it could equally be encoded JSON, a MessagePack or BSON blob, or whatever suits the consuming application. You can in fact supply content-type and content-encoding parameters which will be passed on with the message delivered to a consumer if the design of your application requires it, but the broker itself is totally agnostic to the content of the payload. There are other optional parameters but none are required in this example.
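
For instance, a minimal sketch of publishing a JSON payload with the content metadata set (this assumes the JSON::Fast module is installed; the content-type and content-encoding parameters are the ones described above):

use Net::AMQP;
use JSON::Fast;

my $n = Net::AMQP.new;
await $n.connect;
my $channel  = $n.open-channel(1).result;
my $exchange = $channel.exchange.result;

# the broker is agnostic to the payload; the content headers are
# purely for the benefit of the consuming application
$exchange.publish(
    routing-key      => "hello",
    content-type     => "application/json",
    content-encoding => "utf-8",
    body             => to-json({ greeting => "Hello, World" }).encode,
);
await $n.close("", "");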

Of course we also need something to read the messages that we are publishing (a consumer):

use Net::AMQP;

my $n = Net::AMQP.new;

my $connection = $n.connect.result;

react {
    whenever $n.open-channel(1) -> $channel {
        whenever $channel.declare-queue("hello") -> $queue {
            $queue.consume;
            whenever $queue.message-supply.map( -> $v { $v.body.decode }) -> $message {
                say $message;
                $n.close("", "");
                done();
            }
        }
    }
}

Here, rather than operating on an exchange as we did in the producer, we are using a named queue; declare-queue will cause the queue to be created if it doesn’t already exist, and the broker will, by default, bind this queue to the default exchange. “Binding” essentially means that messages sent to the exchange can be routed to the queue, depending on the exchange type, the routing key of the messages and possibly other metadata from the message. In this case the “direct” type of the default exchange will cause the messages to be routed to a queue that matches the routing key (if one exists; the message will be silently dropped if it doesn’t.)

The consume method is called when you are ready to start receiving messages. It returns a Promise that will be kept with the “consumer tag” that uniquely identifies the consumer to the server but, as we don’t need it here, we can ignore it.

Once we have called consume (and the broker has sent the confirmation,) the messages that are routed to our queue will be emitted to the Supply returned by message-supply as Net::AMQP::Queue::Message objects. However, as we aren’t interested in the message metadata in this example, map is used to create a new Supply with the decoded bodies of the messages; this is safe where, as in this case, you can guarantee that you will be receiving utf-8 encoded data, but in a real world application you may want to be somewhat more robust about handling the body if you aren’t in control of the sender (which is often the case when integrating with third party applications.) The content-type and content-encoding as supplied when publishing the message are available in the headers attribute (a Hash,) of the Message object, but they aren’t required to be set, so you may want to consider an alternative scheme as suitable for your application.
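
As a sketch of that more defensive handling, the inner whenever of the consumer could guard the decode (the try here is ordinary Perl 6; what you do with an undecodable message is up to your application):

whenever $queue.message-supply -> $message {
    # decode will throw on bytes that aren't valid utf-8,
    # so don't assume a well-behaved sender
    with try $message.body.decode {
        say $_;
    }
    else {
        $*ERR.say: "received a message that could not be decoded";
    }
}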

In this example the connection is closed and the react exited after the first message is received, but in reality you may want to remove the lines:

$n.close("", "");
done();

from the inner whenever; if you want to exit on a signal, for example, add:

whenever signal(SIGINT) {
    $n.close("", "");
    done();
}

within the top level of the react block. However you choose to exit your program, you should always call close on the connection object, as failing to do so will cause a warning message in the broker logs that might upset the person administering the server.

We could of course have used the react syntax in the producer example in a similar way, but it would have added verbosity for little benefit. However, in a larger program where you may be processing messages from, say, a Supply, it can work quite nicely:

    use Net::AMQP;
      
    my $supply = Supply.from-list("Hello, World", "Bonjour le monde", "Hola Mundo");
    my $n = Net::AMQP.new;

    react {
        whenever $n.connect {
            whenever $n.open-channel(1) -> $channel {
                whenever $channel.exchange -> $exchange {
                    whenever $supply.map(-> $v { $v.encode }) -> $body {
                        $exchange.publish(routing-key => "hello", :$body );
                        LAST {
                            $n.close("", "");
                            done();
                        }
                    }
                }
            }
        }
    }

Something a bit more useful

You’re probably thinking “that’s all very well, but that’s nothing I couldn’t do with, say, an HTTP client and a small web-server”. Well, you’re getting reliable queuing, persistence of unread messages and so forth, but yes, it could be overkill for a simple application; until, that is, you add a requirement to send the messages to multiple, possibly unknown, consumers. This kind of pattern is a use of the “fanout” exchange type, which will deliver a message to all queues that are bound to the exchange.

In this example we will need to declare our own exchange, in order that we can specify the type, but the producer doesn’t become much more complicated:

use Net::AMQP;

my $n = Net::AMQP.new;
my $con = await $n.connect;
my $channel = $n.open-channel(1).result;
my $exchange = $channel.declare-exchange('logs', 'fanout').result;
$exchange.publish(body => 'Hello, World'.encode);
await $n.close("", "");

The only major difference here is that we use declare-exchange rather than exchange on the channel to obtain the exchange to which we send the message. This has the advantage of causing the exchange to be created on the broker, with the specified type, if it doesn’t already exist, which is useful here as we don’t need to rely on the exchange being created beforehand (with the command line tool rabbitmqctl or via the web management interface;) it similarly returns a Promise that will be kept with the exchange object. You probably also noticed that here the routing-key is not being passed to the publish method; this is because for a fanout exchange the routing key is ignored and the messages are delivered to all the consuming queues that are bound to the exchange.

The consumer code is likewise not dissimilar to our original consumer:

use Net::AMQP;

my $n = Net::AMQP.new;

my $connection = $n.connect.result;

react {
    whenever $n.open-channel(1) -> $channel {
        whenever $channel.declare-exchange('logs', 'fanout') -> $exchange {
            whenever $channel.declare-queue() -> $queue {
                whenever $queue.bind('logs') {
                    $queue.consume;
                    whenever $queue.message-supply.map( -> $v { $v.body.decode }) -> $message {
                        say $*PID ~ " : " ~ $message;
                    }
                }
                whenever signal(SIGINT) {
                    say $*PID ~ " exiting";
                    $n.close("", "");
                    done();
                }

            }
        }
    }
}

The exchange is declared in the same way as in the producer example; this is really a convenience so you don’t have to worry about which order to start the programs, as the first one run will create the exchange. However, if you run the producer before the consumer is started, the messages sent will be dropped as there is nowhere by default to route them. Here we are also declaring a queue without providing a name; this creates an “anonymous” queue (the name is made up by the broker,) because the name of the queue doesn’t play a part in the routing of the messages in this case.

You could provide a queue name, but if there are duplicate names then the messages will be routed to the queues with the same names on a “first come, first served” basis, which is possibly not the expected behaviour (though it is possible and may have a use, as sketched below.)
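
A sketch of how that might be used deliberately: if several copies of the consumer declare the same named queue rather than an anonymous one, the broker will share the messages sent to that queue out among them, giving a crude way of distributing work (the queue name is illustrative; the rest of the consumer is unchanged from the example above):

# in each of the competing consumers, instead of the anonymous queue:
whenever $channel.declare-queue("worker-tasks") -> $queue {
    ...
}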

Also in this case the queue has to be explicitly bound to the exchange we have declared; in the first example the binding to the default exchange was performed by the broker automatically, but in most other cases you will have to use bind on the queue with the name of the exchange. bind, like many of the methods, returns a Promise that will be kept when the broker confirms that the operation has been completed (though in this case the value isn’t important.)

You should be able to start as many of the consumers as you want and they will all receive all the messages in the same order that they are sent. Of course in a real world application the consumers may be completely different programs written in a variety of different languages.

Keeping on Topic

A common pattern is a set of consumers that are only interested in some of the messages published to a particular exchange, a classic example of this might be a logging system where there are consumers specialised to different log levels for instance. AMQP provides a topic exchange type that allows for the routing of the messages to a particular queue by pattern matching on the producer supplied routing key.

The simplest producer might be:

	use Net::AMQP;

	multi sub MAIN(Str $message = 'Hello, World', Str $level = 'application.info') {
		my $n = Net::AMQP.new;
		my $con = await $n.connect;
		my $channel = $n.open-channel(1).result;
		my $exchange = $channel.declare-exchange('topic-logs', 'topic').result;
		$exchange.publish(routing-key => $level, body => $message.encode);
		await $n.close("", "");
	}

This should now be fairly clear from the previous examples, except that in this case we declare the exchange as the topic type and also provide the routing key that will be used by the broker to match the consuming queues.

The consumer code itself is again fairly similar to the previous examples, except it will take a list of patterns on the command line that will be used to match the routing key sent to the exchange:

use Net::AMQP;

multi sub MAIN(*@topics ) {
    my $n = Net::AMQP.new(:debug);
    unless @topics.elems {
        say "will be displaying all the messages";
        @topics.push: '#';
    }
    my $connection = $n.connect.result;
    react {
        whenever $n.open-channel(1) -> $channel {
            whenever $channel.declare-exchange('topic-logs', 'topic') -> $exchange {
                whenever $channel.declare-queue() -> $queue {
                    for @topics -> $topic {
                        await $queue.bind('topic-logs', $topic);
                    }
                    $queue.consume;
                    my $body-supply = $queue.message-supply.map( -> $v { [ $v.routing-key, $v.body.decode ] }).share;
                    whenever $body-supply -> ($topic, $message) {
                        say $*PID ~ " : [$topic] $message";
                    }
                }
            }
        }
    }
}

Here essentially the only difference from the previous consumer example (aside from the type supplied to the exchange declaration,) is that the topic pattern is supplied to the bind method. The pattern is matched against the dot-separated parts of the routing key: a # will match zero or more parts, so a binding of just # behaves the same as a fanout exchange, while a * is a wild card matching exactly one part, so in this example application.* will match messages sent with the routing key application.info or application.debug for instance.

If there is more than one queue bound with the same pattern, they too will behave as if they were bound to a fanout exchange. If the bound pattern contains neither a hash nor an asterisk character then the queue will behave as if it were bound to a direct exchange as a queue with that name (that is to say it will have the messages delivered on a first come, first served basis.)
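
By way of illustration, here are some binding patterns and the routing keys they would match (the keys are made up for the example):

# binding pattern     routing keys it would match
# application.*       application.info, application.debug
# *.info              application.info, kernel.info
# #                   any routing key at all
# application.info    only application.info (direct-style)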

But there’s more to life than just AMQP

Of course. The beauty of the Perl 6 reactive model is that various sources feeding Supplies can be integrated into your producer code as touched on above and similarly a consumer can push a message to another transport mechanism.

I was delighted to discover while I was thinking about the examples for this that the following just works:

	use EventSource::Server;
	use Net::AMQP;
	use Cro::HTTP::Router;
	use Cro::HTTP::Server;

	my $supply = supply { 
		my $n = Net::AMQP.new;
		my $connection = $n.connect.result;
		whenever $n.open-channel(1) -> $channel {
			whenever $channel.declare-queue("hello") -> $queue {
				$queue.consume;
				whenever $queue.message-supply.map( -> $v { $v.body.decode }) -> $data {
					emit EventSource::Server::Event.new(type => 'hello', :$data);
				}
			}
		}
	};

	my $es = EventSource::Server.new(:$supply);

	my $application = route {
		get -> 'greet', $name {
			content 'text/event-stream; charset=utf-8', $es.out-supply;
		}
	}
	my Cro::Service $hello = Cro::HTTP::Server.new:
		:host<localhost>, :port<10000>, :$application;
	$hello.start;

	react whenever signal(SIGINT) { $hello.stop; exit; }

This is a variation of the example in the EventSource::Server distribution; you could of course alter it to use any of the exchange types discussed above. It should work fine with the producer code from the first example. And (if you were so persuaded,) you could consume the events with a small piece of node.js code (or some browser oriented javascript):

	var EventSource = require('eventsource');

	var event = process.argv[2] || 'message';

	console.info(event);
	// the path must match the 'greet' route above; the name segment is arbitrary
	var v = new EventSource('http://127.0.0.1:10000/greet/world');

	v.addEventListener(event, function(e) {
		console.info(e);

	}, false);

Wrapping it up

I concluded after typing the first paragraph of this that I would never be able to do this subject justice in a short article, so I hope you consider this an appetizer; I don’t think I’ll ever find the time to write the book that the subject probably deserves. But I do have all the examples based on the RabbitMQ tutorials, so check those out and feel free to contribute.

Day 18 – Asynchronous Workflow with Tinky

State Machines

The idea of “state” and the “state machine” is so ubiquitous in software programming that we tend to use it all the time without even thinking about it very much: that we have a fixed number of states of some object or system, and fixed allowable transitions from one state to another, will be almost inevitable when we start modelling some physical process or business procedure.

I quite like the much quoted description of a state machine used in the Erlang documentation:

If we are in state S and the event E occurs, we should perform the actions A and make a transition to the state S’.

Once we string a number of these together we could consider it to form a workflow. Whole books, definition languages and software systems have been made about this stuff.

Managing State

Often we find ourselves implementing state management in an application on an ad-hoc basis, defining the constraints on state transitions in code, often in a separate part of the application, and relying on the other parts of the application to do the right thing. I’m sure I’m not alone in having seen code in the wild that purports to implement a state machine but which in fact has proved to be entirely non-deterministic in the face of the addition of new states or actions to the system. Or systems where a new action has to be performed on some object when it enters a new state, and the state can be entered in different ways in different parts of the application, so new code has to be implemented in a number of different places, with the inevitable consequence that one gets missed and it doesn’t work as defined.

Using a single source of state management with a consistent definition within an application can alleviate these kinds of problems and can actually make the design of the application simpler and clearer than it might otherwise have been.

Thus I was inspired to write Tinky, which is basically a state management system that allows you to create workflows in an application.

Tinky allows you to compose a number of states and the transitions between them into an application workflow which can make the transitions between the states available as a set of Supplies: providing the required constraints on the allowed transitions and allowing you to implement the actions on those transitions in the appropriate place or manner for your application.

Simple Workflow

Perhaps the canonical example used for similar software is that of bug tracking software, so let’s start with the simplest possible example, with three states and two transitions between them:

use Tinky;

my $state-new  = Tinky::State.new(name => "new");
my $state-open = Tinky::State.new(name => "open");
my $state-done = Tinky::State.new(name => "done");

my @states = ( $state-new, $state-open, $state-done);

my $new-open   = Tinky::Transition.new(name => "open", from => $state-new, to => $state-open);
my $open-done  = Tinky::Transition.new(name => "done", from => $state-open, to => $state-done);

my @transitions = ( $new-open, $open-done);


my $workflow = Tinky::Workflow.new(name => "support", :@states, :@transitions, initial-state => $state-new);

This defines our three states “new”, “open” and “done”, and two transitions between them: from “new” to “open” and from “open” to “done”. Together these form a “workflow” in which the state must go through “open” before becoming “done”.

Obviously this doesn’t do very much without an object that can take part in this workflow, so Tinky provides a role Tinky::Object that can be applied to a class whose state you want to manage:

class Ticket does Tinky::Object {
    has Str $.ticket-number = (^100000).pick.fmt("%08d");
}

my $ticket = Ticket.new;

$ticket.apply-workflow($workflow);

say $ticket.state.name;          # new
$ticket.next-states>>.name.say;  # (open)

$ticket.state = $state-open;
say $ticket.state.name;          # open

The Tinky::Object role provides the accessors state and next-states to the object, the latter returning a list of the possible states that the object can be transitioned to (in this example there is only one, but there could be as many as your workflow definition allows.) You’ll notice that the state of the object defaults to “new”, which is the state provided as initial-state to the Tinky::Workflow constructor.

The assignment to state is constrained by the workflow definition, so if in the above you were to do:

$ticket.state = $state-done;

This would result in an exception “No Transition for ‘new’ to ‘done’” and the state of the object would not be changed.
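
If you want to attempt a transition that may not be allowed, you can guard the assignment in the ordinary Perl 6 way; a minimal sketch (this simply catches whatever the assignment throws, rather than naming Tinky’s specific exception types):

try $ticket.state = $state-done;
if $! {
    warn "couldn't change state: { $!.message }";
}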

As a convenience the workflow object defines a role which provides methods named for the transitions, and which is applied to the object when apply-workflow is called; thus the setting of the state in the above could be written as:

$ticket.open

However this feature has an additional subtlety (that I unashamedly stole from a javascript library,): if there are two transitions with the same name then a single method will still be created, which will use the current state of the object to select which transition to apply; typically you might do this where the to state is the same, so for example if we added a new state ‘rejected’ which can be entered from both ‘new’ and ‘open’:

use Tinky;

my $state-new      = Tinky::State.new(name => "new");
my $state-open     = Tinky::State.new(name => "open");
my $state-done     = Tinky::State.new(name => "done");
my $state-rejected = Tinky::State.new(name => "rejected");


my @states = ( $state-new, $state-open, $state-done, $state-rejected);

my $new-open   = Tinky::Transition.new(name => "open", from => $state-new, to => $state-open);
my $new-rejected   = Tinky::Transition.new(name => "reject", from => $state-new, to => $state-rejected);
my $open-done  = Tinky::Transition.new(name => "done", from => $state-open, to => $state-done);
my $open-rejected  = Tinky::Transition.new(name => "reject", from => $state-open, to => $state-rejected);

my @transitions = ( $new-open, $new-rejected, $open-done, $open-rejected);


my $workflow = Tinky::Workflow.new(name => "support", :@states, :@transitions, initial-state => $state-new);

class Ticket does Tinky::Object {
    has Str $.ticket-number = (^100000).pick.fmt("%08d");
}

my $ticket-one = Ticket.new;
$ticket-one.apply-workflow($workflow);
$ticket-one.next-states>>.name.say;
$ticket-one.reject;
say $ticket-one.state.name;

my $ticket-two = Ticket.new;
$ticket-two.apply-workflow($workflow);
$ticket-two.open;
$ticket-two.next-states>>.name.say;
$ticket-two.reject;
say $ticket-two.state.name;

You are not strictly limited to having the similarly named transitions enter the same state, but they must have different from states (otherwise the method generated wouldn’t know which transition to apply.) Obviously if the method is called on an object which is not in a state for which there are any transitions, an exception will be thrown.

So what about this asynchronous thing

All of this might be somewhat useful if we are merely concerned with constraining the sequence of states an object might be in, but typically we want to perform some action upon the transition from one state to another (and this is explicitly stated in the definition above.) So, for instance, in our ticketing example we might want to send some notification, recalculate resource scheduling, or make a branch in a version control system.

Tinky provides for the state transition actions by means of a set of Supplies on the states and transitions, to which the object for which the transition has been performed is emitted. These “events” are emitted on the state that is being left, the state that is being entered and the actual transition that was performed. The supplies are conveniently aggregated at the workflow level.

So if, in the example above, we wanted to log every transition of state of a ticket, and additionally send a message when the ticket enters the “open” state, we can simply tap the appropriate Supply to perform these actions:

use Tinky;

my $state-new      = Tinky::State.new(name => "new");
my $state-open     = Tinky::State.new(name => "open");
my $state-done     = Tinky::State.new(name => "done");
my $state-rejected = Tinky::State.new(name => "rejected");


my @states = ( $state-new, $state-open, $state-done, $state-rejected);

my $new-open   = Tinky::Transition.new(name => "open", from => $state-new, to => $state-open);
my $new-rejected   = Tinky::Transition.new(name => "reject", from => $state-new, to => $state-rejected);
my $open-done  = Tinky::Transition.new(name => "done", from => $state-open, to => $state-done);
my $open-rejected  = Tinky::Transition.new(name => "reject", from => $state-open, to => $state-rejected);

my @transitions = ( $new-open, $new-rejected, $open-done, $open-rejected);


my $workflow = Tinky::Workflow.new(name => "support", :@states, :@transitions, initial-state => $state-new);

# Make the required actions

$workflow.transition-supply.tap(-> ($trans, $object) { say "Ticket '{ $object.ticket-number }' went from '{ $trans.from.name }' to '{ $trans.to.name }'" });
$state-open.enter-supply.tap(-> $object { say "Ticket '{ $object.ticket-number }' is opened, sending email" });

class Ticket does Tinky::Object {
    has Str $.ticket-number = (^100000).pick.fmt("%08d");
}

my $ticket-one = Ticket.new;
$ticket-one.apply-workflow($workflow);
$ticket-one.next-states>>.name.say;
$ticket-one.reject;
say $ticket-one.state.name;

my $ticket-two = Ticket.new;
$ticket-two.apply-workflow($workflow);
$ticket-two.open;
$ticket-two.next-states>>.name.say;
$ticket-two.reject;
say $ticket-two.state.name;

Which will give some output like

[open rejected]
Ticket '00015475' went from 'new' to 'rejected'
rejected
Ticket '00053735' is opened, sending email
Ticket '00053735' went from 'new' to 'open'
[done rejected]
Ticket '00053735' went from 'open' to 'rejected'
rejected

The beauty of this kind of arrangement, for me at least, is that the actions can be defined at the most appropriate place in the code, rather than all in one place, and can also be added and removed at run time if required; it also works nicely with other sources of asynchronous events in Perl 6 such as timers, signals or file system notifications.
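
Removing an action at run time is simply a matter of keeping hold of the Tap object that tap returns and closing it when the action is no longer wanted, for example:

# keep the Tap so the action can be removed later
my $email-tap = $state-open.enter-supply.tap(-> $object {
    say "Ticket '{ $object.ticket-number }' is opened, sending email";
});

# ... some time later, stop sending the notifications
$email-tap.close;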

Defining a Machine

Defining a large set of states and transitions could prove somewhat tiresome and error prone if done in code like the above, so you could choose to build it from some configuration file or from a database of some sort; but for convenience I have recently released Tinky::JSON, which allows you to define all of your states and transitions in a single JSON document.

The above example would then become something like:

use Tinky;
use Tinky::JSON;

my $json = q:to/JSON/;
{
    "states" : [ "new", "open", "done", "rejected" ],
    "transitions" : [
        {
            "name" : "open",
            "from" : "new",
            "to"   : "open"
        },
        {
            "name" : "done",
            "from" : "open",
            "to"   : "done"
        },
        {
            "name" : "reject",
            "from" : "new",
            "to"   : "rejected"
        },
        {
            "name" : "reject",
            "from" : "open",
            "to"   : "rejected"
        }
    ],
    "initial-state" : "new"
}
JSON

my $workflow = Tinky::JSON::Workflow.from-json($json);

$workflow.transition-supply.tap(-> ($trans, $object) { say "Ticket '{ $object.ticket-number }' went from '{ $trans.from.name }' to '{ $trans.to.name }'" });
$workflow.enter-supply("open").tap(-> $object { say "Ticket '{ $object.ticket-number }' is opened, sending email" });

class Ticket does Tinky::Object {
    has Str $.ticket-number = (^100000).pick.fmt("%08d");
}

my $ticket-one = Ticket.new;
$ticket-one.apply-workflow($workflow);
$ticket-one.next-states>>.name.say;
$ticket-one.reject;
say $ticket-one.state.name;

my $ticket-two = Ticket.new;
$ticket-two.apply-workflow($workflow);
$ticket-two.open;
$ticket-two.next-states>>.name.say;
$ticket-two.reject;
say $ticket-two.state.name;

As well as providing the means of constructing the workflow object from a JSON description, it adds methods for accessing the states and transitions and their respective supplies by name, rather than having to have the objects themselves to hand, which may be more convenient in your application. I’m still working out how to provide the definition of actions in a similarly convenient declarative way.

It would probably be easy to make something similar that can obtain the definition from an XML file (probably using XML::Class,) so let me know if you might find that useful.

Making something useful

My prime driver for making Tinky in the first place was a still-in-progress online radio management application. This could potentially have several different workflows for different types of objects: the media for streaming may need to be uploaded, may possibly require encoding to a streamable format, have silence detection performed and its metadata normalised, and so forth, before it is usable in a show; the shows themselves need to have either media added or a live streaming source configured, then be scheduled at the appropriate time and possibly also be recorded (with the recording fed back into the media workflow.) All of this might be a little too complex for a short example, but an example that ships with Tinky::JSON is inspired by the media portion of this, and was actually made in response to something someone was asking about on IRC a while ago.

The basic idea is that a process waits for WAV files to appear in some directory and then copies them to another directory where they are encoded (in this case to FLAC.) The nice thing about using the workflow model for this is that the code is kept quite compact and clear, since failure conditions can be handled locally to the action for that step in the process, so deeply nested conditions or early returns are avoided; also, because it all happens asynchronously, it makes the best use of the processor time.

So the workflow is described in JSON as:

{
    "states" : [ "new", "ready", "copied", "done", "failed", "rejected" ],
    "transitions" : [
        {
            "name" : "ready",
            "from" : "new",
            "to"   : "ready"
        },
        {
            "name" : "reject",
            "from" : "new",
            "to"   : "rejected"
        },
        {
            "name" : "copied",
            "from" : "ready",
            "to"   : "copied"
        },
        {
            "name" : "fail",
            "from" : "ready",
            "to"   : "failed"
        },
        {
            "name" : "done",
            "from" : "copied",
            "to"   : "done"
        },
        {
            "name" : "fail",
            "from" : "copied",
            "to"   : "failed"
        }
    ],
    "initial-state" : "new"
}

Which defines our six states and the transitions between them. The “rejected” state is entered if the file has been seen before (from state “new”,) and the “failed” state may occur if there was a problem with either the copying or the encoding.

The program expects this to be in a file called “encoder.json” in the same directory as the program itself.

This example uses the ‘flac’ encoder but you could alter this to something else if you want.

use Tinky;
use Tinky::JSON;
use File::Which;


class ProcessFile does Tinky::Object {
    has Str $.path      is required;
    has Str $.out-dir   is required;
    has Str $.new-path;
    has Str $.flac-file;
    has     @.errors;
    method new-path() returns Str {
        $!new-path //= $!out-dir.IO.child($!path.IO.basename).Str;
    }
    method flac-file() returns Str {
        $!flac-file //= self.new-path.subst(/\.wav$/, '.flac');
        $!flac-file;
    }
}


multi sub MAIN($dir, Str :$out-dir = '/tmp/flac') {
    my ProcessFile @process-files;

    my $json = $*PROGRAM.parent.child('encoder.json').slurp;
    my $workflow = Tinky::JSON::Workflow.from-json($json);

    my $flac = which('flac') or die "no flac encoder";
    my $cp   = which('cp');

    my $watch-supply = IO::Notification.watch-path($dir).grep({ $_.path ~~ /\.wav$/ }).unique(as => { $_.path }, expires => 5);

    say "Watching '$dir'";

    react {
        whenever $watch-supply -> $change {
            my $pf = ProcessFile.new(path => $change.path, :$out-dir);
            say "Processing '{ $pf.path }'";
            $pf.apply-workflow($workflow);
        }
        whenever $workflow.applied-supply() -> $pf {
            if @process-files.grep({ $_.path eq $pf.path }) {
                $*ERR.say: "** Already processing '", $pf.path, "' **";
                $pf.reject;
            }
            else {
                @process-files.append: $pf;
                $pf.ready;
            }
        }
        whenever $workflow.enter-supply('ready') -> $pf {
            my $copy = Proc::Async.new($cp, $pf.path, $pf.new-path, :r);
            whenever $copy.stderr -> $error {
                $pf.errors.append: $error.chomp;
            }
            whenever $copy.start -> $proc {
                if $proc.exitcode {
                    $pf.fail;
                }
                else {
                    $pf.copied;
                }
            }
        }
        whenever $workflow.enter-supply('copied') -> $pf {
            my $encode = Proc::Async.new($flac,'-s',$pf.new-path, :r);
            whenever $encode.stderr -> $error {
                $pf.errors.append: $error.chomp;
            }
            whenever $encode.start -> $proc {
                if $proc.exitcode {
                    $pf.fail;
                }
                else {
                    $pf.done;
                }
            }
        }
        whenever $workflow.enter-supply('done') -> $pf {
            say "File '{ $pf.path }' has been processed to '{ $pf.flac-file }'";
        }
        whenever $workflow.enter-supply('failed') -> $pf {
            say "Processing of file '{ $pf.path }' failed with '{ $pf.errors }'";
        }
        whenever $workflow.transition-supply -> ($trans, $pf ) {
            $*ERR.say("File '{ $pf.path }' went from '{ $trans.from.name }' to '{ $trans.to.name }'");
        }
    }
}

If you start this with an argument of the directory where you want to pick up the files from, it will wait until new files appear and then create a new ProcessFile object and apply the workflow to it. Every object to which the workflow is applied is sent to the applied-supply, which is tapped to check whether the file has already been processed: if it has (and this can happen because the file directory watch may emit more than one event for the creation of the file,) the object is moved to state ‘rejected’ and no further processing happens; otherwise it is moved to state ‘ready’, whereupon it is copied and, if the copy succeeds, encoded.

Additional states (and transitions to enter them,) could easily be added to, for instance, store the details of the encoded file in a database, or even start playing it, and new actions could be added for existing states by adding additional “whenever” blocks, as sketched below. As it stands this will block forever waiting for new files; however it could be integrated into a larger program by starting it in a new thread for instance.
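
For instance, an action for the existing ‘rejected’ state would just be another whenever inside the react block (the handling shown here is hypothetical):

whenever $workflow.enter-supply('rejected') -> $pf {
    # most likely a duplicate notification for a file that is already
    # being processed, so just note it and move on
    $*ERR.say: "ignoring duplicate event for '{ $pf.path }'";
}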

The program and the JSON file are in the examples directory for Tinky::JSON; please feel free to grab and tinker with them.

Not quite all

Tinky has a fair bit more functionality than I have space to describe here: there are facilities for the run-time validation of transition application, and additional supplies that are emitted to at various stages of the workflow lifecycle. Hopefully your interest is sufficiently piqued that you might look at the documentation.

I am considering adding a cookbook-style document for the module for some common patterns that might arise in programs that might use it. If you have any ideas or questions please feel free to drop me a note.

Finally, I chose a deliberately un-descriptive name for the module as I didn’t want to make a claim that this would be the last word in the problem space. There are probably many more ways that a state managed workflow could be implemented nicely in Perl 6. I would be equally delighted if you totally disagree with my approach and release your own design as I would be if you decide to use Tinky.

Tinky Winky is the purple Teletubby with a red bag.

Day 13 – Audio Streaming done Completely Wrong

Starting out

I made the audio streaming source client Audio::Libshout about a year and a half ago and it works quite well: with the speed improvements in Rakudo I have been able to stream 320kb/s MP3 without a problem. But it always annoyed me that it was difficult to test properly, and even to test it at all required an Icecast server. Even with an Icecast server that I could stream to, it would be necessary to actually listen to the stream in order to determine whether the stream was functioning correctly.

This all somewhat came to a head earlier in the year when I discovered that even the somewhat rudimentary tests that I had been using for the asynchronous streaming support had failed to detect that the stream wasn’t being fed at all. What I really needed was a sort of dumb streaming server that could act in place of the real Icecast and could be instrumented to determine whether a connection was being made and whether the correct audio data was being received. How hard could it be? After all, it was just a special kind of web server.

I should have known from experience that this was a slippery slope, but hey.

A little aside about Audio Streaming

An icecast streaming server is basically an HTTP server that feeds a continuous stream of encoded audio data to the listening clients, who connect with an HTTP GET request; the data to be streamed is typically provided by a source client, which will connect to the server, probably authenticate using HTTP Authentication, and start sending the data at a steady rate proportional to the bitrate of the stream. libshout connects with a custom request method of SOURCE, which is inherited from its earlier shoutcast origins, though icecast itself understands PUT as well for the source client. Because it is the responsibility of the listening client to supply the decoded audio data to the soundcard at exactly the right rate, and the encoded data contains the bitrate of the stream as transmitted from the source, the timing demands on the server are not too rigorous: it just has to be consistent and fast enough that a buffer on the client can be kept sufficiently full to supply the audio data to the sound card. Icecast does a little more in detail to, for instance, adjust for clients that don’t seem to be reading fast enough and so forth, but in principle it’s all relatively simple.
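
To make that a little more concrete, the initial exchange from a libshout source client looks roughly like this (the mount point and the source:hackme credentials are made up for the illustration, and the exact headers will vary):

SOURCE /stream HTTP/1.0
Authorization: Basic c291cmNlOmhhY2ttZQ==
Content-Type: audio/mpeg

... encoded audio data follows for as long as the stream lasts ...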

Putting it together

As might be obvious by now, an audio streaming server differs from a typical HTTP server in that, rather than serving some content from disk or generated by the program itself, it needs to share data received on one client connection with one or more other client connections. In the simplest C implementation one might have a set of shared buffers, one of which is being populated from the source connection at any given time, whilst the others are being consumed by the client connections, alternating between filling and depletion. Whether the implementation settles on a non-blocking or a threaded design, possibly the most critical part of the code will be the synchronisation between the source writer and the client readers, to ensure that a buffer is not being read from and written to at the same time.

In Perl 6 of course you’d hope you didn’t need to worry about these kinds of annoying details, as there are well thought out concurrency features that abstract away most of the nasty details.

From one to many

Perhaps the simplest program that illustrates how easy this might be would be this standalone version of the old Unix service chargen:

use v6.c;

my $supplier = Supplier.new;

start {
    loop {
        for ( 33 ... 126 ).map( { Buf.new($_) }) -> $c {
            sleep 0.05;
            $supplier.emit($c);
        }
    }
}

my $sig = signal(SIGPIPE).tap( -> $v {
    $sig.close;
});

react {
    whenever IO::Socket::Async.listen('localhost', 3333) -> $conn {
        my $p = Promise.new;
        CATCH {
            when /'broken pipe'/ {
                if $p.status !~~ Kept {
                    $p.keep: "done";
                }
            }
        }

        my $write = $supplier.Supply.tap( -> $v {
            if $p.status ~~ Planned {
                $conn.write: $v;
            }
            else {
                $write.close;
            }
        });
    }
}

In this the Supplier is being fed asynchronously to stand in for the possible source client in our streaming server, and each client that connects on port 3333 will receive the same stream of characters – if you connect with two clients (telnet or netcat for instance,) you will see they are getting the same data at roughly the same time.

The Supplier provides a shared sequence of data of which there can be many consumers, so each connection provided by the IO::Socket::Async will be fed the data emitted to the Supplier starting at the point the client connected.

The CATCH here is to deal with the client disconnecting: the first our code will know about this is when we try to write to the connection, since we aren’t expecting any input from the client that we could check; besides, the characters becoming available to write may happen sooner than an attempt to read would register the close. So, while it may seem like a bit of a hack, it’s the most reliable and simple way of doing this: protecting against further writes with a Promise. If, by way of experiment, you were to omit the CATCH, you would find that the server would quit without warning the first time the first client disconnected.

I’ll gloss over the signal outside the react as that only seems necessary in the case where we didn’t get any input data on the first connection.

Making something nearly useful

The above example is almost all we need to make something that you might actually be able to use; all we need is for it to handle an HTTP connection from the clients and to get a source of actual MP3 data into the Supply, and we’re good. To handle the HTTP parts we’ll just use the handy HTTP::Server::Tiny, which will conveniently take a Supply that will feed the output data, so in fact we end up with quite a tiny program:

use HTTP::Server::Tiny;

sub MAIN(Str $file) {

    my $supplier = Supplier.new;

    my $handle = $file.IO.open(:bin);

    my $control-promise = start {
        while !$handle.eof {
            my $c = $handle.read(1024);
            $supplier.emit($c);
        }
    }

    sub stream(%env) {
        return 200, [ Content-Type => 'audio/mpeg', Pragma => 'no-cache', icy-name => 'My First Streaming Server'], $supplier.Supply;
    }
    HTTP::Server::Tiny.new(port => 3333).run(&stream);
}

The HTTP::Server::Tiny will run the stream subroutine for every request, and all we need to do is return the status code, some headers, and a supply from which the output data will be read; the client connection will be closed when the Supply is done (that is, when the done method is called on the source Supplier.) It couldn’t really be much simpler.

Just start the program with the path to a file containing MP3 audio and then point your favourite streaming client at port 3333 on your localhost and you should get the stream. I say should, as it makes no attempt to regulate the rate at which the audio data is fed to the client, but for constant bit-rate MP3 data and a client that will buffer as much as it can get, it works.

Of course a real streaming server would read the data frame by frame and adjust the rate according to the bit-rate of each frame. I have actually made (but not yet released,) Audio::Format::MP3::Frame to help do this, but it would be overkill for this example.

Relaying a stream

Of course the original intent of this streaming server was to be able to test a streaming source client, so we are going to have to add another part that will recognise a source connection, read from it and relay the data to the normal clients in a similar way to the above.

You’ll recall that the libshout client library will connect with a request method of SOURCE, so we can adjust the file streaming example to identify the source connection and feed the Supplier with the data read from that connection:

use HTTP::Server::Tiny;

sub MAIN() {

    my $supplier = Supplier.new;

    my Bool $got-source = False;

    sub stream(%env) {
        if %env<REQUEST_METHOD> eq 'GET' {
            if $got-source {
                return 200, [ Content-Type => 'audio/mpeg', Pragma => 'no-cache', icy-name => 'My First Streaming Server'], $supplier.Supply;
            }
            else {
                return 404, [ Content-Type => 'text/plain'], "no stream connected";
            }
        }
        elsif %env<REQUEST_METHOD> eq 'SOURCE' {
            my $connection = %env<p6sgix.io>;

            my $finish-promise = Promise.new;

            $connection.Supply(:bin).tap(-> $v {
                $supplier.emit($v);
            }, done => -> { $finish-promise.keep: "done" });

            $got-source = True;
            return 200, [ Content-Type => 'audio/mpeg' ], supply { whenever $finish-promise { done; } };
        }
    }
    HTTP::Server::Tiny.new(port => 3333).run(&stream);
}

You’ll see immediately that nearly all of the action is happening in the request handler subroutine: the GET branch is almost unchanged from the previous example (except that it will bail with a 404 if the source isn’t connected,) and the SOURCE branch replaces the reading of the file. HTTP::Server::Tiny makes reading the streamed data from the source client really easy, as it provides the connected IO::Socket::Async to the handler in p6sgix.io (which I understand was originally primarily to support the WebSocket module;) the Supply of this is tapped to feed the shared Supplier that conveys the audio data to the clients. All else that is necessary is to return with a Supply that is not intended to actually provide any data, but just to close when the client closes their connection.

Now all you have to do is run the script and feed it with some source client like the following:

use Audio::Libshout;

multi sub MAIN(Str $file, Int $port = 3333, Str $password = 'hackme', Str $mount = '/foo') {

    my $shout = Audio::Libshout.new(:$port, :$password, :$mount, format => Audio::Libshout::Format::MP3);
    $shout.open;
    my $fh = $file.IO.open(:bin);

    while not $fh.eof {
        my $buf = $fh.read(4096);
        say $buf.elems;
        $shout.send($buf);
        $shout.sync;
    }

    $fh.close;
    $shout.close;
}

(Providing the path to some MP3 file again,) then if you connect a streaming audio player you will be getting some audio again.

You might notice that the script takes a password and a mount, even though our server ignores them; this is because Audio::Libshout requires them, and also because this is basically the script that I have been using to test streaming servers for the last year or so.

Surprisingly this tiny streaming server works quite well: in testing, it ran out of threads before it got too bogged down handling the streams with multiple clients, showing how relatively efficient and well thought out the Perl 6 asynchronous model is, and how simple it is to put together a program that would probably require a lot more code in many other languages.

Where do we go from here

I’m pretty sure that you wouldn’t want to use this code to serve up a popular radio station, but it would definitely be sufficient for my original testing purposes with a little additional instrumentation.

Of course I couldn’t just stop there, so I worked up the as yet unreleased Audio::StreamThing, which uses the same basic design with shared supplies but works more like Icecast, in having multiple mounts for individual streams, provision for authentication and better exception handling.

If you’d find it useful I might even release it.

Postscript

I’d just like to get a mention in for the fabulous DJ Mike Stern, as I always use his recorded sets for testing this kind of stuff for some reason.

Have fun and make more noise.

Day 16 – Yak Shaving for Fun and Profit (or How I Learned to Stop Worrying and Love Perl 6)

History

A little over a year and a half ago I decided to write a radio station management application in Perl 5. I won’t bore you with the detailed reasons, but it involved hubris and not liking having to modify an existing application that was written in multiple languages that I didn’t enjoy working with. Most of the required parts were available on CPAN and the requirements were clear, so it was going along, albeit slowly.

Anyway, in the early part of this year a couple of things coincided to make me consider that there would be more value in making the application in Perl 6: the probability of a release before Christmas 2015 tied in with my original estimate that it would take approximately a year to finish the application, the features of Perl 6 that I was aware of would lead to a nice neat design, and frankly it seemed like a cool idea to get in there with a large, possibly useful, application right as Perl 6 was beginning to enter the mainstream. I largely stopped working on the Perl 5 version and started looking at what I needed to be able to make it in Perl 6. Of course this would prove to be even more hubristic and deluded than the original idea, but I didn’t know that at the time.

Entering the Yak Farm

It was immediately clear that I was going to have to write a lot more code for myself: at the turn of the year there were approximately 275 distributions in the Perl 6 module ecosystem, whereas there are somewhere in the region of 30,000 modules on CPAN, going back some twenty years. There were bits and pieces that I was going to need: database access was coming along, there was a RabbitMQ client, and people were working on web toolkits, so some of the heavy lifting was already being worked on.

Having determined that I was going to have to write some modules, it seemed that porting some of my existing CPAN modules might be a good way of getting up to speed and doing something useful into the bargain (though I’m not anticipating using any of them in the larger project.)

I thought I’d start with Linux::Cpuinfo because it was a fairly simple proposition on the face of it, and I haven’t been happy with the AUTOLOAD that the Perl 5 version uses since a few months after I wrote it nearly fifteen years ago.

Straight into MOP Land

As it turns out, losing the AUTOLOAD in Perl 6 was a fairly simple proposition; in fact it could be replaced directly with a method called FALLBACK in the class, which gets called with the name of the required method as its first argument, thus not requiring the not quite so nice global variable that Perl 5 uses. But it seemed nicer to take advantage of Perl 6’s Meta Object Protocol (MOP) and build a class based on the fields found in /proc/cpuinfo on the fly, so I ended up with something like:

multi method cpu_class() {
    if not $!cpu_class.isa(Linux::Cpuinfo::Cpu) {
         my $class_name = 'Linux::Cpuinfo::Cpu::' ~ $!arch.tc;
         $!cpu_class := Metamodel::ClassHOW.new_type(name => $class_name);
         $!cpu_class.^add_parent(Linux::Cpuinfo::Cpu);
         $!cpu_class.^compose;
    }
    $!cpu_class;
}

And then add the fields from the data with:

submethod BUILD(:%!fields ) {
     for %!fields.keys -> $field {
         if not self.can($field) {
             self.^add_method($field, { %!fields{$field} } );
         }
     }
}

This, in the parent class of the newly made class, works really nicely. It is all going swimmingly: you can construct and manipulate classes using a documented interface, without any external dependencies.
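
For comparison, the FALLBACK approach mentioned above would have looked something like this (a hypothetical sketch rather than the module’s actual code):

class Cpu {
    has %.fields;

    # FALLBACK is invoked for any method that doesn't otherwise exist,
    # with the requested method name as the first argument
    method FALLBACK(Str $name, |) {
        %!fields{$name};
    }
}

my $cpu = Cpu.new(fields => %( model-name => 'Example CPU', bogomips => '4000' ));
say $cpu.bogomips;    # 4000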

Losing the XS

The fact that the Perl 6 NativeCall interface allows you to bind functions defined in a dynamic library directly, without requiring any external code, didn’t seem to really help with the next couple of modules I chose to look at (Sys::Utmp and Sys::Lastlog,) as the majority of the code in the Perl 5 XS files is actually dealing with the differences in various operating systems’ ideas of the respective interfaces, as much as providing the XSUBs to be used by the Perl code. As it happens this isn’t really so much of a problem: with all the XSisms stripped out, the code can be compiled and linked into a dynamic library that can be used via NativeCall; even better, someone had already made LibraryMake, which makes integrating all this into the standard build mechanisms really quite easy. In fact it all proved so easy I made both of those modules in a few days, rather than just the one that I had intended to do in the first place.

Anyhow, by this point it was probably time to start on something that I might need in order to make the radio software, so I settled on Audio::Sndfile – it seemed generally useful and libsndfile is robust and well understood. And this is where the yak shaving really began to kick in. The module itself was really quite easy thanks to NativeCall, despite the size of the interface and a subsequently fixed bug in the native arrays, but it occurred to me that automated testing of the module would be somewhat compromised if the dynamic library it depends on was not installed.

Testing Times

In order to facilitate nicer testing of modules that depend on optionally installed dynamic libraries, I made LibraryCheck, though to be honest it was so easy I was surprised that no-one had made it before. It exports a single subroutine that returns a boolean indicating whether the specified library can be loaded, so in a test you might do something like:

use Test;
use LibraryCheck;

if !library-exists('libsndfile') {
    skip "Won't test because no libsndfile", 10;
    exit;
}

As an aside, you’ll notice that the extension of the dynamic library isn’t specified, because NativeCall works that out for you (be it “.so”, “.dll”, “.dylib” or whatever.)

I’ve actually taken to putting the check in a Build.pm (which, if present, is used by panda to take user-specified actions as part of the build):

use Panda::Builder;
use LibraryCheck;

class Build is Panda::Builder {
    method build($workdir) {
        if !library-exists('libsndfile') {
            say "Won't build because no libsndfile";
            die "You need to have libsndfile installed";
        }
        True;
    }
}

This has the effect of aborting the build before the tests are attempted, which, for automated testing, has the benefit of not reporting failures for tests that never stood a chance of passing.

Of course, radio software doesn’t only need to read audio files; it really should also be able to stream audio out to listeners, so I next decided to make Audio::Libshout, which binds libshout, the streaming source client library for Icecast. Not only does testing this depend on the dynamic library, it also requires the network service supplied by Icecast, so it would be nice to check whether the service is actually available before performing some of the tests. To this end I made CheckSocket, which does exactly that, using what is actually a fairly common pattern in tests for network services. It can be used in a similar fashion to LibraryCheck:

use Test;
use CheckSocket;

my $host = 'localhost';
my $port = 8000;        # the default Icecast port

if not check-socket($port, $host) {
    diag "not performing live tests as no icecast server";
    skip-rest "no icecast server";
    exit;
}

I’ve subsequently added it to at least one other author’s tests; it nicely encapsulates a pattern which would otherwise be ten or so lines of boilerplate copied and pasted into each test file.

Enter the Gates of Trait

A feature of the implementation of libshout is that it has an initialisation function which returns an opaque “handle” pointer that gets passed to the other functions of the library, in a sort of object-oriented fashion. Additionally it provides getter and setter functions for all of the parameters; these would best be modelled as read/write accessors in a Perl 6 class, but there would be a lot of boilerplate code to write them all out by hand. Having looked at a number of similar libraries, I concluded that this might be a common pattern, so I wrote AccessorFacade to encapsulate it.

AccessorFacade is implemented as a Perl 6 trait (I won’t explain “trait” here, as a previous advent post has already done that.) It allowed me to turn:

sub shout_set_host(Shout, Str) returns int32 is native('libshout') { * }
sub shout_get_host(Shout) returns Str is native('libshout') { * }

method host() is rw {
    Proxy.new(
        FETCH => sub ($) {
            shout_get_host(self);
        },
        STORE => sub ($, $host is copy) {
            explicitly-manage($host);
            shout_set_host(self, $host);
        }
    );
}

Of which there may be over a dozen per library, into:

sub shout_set_host(Shout, Str) returns int32 is native('libshout') { * } 
sub shout_get_host(Shout) returns Str is native('libshout') { * }

method host() is rw is accessor-facade(&shout_set_host, &shout_get_host) { }

Thus saving tens of lines of boilerplate, avoiding copy-and-paste mistakes, and simplifying testing. It probably took me longer to write AccessorFacade nicely, after I had worked out how to do traits, than it took to write Audio::Libshout itself. Which is a result, as next up I decided I needed to write Audio::Encode::LameMP3 in order to stream audio data that wasn’t already MP3 encoded, and it transpired that the mp3lame library also uses the getter/setter pattern that AccessorFacade targets, so having that trait enabled me to finish even quicker than anticipated.

Up to my Ears in Yaks

Digital audio data is typically represented by large arrays of numbers, and with the native bindings there is a lot of creating native CArrays of the correct size, copying Perl arrays to native arrays, copying native arrays back to Perl arrays, and so forth. This was clearly a generally useful pattern, so I created NativeHelpers::Array to collect all the common use cases, refactored all the audio modules to use the subroutines it exports, and found myself in the position of having written more modules to help me make the modules that I wanted to write than modules I actually wanted to write. So, in order to get things back in balance, I wrote Audio::Convert::Samplerate using some of the helpers above.
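The copying involved is simple but repetitive; here’s a minimal sketch of one direction of it, with a made-up subroutine name rather than the module’s actual API:

use NativeCall;

# Copy a Perl 6 array of integers into a newly created native array,
# suitable for passing to a C function expecting an int32 pointer
sub copy-to-carray(@values --> CArray[int32]) {
    my $carray = CArray[int32].new;
    $carray[$_] = @values[$_].Int for ^@values.elems;
    $carray;
}

my $native = copy-to-carray([1, 2, 3, 4]);
say $native[2];    # 3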

Around this time I decided that, if I wanted to get anywhere with the original plan, I probably needed to concentrate on some application infrastructure requirements rather than domain-specific things, and started on a logging framework that I had been thinking about for a while. I immediately concluded that I needed a singleton logger object, so I wrote Staticish.

Staticish is implemented as a class trait that does two things: it adds a role to the class giving it a singleton constructor that will only ever return the same object (which is created the first time it is called,) and it applies a role to the class’s metaclass (the .HOW,) which wraps the methods (and public accessors) of the class so that if a method is called on the type object of the class, the singleton object is obtained from the constructor and the method is called on that instead. So you can do something like:

use Staticish;

class Foo is Static {
    has Str $.bar is rw;
}

Foo.bar = "There you go";
say Foo.bar; # > "There you go";

It does exactly what I need it to, but I still haven’t finished that logging framework.

All at Sea with JSON

Earlier in the year I had started working on a CouchDB interface module, with a view to using a document rather than a relational database in the application. That is part of the reason I had been helping to make the HTTP client modules do some of the things that were needed, but another part, for me at least, was the ability to round-trip a Perl 6 class marshalled to JSON and back again, or vice versa. Half of that already existed in JSON::Unmarshal, so I proceeded to make JSON::Marshal to do the opposite, and then JSON::Class, which provides a role so that a class has to-json and from-json methods. All good so far, and you can do:

use JSON::Class;

class Something does JSON::Class {
    has Str $.foo;
}

my Something $something = Something.from-json('{ "foo" : "stuff" }');
my Str $json = $something.to-json(); # -> '{ "foo" : "stuff" }'

But it needed some real-world application to test it in, and fortunately this presented itself in the Perl 6 ecosystem itself: the META6.json that is crucial to the proper installation of a module. It is quite easy to mis-type if you are creating the JSON by hand, so META6, which can read and write the META6 files using JSON::Class, and Test::META, which provides a sanity test of the meta file, seemed like a good idea. Almost inevitably, JSON::Unmarshal and JSON::Marshal needed to be revisited to allow custom marshallers/un-marshallers to be defined (using traits, natch,) for certain attributes.
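Adding the sanity check to a distribution is more or less a one-liner in a test file; something like this, following Test::META’s documented pattern:

use Test;
use Test::META;

plan 1;

# Checks that the distribution's META6.json parses and
# contains the required fields
meta-ok();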

I had actually started making JSON::Infer a year earlier in Perl 5, for reasons that could make an entire post of their own, but having already started down the JSON path it seemed easy to finish it in Perl 6 and have it use JSON::Class in the created classes, allowing JSON web service API classes to be created easily. Unfortunately JSON::Class (or rather its dependencies) didn’t even survive the encounter with the test data, so I made JSON::Name, so that JSON object attributes whose names aren’t valid Perl 6 identifiers can be marshalled and unmarshalled correctly. All fixed up, this gave me the impetus to finish WebService::Soundcloud, which I had been playing with for a year or so. I still haven’t finished the CouchDB interface.
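JSON::Name works as an attribute trait; a small sketch of the idea (the class and the JSON key here are invented for illustration):

use JSON::Class;
use JSON::Name;

class Reaction does JSON::Class {
    # '+1' could never be a Perl 6 attribute name, so tell the
    # (un)marshaller which key to use in the JSON instead
    has Int $.plus-one is json-name('+1');
}

my $r = Reaction.from-json('{ "+1" : 42 }');
say $r.plus-one;    # 42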

Nowhere near to Land

So now, toward the end of the year, there are 478 modules in the ecosystem and somehow I have made 29 of them, yet I haven’t even started on the application that I set out to make at the beginning of the year. But I’m happy with that: hopefully I’ve made some modules that may be useful to someone else, I’ve become a confident Perl 6 programmer, and the things I’ve learned have helped improve the documentation and possibly some of the code. I will finish the radio software in Perl 6, and I know I still have a lot of software to write, but it becomes easier every day. So if you’re thinking of making an application in Perl 6 yourself, enjoy the journey more than you are frustrated by not reaching the destination when you expected to: you are probably travelling a path along which few have been before.