A piggy bank of commands, fixes, succinct reviews, some mini articles and technical opinions from a (mostly) Perl developer.

Moose questions

These points are not covered in the Moose documentation:

Question: If you set an attribute in the constructor, does it override the builder?
Answer: Yes, the builder is not run.
Code:
perl -MMoose -le'package Quxx; use Moose; has "qux" => ( is => "ro", builder => "_build_qux" ); sub _build_qux { die "duck" }; package main; my $q = Quxx->new( qux => "cat" ); print "qux = #".$q->qux."#"'
Output:
qux = #cat#

Question: How do you define a writer subroutine? (aka setter / mutator method).
Answer: You don't need to: Moose creates it for you. Just name it in the attribute definition and then call it.
Code:
perl -MMoose -le'package Foo; use Moose; has "foo" => ( is => "rw", writer => "_set_foo" ); sub bar { my $self = shift; $self->_set_foo(2); }; package main; my $f = Foo->new; $f->bar; print "foo = #".$f->foo."#"'
Output:
foo = #2#

What is DevOps?

I've heard a lot of waffling definitions of DevOps that left me with more questions than answers. A colleague explained it to me like this:

DevOps is where:
  • Developers are responsible for getting code into production, and keeping it running. They receive problem alerts from production.
  • Members of the Operations team attend Development team standups.
  • When a problem arises, the two teams work closely together to resolve it.
In contrast, separate Dev and Ops is where:
  • The development team is treated like a third-party supplier, who delivers a fully working and documented application.
  • When the software breaks, they are still treated like a third-party supplier.

Bash shortcuts for faster editing

To open the command line in vi:

set -o vi
# type or navigate to a long command
# press [escape]
# navigate the line using vi keyboard commands
# press 'v' to open the command in vi


bash default parameters

# Set DIR to $1, or "foo" if $1 is unset or empty
DIR=${1:-foo}
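A quick sketch of how that expansion behaves ("dir_default" is a made-up wrapper, purely for illustration):

```shell
# dir_default is a hypothetical function, just to exercise the expansion
dir_default() {
    local DIR=${1:-foo}   # ":-" substitutes when $1 is unset OR empty
    echo "$DIR"
}

dir_default            # prints "foo" (no argument)
dir_default ""         # prints "foo" (plain "-" would keep the empty string)
dir_default /tmp/bar   # prints "/tmp/bar"
```

The related ${1-foo} form (no colon) substitutes only when $1 is unset, not when it is empty.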

Don't grep, but filter terminal output and colour certain lines (and print all other lines too)

Ruby:

acoc (source), which depends on the term-ansicolor gem.

Perl:

tail -f /some/log | perl -MTerm::ANSIColor -ne'print color("red") if /error/i; print $_; print color("reset");'

or

tail -f /some/log | perl -MTerm::ANSIColor -pe's/foo/color("cyan")."foo".color("reset")/e;'

or

tail -f /some/log | perl -MTerm::ANSIColor -pe's/(error|bar)/color("green").$1.color("reset")/eig;'

Bash:

Emacs font-lock-keywords can easily be implemented in a 3 line bash script. Call it "highlight":

    # Usage: tail -f error.log | highlight "error"
    RED="$(tput setaf 1)"
    RESET="$(tput sgr0)"
    sed "s/$1/$RED&$RESET/g"

Non-functional checklist

When writing a user story or writing a spec for a piece of development work, consider the following non-functional aspects:

  • Authentication
  • Session management
  • Access control
  • Input validation
  • Output encoding/escaping
  • Encryption
  • Error handling and logging
  • Data protection
  • Communication security
  • HTTP security features
  • Monitoring
    • Logging of significant code paths
    • Logging of expected events and errors
    • Catching and logging of unexpected errors (crashes)
    • Metrics for stats of usage and throughput (requests)
  • Performance, e.g. response time must be <500ms


This is especially useful when building new systems like a new app or API.

Can't connect to London underground WiFi after changing device

Problem:

You just changed your phone/tablet/device. Now you can't connect to the free Virgin Media wifi at London underground stations.

Solution:

  • Call your phone company's tech support (for EE it's 150, then 1, 3, 4)
  • Tell the level 1 support person that you've already tried all the troubleshooting steps including a network refresh
  • Ask to be put through to level 2
  • Ask the level 2 tech to un-register you from London underground wifi, wait 24 hours and re-register.
  • After that, follow the process for a new WiFi password (for EE it's texting EEWIFI to 9527).

Many different ways to resolve an IP to a hostname in Perl

Some different ways to look up hostnames.

Notes:

  • getnameinfo() returns both a hostname and a service name, so it's not an exact drop-in for gethostbyaddr()
  • although the docs imply you need to supply a port, that's only used to derive the local service name; pass undef and you can ignore the service entirely

# 1, old deprecated way

use Socket; # gethostbyaddr, inet_aton, AF_INET

sub ip_address_to_host {
    my ( $self, $ip_address ) = @_;
    my ($hostname) = gethostbyaddr(
        inet_aton($ip_address),
        AF_INET,
    );
    return $hostname;
}

# 2, newer better way

use Socket qw(AF_INET inet_pton getnameinfo sockaddr_in);

sub ip_address_to_host {

    my ( $self, $ip_address ) = @_;

    my $port = undef;
    my $socket_address = sockaddr_in($port, inet_pton(AF_INET, $ip_address));

    my $flags = 0;
    my $xflags = 0;
    my ($error, $hostname, $servicename) = getnameinfo($socket_address, $flags, $xflags);

    return $hostname;
}

# 3, best way - goes straight to DNS instead of checking /etc/hosts first

use Net::DNS;

sub ip_address_to_host {

    my ( $self, $ip_address ) = @_;

    my $res = Net::DNS::Resolver->new;
    my $target_ip = join('.', reverse split(/\./, $ip_address)).".in-addr.arpa";
    my $query = $res->query("$target_ip", "PTR") // return;
    my $answer = ($query->answer)[0] // return;
    my $hostname = $answer->rdatastr;

    return $hostname;
}

# 4, the code golf way (no validation) - by Mark B

perl -le 'use Net::DNS::Resolver; print ((Net::DNS::Resolver->new()->query("10.232.32.158","PTR")->answer)[0]->ptrdname);'


Run only one subtest with Test::More

Apply this patch to Test/More.pm, version 1.302075:

806a807
>     return if exists $ENV{SUBTEST} && $ENV{SUBTEST} ne $_[0];

Here's that patch again, in unified format:

$ diff -u Test/More.pm{.orig,}
--- Test/More.pm.orig 2017-07-27 13:19:00.000000000 +0100
+++ Test/More.pm      2017-07-27 13:23:56.000000000 +0100
@@ -804,6 +804,7 @@

 sub subtest {
     my $tb = Test::More->builder;
+    return if exists $ENV{SUBTEST} && $ENV{SUBTEST} ne $_[0];
     return $tb->subtest(@_);
 }

Run the test like this:

SUBTEST="put name of subtest here" /usr/bin/prove -v test_file.t
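If you'd rather not patch a core module, the same guard works as a small wrapper inside the test file itself ("my_subtest" is a made-up name, not part of Test::More):

```shell
# Write a demo test file that wraps subtest() with the SUBTEST guard
cat > demo.t <<'EOT'
use strict;
use warnings;
use Test::More;

# my_subtest: a hypothetical wrapper applying the same guard as the patch
sub my_subtest {
    my ($name, $code) = @_;
    return if exists $ENV{SUBTEST} && $ENV{SUBTEST} ne $name;
    subtest $name => $code;
}

my_subtest "fast" => sub { ok 1, "fast check" };
my_subtest "slow" => sub { ok 1, "slow check" };
done_testing;
EOT

SUBTEST=fast perl demo.t   # only the "fast" subtest runs
```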

Why it's important to merge upstream at the end of every sprint

The whole idea of agile/scrum is to regularly and frequently provide value to the business.

If you already deploy to production every few weeks, or tag your feature branch and deploy it to a Testing or Production environment for users or other stakeholders to see, then that's great! You probably don't need to merge to master or trunk in that case.

But say your release cycle is every two months, so new features won't actually reach production for some time after being built. If you don't merge the sprint's work to master and don't deploy it anywhere, it weakens all the good habits of agile working. Every sprint the developers will feel slightly less motivated to complete the work on time, because nothing is being done with it anyway. In this situation, merging to master is a proxy for deploying to production. An even better proxy is deploying to an environment, preferably one that stakeholders can see.

If you don't like using master/trunk, then designate a common feature branch, release branch or any other kind of special branch to merge each set of changes to. The point is that branch should not be under direct control of the feature developer anymore. Any new changes require a new issue/branch to be created, reviewed and merged separately. It's psychologically cleaner.

There are many benefits to keeping branches small and merging upstream frequently:
  • The whole team gets to review the code, even (especially) people who haven't seen it yet
  • Smaller branches are easier for the team to review
  • Mistakes are caught earlier so less work is needed to rectify them
  • The new code can be more easily tested in continuous integration, which probably already runs off master
  • Reduces conflicts with the rest of the codebase
  • Makes it easier to work on new features in parallel (the alternative is using a common feature branch and sub-branches)
  • Keeps you honest by "publishing" your work, prevents feature creep, makes iterations clear, etc.
  • Makes it more efficient to work on and think about, you keep fewer things in your head because it's not all "up in the air"

General API design principles


Status: Draft
Working notes:
  • Read the Heroku HTTP API design guide
  • And The twelve-factor app methodology for building SaaS
  • Use jsonapi.org
  • Consider JSON PATCH
  • Endpoints are all nouns, use the HTTP actions as verbs.
  • Endpoint nouns can be all singular (to match database tables) or all plural (explanation); the point is to stay consistent with other APIs from the same team or organisation.
  • Article: Your API versioning is wrong. Conclusion: Use the headers for versioning.
  • Cool ideas:
    • (for ease of development) Provide a special undocumented "override" URL path for humans, that sets the header appropriately:
      • i.e. /api/v2/nodes --> redirects to --> /api/nodes and automatically sets header: api-version: 2
      • or /api/nodes?version=2 --> redirects the same as above
    • Caution: Don't overload the content-type header.
    • (for ease of development) Provide a similar parameter for the Accept header. This makes it easier during development to send a URL to someone non-technical, or who doesn't have the right dev environment set up; the URL still works without any special software like curl or browser plugins. Example: /api/nodes?accept=application/json
  • Return 2xx status code to indicate the success of the HTTP request
  • Return a status field in the content body to indicate the progress of the business domain request
    • Use a "status" field, not a "state" field. Status refers to a progression.
APIs should conform to previously created APIs where that doesn't contradict the principles above.
Where legacy APIs don't follow the principles above, they should be updated to conform as a pre-requisite to any changes.

Minimal JSONAPI examples

Success response:
(the data array is optional; it could be omitted if the response is always a single object).
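A minimal sketch of what a success document looks like (based on jsonapi.org; "nodes" and the attribute names are placeholders):

```json
{
  "data": [
    { "type": "nodes", "id": "1", "attributes": { "name": "example" } }
  ]
}
```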
Failure response:
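And a minimal sketch of a failure document (again per jsonapi.org; the status code and messages are placeholders):

```json
{
  "errors": [
    { "status": "422", "title": "Invalid attribute", "detail": "name is required" }
  ]
}
```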

Questions

  • The response should be valid JSON API. But must the request be JSON API too?

References

  • If the examples above seem unnecessarily verbose (even though they have been cut down as much as possible), try JSend instead; it's simpler.
  • How to validate JSON API: Take the schema and your JSON, and input them at JSON schema lint
More stuff

https://www.youtube.com/watch?v=aAb7hSCtvGw

Some of the key takeaways:
  • An API should be easy to learn, easy to use, and hard to misuse
  • Write code that uses the API early and often
  • Example programs should be exemplary - this code will end up being copied everywhere
  • When in doubt, leave it out
  • Be consistent - the same word means the same thing across the API
  • Documentation matters - reuse is easier to say than to do; doing it requires both good design and good documentation
  • Do what is customary - obey standard naming conventions, make it feel like one of the core APIs, and know (and avoid) the common pitfalls of the language

Perl one-liner to serve a directory over HTTP

plackup -e 'use Plack::App::Directory; Plack::App::Directory->new({ root => "/opt/dweb/packages" })->to_app;'

(source)

Git show untracked stash files

Git stash can save untracked files like this:

git stash --all

But to see untracked files in the stash you need this special ^3 syntax:

git show stash@{1}^3
git show stash@{2}^3
git show stash@{99}^3

These show stashes 1, 2 and 99 respectively. The ^3 part never changes: the stash is recorded as a merge commit, and its third parent is the commit that holds the untracked files.
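You can see why ^3 works in a throwaway repo (a sketch; file names are made up):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m init
echo tracked > tracked.txt
git add tracked.txt
echo untracked > untracked.txt    # never added to the index
git stash --all -q
# The stash commit's third parent holds the untracked file:
git show --name-only "stash@{0}^3"
```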

(source)