Rust and QML on Xubuntu

I just spotted “A simple Rust GUI with QML” on r/rust and wanted to give it a try. I already had Rust installed, but not QML. My desktop computer is currently running Xubuntu, which is Ubuntu with Xfce. Xfce is based on GTK+, whereas QML is based on Qt. There is also Kubuntu, which is Ubuntu with KDE. Since KDE is based on Qt, it probably has everything QML needs already. But we needn’t switch to Kubuntu to install QML. We can install it on Xubuntu, or any version of Ubuntu, just fine¹.

To install QML, I ran

sudo apt install qtdeclarative5-dev qml-module-qtquick-controls

which installed a whole bunch of dependencies, since I had no Qt things at all on this box yet. This got “Hello, World” to work, but there was not enough whitespace.

Figure 1: “hello world” in QML!

I added width and height attributes to the .qml file and changed anchors.fill to anchors.centerIn

ApplicationWindow {
    visible: true
    width: 200
    height: 100
    Text {
        anchors.centerIn: parent
        text: message
    }
}

This looked a little better.

Figure 2: “hello world” with whitespace added

I probably should have made the font bigger as well, but I didn’t want to go too far down the “pointy-clicky nonsense” rabbit hole.

Later in the exercise, I needed QtQuick.Dialogs as well. For that, I ran

sudo apt install qml-module-qtquick-dialogs

Anyway, it’s a fun little blog post…you should give it a try! Bottom line is, Ubuntu has everything we need to follow along with it, even if the names aren’t that obvious.

Footnotes:

¹ Indeed, renaming the distribution after the desktop environment like this is my least favorite thing about Ubuntu. I appreciate easy access to Xfce, but I didn’t need a whole new name. Xubuntu, Kubuntu, Lubuntu,…gah!


Strings in Go and Rust

This week at Go Meetup, we talked briefly about how strings in Go are UTF-8, but not really. What I mean is, on the one hand, we can write

s := "Hello, 世界!"
fmt.Println(s)

and it prints out

Hello, 世界!

as expected. But on the other hand, we can put an invalid UTF-8 sequence into a string as well

s := "\x67\x72\xfc\xdf\x65"

It will compile just fine, but print out junk.

gr��e

If we accept strings from an external source, we probably don’t want to do stringy things with them without first checking that they’re valid. For example, this code

package main

import (
    "fmt"
    "os"
)

func main() {
    for _, s := range os.Args {
        fmt.Println(s)
    }
}

just prints whatever we give it

$ ./garbage foo bär $(echo -en "\x67\x72\xfc\xdf\x65") baz
./garbage
foo
bär
gr��e
baz

while this one

package main

import (
    "fmt"
    "os"
    "unicode/utf8"
)

func main() {
    for _, s := range os.Args {
        if utf8.ValidString(s) {
            fmt.Println(s)
        } else {
            fmt.Println("not valid")
        }
    }
}

only prints valid strings

$ go build valid_string.go 
$ ./valid_string foo bär $(echo -en "\x67\x72\xfc\xdf\x65") baz
./valid_string
foo
bär
not valid
baz

In Rust, strings are UTF-8 as well. We can write

let s = "Hello, 世界!";
println!("{}", s);

and it prints out

Hello, 世界!

as expected. But unlike Go, we can’t put an invalid UTF-8 sequence in a string. This

let s = "\x67\x72\xfc\xdf\x65";

doesn’t even compile

error: this form of character escape may only be used with characters in the range [\x00-\x7f]
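
If we really do want those bytes in Rust source, a byte-string literal accepts the full \x00–\xff range; it just gives us a &[u8] rather than a &str, so Rust never claims the bytes are UTF-8. A small sketch:

```rust
fn main() {
    // b"" literals allow any byte escape, but the type is &[u8; 5],
    // not &str, so no UTF-8 guarantee is being made.
    let bytes: &[u8] = b"\x67\x72\xfc\xdf\x65";

    // Trying to view the bytes as a str fails at run time, explicitly.
    assert!(std::str::from_utf8(bytes).is_err());
    println!("{} bytes, not valid UTF-8", bytes.len());
}
```

In other words, Rust keeps “bytes” and “strings” as distinct types, and the compile error above is that distinction being enforced at the literal level.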

However, we still need to be careful. This

let v = vec![0x67, 0x72, 0xfc, 0xdf, 0x65];
let t = String::from_utf8(v);
println!("{:?}", t);

compiles fine, but gives a run-time error

Err(FromUtf8Error { bytes: [103, 114, 252, 223, 101], error: Utf8Error { valid_up_to: 2 } })
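
Rather than Debug-printing that Result, we can handle it: match on the error, or accept a lossy conversion. Both options are in the standard library; this is just a sketch:

```rust
fn main() {
    let v = vec![0x67, 0x72, 0xfc, 0xdf, 0x65];

    // String::from_utf8 takes ownership and returns a Result we can match on.
    match String::from_utf8(v.clone()) {
        Ok(s) => println!("valid: {}", s),
        Err(e) => println!("invalid after {} bytes", e.utf8_error().valid_up_to()),
    }

    // String::from_utf8_lossy always succeeds, replacing each invalid
    // sequence with U+FFFD (the � character).
    println!("{}", String::from_utf8_lossy(&v)); // gr��e
}
```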

So once again, if we accept strings from an external source, we probably don’t want to do stringy things with them without first checking that they’re valid. But, unlike in Go, we can’t even put them in a string until we check. This code

use std::env;

fn main() {
    for arg in env::args() {
        println!("{}", arg);
    }
}

panics if any arguments are not valid UTF-8

$ ./valid_string_panic foo bär $(echo -en "\x67\x72\xfc\xdf\x65") baz
./valid_string_panic
foo
bär
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "gr��e"', ../src/libcore/result.rs:837
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Instead of std::env::args, we can use std::env::args_os to collect the arguments

use std::env;

fn main() {
    for arg in env::args_os() {
        println!("{:?}", arg);

        //println!("{}", arg);
        // does not compile
    }
}

This gives us an OsString instead of a String. Right away, we can see it’s different because it won’t even compile if we try to print it with “{}”. When we change to “{:?}”, we get junk for invalid UTF-8

$ ./valid_string_garbage foo bär $(echo -en "\x67\x72\xfc\xdf\x65") baz
"./valid_string_garbage"
"foo"
"bär"
"gr��e"
"baz"

To check that it’s valid, we can try to convert the OsString to a String. The to_str method returns an Option, which we can check

use std::env;

fn main() {
    for arg in env::args_os() {
        match arg.to_str() {
            Some(s) => println!("{}", s),
            None => println!("not valid"),
        }
    }
}

Thus we get

$ rustc valid_string.rs
$ ./valid_string foo bär $(echo -en "\x67\x72\xfc\xdf\x65") baz
./valid_string
foo
bär
not valid
baz

just as in Go.
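
If we would rather keep such arguments than skip them, OsStr also has a to_string_lossy method, which swaps each invalid sequence for U+FFFD instead of failing. A sketch (not from the Go comparison, just the lossy variant):

```rust
use std::env;

fn main() {
    for arg in env::args_os() {
        // Lossy conversion: valid UTF-8 passes through unchanged,
        // invalid bytes become the replacement character �.
        println!("{}", arg.to_string_lossy());
    }
}
```

This prints the same “gr��e”-style output the Go program did, but the decision to degrade gracefully is explicit in the code rather than implicit in the string type.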

So even though both Go and Rust use UTF-8 for strings, they are not the same model. There’s more to it. When it comes to encodings, there’s always more to it!


Rusty Perl

I wanted to call Rust from Perl, so I tried to follow along with this blog post, which does exactly that. But it was written before the release of Rust 1.0, so not everything still works. Here’s what I did:

Create a new Rust project called points.

$ cargo new points
     Created library `points` project
$ cd points

Add a lib section to the Cargo.toml file to create a .so instead of a .rlib.

[package]
name = "points"
version = "0.1.0"
authors = ["oylenshpeegul <oylenshpeegul@gmail.com>"]

[lib]
name = "points"
crate-type = ["dylib"]

Now edit src/lib.rs as @pauldwoolcock describes, keeping in mind that the pre-1.0 syntax has changed: deriving is now derive, the box keyword and the int type are gone, and abs_sub is deprecated.

#[derive(Copy, Clone)]
pub struct Point { x: i64, y: i64 }

struct Line { p1: Point, p2: Point }

impl Line {
    pub fn length(&self) -> f64 {
        let xdiff = self.p1.x - self.p2.x;
        let ydiff = self.p1.y - self.p2.y;
        ((xdiff.pow(2) + ydiff.pow(2)) as f64).sqrt() 
    }
}

#[no_mangle]
pub extern "C" fn make_point(x: i64, y: i64) -> Box<Point> {
    Box::new( Point { x: x, y: y } )
}

#[no_mangle]
pub extern "C" fn get_distance(p1: &Point, p2: &Point) -> f64 {
    Line { p1: *p1, p2: *p2 }.length()
}

#[cfg(test)]
mod tests {
    use super::{Point, get_distance};

    #[test]
    fn test_get_distance() {
        let p1 = Point { x: 2, y: 2 };
        let p2 = Point { x: 4, y: 4 };
        assert!((get_distance(&p1, &p2) - 2.828427).abs() < 0.01f64);
    }
}

Now try running the tests!

$ cargo test
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running target/debug/deps/points-0a1a2813ecad97ba

running 1 test
test tests::test_get_distance ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
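
One caveat worth noting: make_point returns a Box across the FFI boundary, and the Perl script never frees it, so each point leaks. A hedged sketch of a companion function (the name free_point is my own, not from the original post) that hands the allocation back to Rust to drop:

```rust
#[derive(Copy, Clone)]
pub struct Point { x: i64, y: i64 }

#[no_mangle]
pub extern "C" fn make_point(x: i64, y: i64) -> Box<Point> {
    Box::new(Point { x, y })
}

// Hypothetical companion to make_point: taking the Box by value moves
// ownership back to Rust, so the Point is dropped (and its heap
// allocation freed) when this function returns.
#[no_mangle]
pub extern "C" fn free_point(_p: Box<Point>) {}
```

For a toy script that makes two points it hardly matters, but a long-running Perl program would want to call something like this for every make_point.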

If we do a cargo build, we’ll get a debug build, with libpoints.so under target/debug/deps

$ cargo build
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs

If we do a cargo build --release, we’ll get a release build, with libpoints.so under target/release.

$ cargo build --release
    Finished release [optimized] target(s) in 0.0 secs

To use libpoints.so from Perl, we’ll create a perl directory with our points.pl script in it.

$ mkdir perl
$ touch perl/points.pl

We’ll use the FindBin module to link to the libpoints.so file relative to where we are now. And we’ll use a debug flag to link to either the debug or the release.

#!/usr/bin/env perl

use v5.24;
use warnings;
use FindBin;
use FFI::Raw;

my $debug = shift;

my $libpoints = "$FindBin::Bin/../target/release/libpoints.so";
if ($debug) {
    $libpoints = "$FindBin::Bin/../target/debug/deps/libpoints.so";
}

my $make_point = FFI::Raw->new(
    $libpoints,
    'make_point',
    FFI::Raw::ptr,
    FFI::Raw::int, FFI::Raw::int,
);

my $get_distance = FFI::Raw->new(
    $libpoints,
    'get_distance',
    FFI::Raw::double,
    FFI::Raw::ptr, FFI::Raw::ptr,
);

my $p1 = $make_point->call(2,2);
my $p2 = $make_point->call(4,4);

my $result = $get_distance->call($p1, $p2);
say "The distance from (2,2) to (4,4) is $result (the square root of 8).";

Now we should be able to run the Perl script from anywhere, with either the debug build or the release build.

$ perl/points.pl 1
The distance from (2,2) to (4,4) is 2.82842712474619 (the square root of 8).

$ perl/points.pl 
The distance from (2,2) to (4,4) is 2.82842712474619 (the square root of 8).

All of these files are on GitHub.


Epoch fail

I was initially excited when Elixir 1.3 was released with a Calendar module, but now that I’ve tried to use it in a project, I’m disappointed. It turns out that they didn’t include everything we need, so we still need to import the :calendar module from Hex.

Worse, there is now twice as much documentation to sift through to try to find the functions we need. Are we looking for DateTime.add or Calendar.DateTime.add? While I was working, I ended up keeping tabs open to not only Date, Time, DateTime, and NaiveDateTime, but also Calendar.Date, Calendar.Time, Calendar.DateTime, and Calendar.NaiveDateTime. I’m not certain, but it feels as though I’m actually worse off than I was without the 1.3 additions.

Adding to my frustration was that the module felt both overly pedantic and incomplete. I had to define my own function to find the date n months from a given date

defp plus_months(date, 0) do
  date
end
defp plus_months(date, n) when n > 0 do
  dim = Calendar.Date.number_of_days_in_month(date)
  plus_months(Calendar.Date.add!(date, dim), n-1)
end

That was to do this calculation where we add days and months and seconds.

def google_calendar(n) do
  {whole_days, seconds} = div_rem(n, @seconds_per_day)
  {months, days} = div_rem(whole_days, 32)
  {:ok, date} = Date.new(1969, 12, 31)
  {:ok, time} = Time.new(0, 0, 0, {0, 6})

  date =
    date
    |> Calendar.Date.add!(days)
    |> plus_months(months)

  Calendar.NaiveDateTime.from_date_and_time!(date, time)
  |> Calendar.NaiveDateTime.add!(seconds)
end

But I don’t actually care about a Date or Time; I just wanted to do all that to a NaiveDateTime. I think it’s overly pedantic to say we must add days to a Date rather than a NaiveDateTime. It doesn’t seem to buy us anything. Obviously we can do the whole calculation without choosing a time zone, so why all the hand-wringing?

What I really want to do is define a NaiveDateTime and add days, months, and seconds to it

def google_calendar(n) do
  {whole_days, seconds} = div_rem(n, 24 * 60 * 60)

  # A "Google month" has 32 days!
  {months, days} = div_rem(whole_days, 32)

  # A "Google epoch" is one day early.
  {:ok, datetime} = NaiveDateTime.new(1969, 12, 31, 0, 0, 0, 0)

  datetime
  |> plus_days(days)
  |> plus_months(months)
  |> plus_seconds(seconds)
end

But to do that I had to keep unwrapping and re-wrapping the NaiveDateTime, so it’s not worth it.

def plus_days(ndt, n) do
  d = NaiveDateTime.to_date(ndt) 
  t = NaiveDateTime.to_time(ndt) 
  {:ok, ndt} = NaiveDateTime.new(Calendar.Date.add!(d, n), t)
  ndt
end

def plus_months(ndt, 0) do
  ndt
end
def plus_months(ndt, n) do
  d = NaiveDateTime.to_date(ndt)
  dim = Calendar.Date.number_of_days_in_month(d)
  plus_months(plus_days(ndt, dim), n-1)
end

def plus_seconds(ndt, n) do
  Calendar.NaiveDateTime.add!(ndt, n)
end

This is the kind of gorgeous code I expect from Elixir

datetime
|> plus_days(days)
|> plus_months(months)
|> plus_seconds(seconds)

but to get it we have to do something stupid.

Note that the corresponding calculation in the Perl version of the same project is that gorgeous

Time::Moment
    ->from_epoch(-$SECONDS_PER_DAY)
    ->plus_days($days)
    ->plus_months($months)
    ->plus_seconds($seconds);

Perl’s Time::Moment gives us a single immutable object representing a date and time of day with an offset from UTC in the ISO 8601 calendar system. All of the methods above were supplied.

I hope we get something more like that in Elixir 1.4 or 1.5; the half-baked support we got in 1.3 does not seem helpful. I made a branch with :calendar removed to illustrate. It has the linear transformations in one direction, but not the other. It has only one of the other transformations.


Testing in PowerShell

I made a thing! A thing in PowerShell!

I’m not really sure how to make a module in PowerShell, but I made functions to do these epoch conversions and put them in a .psm1 file. Don’t know if there’s more to it or not.

Naturally, I wanted to write tests as I worked. I discovered that there is no built-in testing framework in PowerShell, but there’s a terrific third-party framework called Pester. Since PowerShell on Linux is so new, I had no idea whether it was going to work. I was pleased when it installed without issue, but at first it didn’t appear to work. It turns out it was assuming I had $env:TEMP defined, which I did not. I took a guess and just set it to /tmp

$env:TEMP = '/tmp'

and then all was well! When you are in the directory where your module and tests are, you can import your module with

Import-Module ./Epochs

and then run your tests with

Invoke-Pester

That is, if we’ve named everything the way Pester expects, we just take all the defaults!

I learned the hard way that PowerShell won’t re-import a module it has already imported, by default. Making changes to Epochs.psm1 and then doing

Import-Module ./Epochs
Invoke-Pester

made me think my tests were still passing with my code changes, but it hadn’t actually reloaded my module. You have to -force it

Import-Module ./Epochs -force
Invoke-Pester

I’m still stumbling around when it comes to PowerShell, but Pester makes me feel more at home!


Hello, PowerShell!

Last week, while I was at Abstractions, I heard that PowerShell for Linux was released. Today, I tried it out!

My desktop machine at home is currently running Xubuntu 16.04.1, which is one of the platforms already packaged up. I downloaded the .deb, checked its sum

$ sha256sum Downloads/powershell_6.0.0-alpha.9-1ubuntu1.16.04.1_amd64.deb 
5d56a0419c23ce879dd4ddaca009f03e888355fccc9eecf882b64d63da5f38e3 Downloads/powershell_6.0.0-alpha.9-1ubuntu1.16.04.1_amd64.deb

and followed their instructions. I already had the two dependencies, so

$ sudo apt install libunwind8 libicu55

had no effect. Installing their deb

$ sudo dpkg -i ~/Downloads/powershell_6.0.0-alpha.9-1ubuntu1.16.04.1_amd64.deb

gave me a powershell executable.

$ which powershell
/usr/bin/powershell

Writing a quick hello world in PowerShell with that as the shebang line

#!/usr/bin/powershell

$name = $args[0]
if (!$name) {
    $name = "World"
}

write-host "Hello, $name!"

worked great!

$ ./hello.ps1
Hello, World!

$ ./hello.ps1  foo
Hello, foo!

Okay, how about those regexes with multiple named captures I talked about a while back? If we write this in multicapture.ps1

#!/usr/bin/powershell

$string = 'foo bar baz'
$pat = [regex] "(?:(?<word>\w+)\W*)+"
$m = $pat.match($string)
$m.groups["word"].captures | %{$_.value}

then lo and behold we get

$ ./multicapture.ps1 
foo
bar
baz

Now that I don’t have to boot Windows to do it, I might play with PowerShell a lot more! Thanks, Microsoft!


Metacpan Download URL

This morning I read that we can now get the download URL for a CPAN module from the Metacpan API! For example, if we visit this URL

$ curl https://api-v1.metacpan.org/download_url/Path::Tiny
{
   "download_url" : "https://cpan.metacpan.org/authors/id/D/DA/DAGOLDEN/Path-Tiny-0.096.tar.gz",
   "version" : "0.096",
   "status" : "latest",
   "date" : "2016-07-03T01:36:29"
}

we get this blob of JSON. If we just want the URL, we could run it through jq

$ curl -s https://api-v1.metacpan.org/download_url/Path::Tiny | jq .download_url
"https://cpan.metacpan.org/authors/id/D/DA/DAGOLDEN/Path-Tiny-0.096.tar.gz"

OALDERS does the same thing in Perl, but it uses three different CPAN modules. HTTP::Tiny is in the standard library, right? Oh, but it needs help from IO::Socket::SSL and Net::SSLeay to get an https URL. And we still need something to encode the URI and something to decode the JSON. Here’s my first crack at it.

#!/usr/bin/env perl

use v5.24;
use warnings;
use HTTP::Tiny;
use JSON;
use URI::Encode qw(uri_encode);

my $module = shift // die "Usage: $0 module\n";

my $uri = uri_encode("https://api-v1.metacpan.org/download_url/$module");

my $res = HTTP::Tiny->new->get($uri);
die "Failed!\n" unless $res->{success};

say decode_json($res->{content})->{download_url};

We didn’t need to use LWP, but we still needed help from CPAN. If we can’t do it with the standard library, why not use Mojolicious? It is a web framework, of course, but it includes some excellent client-side tools too. Here is the same thing using the Mojolicious user agent and JSON decoder.

#!/usr/bin/env perl

use v5.24;
use warnings;
use Mojo::UserAgent;

my $module = shift // die "Usage: $0 module\n";

say Mojo::UserAgent->new
    ->get("https://api-v1.metacpan.org/download_url/$module")
    ->res
    ->json
    ->{download_url};

We can even make it a one-liner using the delightful ojo module!

$ perl -Mojo -E 'say g("https://api-v1.metacpan.org/download_url/".shift)->json->{download_url}' Path::Tiny
https://cpan.metacpan.org/authors/id/D/DA/DAGOLDEN/Path-Tiny-0.096.tar.gz

I’m a little disappointed that it’s not easy to do something as simple as this with just the standard library, but if we use CPAN then we have lots of choices. TMTOWTDI!
