A few words about Docker

By Marcus Folkesson, 25 January 2016
Docker has become one of those buzzwords that really annoy me. All buzzwords do, because there is too often no substance behind them.
But in fact, Docker is really useful and I like it.

What is Docker and what can it do for you?

Docker is built upon Linux Containers (LXC), a virtual environment at the operating-system level.
It lets you run multiple isolated systems (called containers) on a single host. All these systems are completely separated
with respect to processes, networking, users, mounted filesystems and so on.
This, combined with cgroups, allows limitation and prioritization of resources like CPU, memory and networking.
LXC with cgroups is quite complex, but Docker is not.
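
With Docker, those cgroup limits are exposed as plain command line flags. A small sketch (the values here are arbitrary):

# Cap the container at 256 MB of memory and give it a low CPU weight:
docker run -m 256m --cpu-shares 512 -ti ubuntu /bin/bash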

So, it is a virtual machine, right?

Kind of… but no. Docker uses your currently running kernel, so it does not load a complete operating system like VMs do.
It just isolates the environment from your host and from other containers.
This makes Docker quite lightweight; it is no big deal to have several containers running in parallel, and they are fast!
Possible use cases could be:
  • Run services, like a webserver, in an isolated container. If there is any vulnerability in the software, it will not affect the host system at all. It is also incredibly easy to replicate and distribute the container.
  • Put your build environment in a container. I use Jenkins a lot at work. One big drawback is that the Jenkins server needs to have all build environments installed, and it is a mess to reinstall all these environments if the server needs to be migrated. A solution to this is to use Docker for all kinds of build environments and set them up as Jenkins slaves. For example, one container could have a cross toolchain for ARM, another for PPC. The obvious advantage is that the environment may be replicated on any number of machines with no effort. Another big advantage is that I may share this container with our customers, and they will have exactly the same environment as we are using on our build server (see the sketch after this list).
  • Experiment with your system without being afraid to break things. If you screw things up, just discard the image and start over.
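
As a sketch of the build-environment idea (the image name arm-build, the paths and the file names are all made up, and I assume the toolchain is on PATH inside the image):

# Cross compile the current directory inside a container that holds
# the ARM toolchain:
docker run -v $(pwd):/work -w /work arm-build \
    arm-linux-gnueabihf-gcc -o hello hello.c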

Create a container

OK, let's create a container from the beginning. There are several public container images that Docker may use, and you can of course create your own repositories. Here we will take one of those public Ubuntu images and put an ARM toolchain on it.
Create a Dockerfile; this file will contain all changes that you will overlay the Ubuntu image with. All commands will run sequentially.
Here is an example of a Dockerfile:

FROM ubuntu
MAINTAINER Marcus Folkesson <marcus.folkesson@gmail.com>
RUN apt-get update
RUN apt-get install -y vim
RUN apt-get install -y openssh-server
ADD gcc-linaro-5.1-2015.08-x86_64_arm-linux-gnueabihf.tar.xz /opt
ADD start_services.sh /
RUN useradd -s /bin/bash -m build
ADD authorized_keys /home/build/.ssh/authorized_keys
RUN mkdir /var/run/sshd
CMD ["/bin/bash", "/start_services.sh"]
EXPOSE 8080/tcp 22/tcp

This Dockerfile will:
  • Download an Ubuntu image
  • Run several apt-get commands
  • Put an ARM toolchain (that I have in the same directory as the Dockerfile) into /opt
  • Put my start_services.sh into /
  • Add my public RSA key so that I don't need to type a password when I ssh in
  • Run start_services.sh when the container starts
  • Expose ports 8080 and 22
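
The start_services.sh script itself is not shown here, but a minimal sketch of what it could contain (purely an assumption on my part) is simply to run sshd in the foreground so the container keeps running:

#!/bin/sh
# Hypothetical start_services.sh: keep the container alive by running
# the SSH daemon in the foreground.
exec /usr/sbin/sshd -D
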
Docker will automatically create a snapshot for each step, so you can magically jump in between two steps.
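
For example, each step in the build output prints an intermediate image id, and you can start a shell in exactly that state (the id is whatever your build printed):

docker run -ti <intermediate-image-id> /bin/bash
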
Create an image called ‘experiment’.

# docker build -t experiment <path/to/dir/with/Dockerfile>

This will run all the steps in the Dockerfile.
You can now inspect your image with:

# docker images

REPOSITORY          TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
experiment          latest              68a982bf6ebe        About a minute ago   188.7 MB

Start the container in interactive mode with bash:

# docker run -ti experiment /bin/bash

root@fcb1c9f84edb:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

That’s it!
There are of course many other things that you may look at.
For example, remap ports from your host to the container (otherwise a running webserver would be pretty useless), share folders between the host and a container (or between containers!), control your containers with stop, start and exec (see the Docker manual) and much, much more.
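
A sketch of what that can look like (the container name and paths are made up):

# Map port 8080 on the host to port 8080 in the container, and share
# a host directory with it:
docker run -ti -p 8080:8080 -v /home/me/src:/src experiment /bin/bash

# Run a command inside an already running container (use the name or
# id from 'docker ps'):
docker exec -ti <container> /bin/bash

# Stop and start a container:
docker stop <container>
docker start <container>
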
Also, try to keep your fingers away from doing things inside the container (unless you are experimenting with some obscure stuff); instead, describe it all in your Dockerfile. Otherwise it is hard to reproduce the same container again.
With that said, let’s start doing some obscure stuff in our own isolated container!

Swetugg 2016

By Per Salmi, 17 December 2015

Swetugg is a non-profit Swedish conference for .NET developers. We at Combitech think this is important, so we are a supporting sponsor of Swetugg 2016. The conference runs over two days, 1-2 February 2016, in Stockholm.

The program with the complete speaker list, sessions, and all other information can be found at http://swetugg.se/. On top of that, the ticket price is only 1000 SEK!

Among the speakers' topics for the two days you will find everything from Angular, Azure and container technology with Docker to Continuous Delivery, cross-platform development and UX.

IoT with Qt/QML and PubNub in 10 minutes

By Erik Larsson, 29 October 2015

I really like coding for embedded systems, and I also like the Internet and connecting my embedded systems to it. But managing servers is really boring! So let's keep IoT as simple as possible.

When we attended the Qt World Summit in Berlin a few weeks ago, I listened to a seminar about IoT. It was held by a company called PubNub, and they talked about their service for publishing and subscribing to messages, and also how they handle over a trillion messages per month. The only thing we need to do is have our system/gadget/embedded system connected to the Internet, and PubNub will handle the rest. The sales guy challenged me to write my first test application in 20 minutes using PubNub. He was wrong; the first application took me just 10 minutes.

PubNub has recently released an API for Qt. As a Qt lover I thought I would test it out. The Qt API was working, but not really what I hoped for. So yesterday evening I wrote a simple QML extension plugin to wrap PubNub in Qt and make it even simpler. And that was the birth of pubnubqml. The GitHub page contains the code and a simple example. Happy hacking!

pubnubqml is really simple to use. You need to sign up at PubNub to get your subscribe/publish keys. When you have them, you can send and listen to messages in QML just like you use signals and slots in Qt.

This code is a simple chat example:

import QtQuick 2.3
import QtQuick.Window 2.2
import com.combitech.pubnub 1.0

Window {
    visible: true
    width: 300
    height: 400

    PubNubSubscriber {
        subscribeKey: "<Your sub-key here!>"
        channels: "chat"
        onMessage: {
            msgModel.append({message: message})
        }
    }

    PubNubPublisher {
        id: pub
        publishKey: "<Your pub-key here!>"
        subscribeKey: "<Your sub-key here!>"

        onPublishResult: {
            console.log("Publish result")
        }
    }

    Rectangle {
        id: rec
        anchors.left: parent.left
        anchors.right: parent.right
        anchors.bottom: parent.bottom
        height: 28
        border.width: 2

        TextInput {
            id: msg
            anchors.left: parent.left
            anchors.right: parent.right
            anchors.verticalCenter: parent.verticalCenter
            anchors.rightMargin: 50
            anchors.margins: 2
            font.pixelSize: 10
        }
        Rectangle {
            anchors.left: msg.right
            anchors.top: rec.top
            anchors.right: rec.right
            anchors.bottom: rec.bottom
            color: "green"
            Text {
                anchors.centerIn: parent
                text: "Send"
                color: "white"
                font.pixelSize: 10
            }
            MouseArea {
                anchors.fill: parent
                onClicked: pub.publish("chat", msg.text)
            }
        }
    }

    ListModel {
        id: msgModel
        ListElement { message: "Start chat" }
    }

    ListView {
        anchors.left: parent.left
        anchors.right: parent.right
        anchors.top: parent.top
        anchors.bottom: rec.top

        model: msgModel
        delegate: Text {
            height: 20
            text: message
            font.pixelSize: 12
        }
    }

}

Introduction to the Linux perf tool

By Marcus Folkesson, 5 September 2015

I’ve been using perf for a few years now but I’m still often surprised at how useful it is.

For the uninitiated, perf is a profiling tool that comes with the Linux kernel. Unlike other profiling tools like gprof or callgrind, perf profiles the entire system, which means both kernel and userspace code. Another big advantage is that the functionality is fully integrated into the kernel, so no daemons or anything of the sort are needed (as, for example, oprofile needs).

Perf is used with several subcommands. These are:

  • stat: measure total event count for a single program, or for the whole system for some time
  • top: top-like dynamic view of hottest functions
  • record: measure and save sampling data for single program
  • report: analyze file generated by perf record; can generate flat, or graph profile
  • annotate: annotate sources or assembly
  • sched: tracing/measuring of scheduler actions and latencies
  • list: list available events
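
For example, the stat subcommand alone gives a nice overview (any command works as the workload):

# Count events (cycles, instructions, context switches, ...) for a
# single program:
perf stat ls

# Count events for the whole system for five seconds:
perf stat -a sleep 5
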
Get started

The first step is to compile perf. Navigate to your Linux kernel source and move down to tools/perf/. Compile with make; the build system will automatically detect your configuration and build in support for the libraries that are available. I recommend that you have libelf on your system. Without libelf, perf is not able to resolve symbols for your application.
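
In other words, something like:

cd <kernel-source>/tools/perf
make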

Compile your application

We will profile the application below.

void c()
{
    volatile int x = 0;
    volatile int z = 0;

    for(x = 0; x < 1000000; x++) {
        z = x + x;
    }
}

void b()
{
    volatile int x = 0;
    for(x = 0; x < 100; x++) {
        c();
    }
}

void a()
{
    b();
}

int main(int argc, const char *argv[])
{
    a();
    return 0;
}

The flags used when compiling the application are -g and -fno-omit-frame-pointer. The former is for debug symbols and the latter makes it possible to generate call graphs.

gcc -g -fno-omit-frame-pointer ./main.c -o app

Record a run

Start the application with perf:

perf record -g ./app

Perf may also be attached to an existing process with -p PID (may be combined with -t TID for a specific thread).
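
For example (the application name and duration are just an illustration, and pidof assumes a single instance of app is running):

# Attach to the running app and sample it for ten seconds:
perf record -g -p $(pidof app) sleep 10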

This will result in a perf.data file containing the profiling data.

Examine the result

By default, a Linux system does not allow regular users to resolve kernel addresses. Change the permissions by writing 0 to /proc/sys/kernel/kptr_restrict. See man proc for more details.

echo 0 | sudo tee /proc/sys/kernel/kptr_restrict

The simplest way to examine the result is to run perf report. Report will start a nice ncurses interface.

[Image: perf report]

The interface lets you see the CPU usage of each function, for both kernel and userspace. It is also possible to expand each function to see the call sequence. As we see in the image above, c() is taking 99.77% of the CPU time.

It is also possible to annotate with source code. Here we can see that the instruction that compares x with 1000000 is the most time consuming operation. (I guess the next instruction (jle, jump if less or equal) spoils the instruction pipeline and is therefore the *real* bottleneck.)

[Image: perf annotate]

Going further

This was just the most common scenario where perf may be really useful. Perf has a lot more features, such as triggering on cache misses, hardware events, software events, support for dynamic tracing and so on. See …/tools/perf/Documentation for further reading.
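
For example, the events shown by perf list can be fed to the other subcommands:

# List all events perf knows about on this machine:
perf list

# Count cache misses for a single run of the application:
perf stat -e cache-misses ./app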

ASP.NET 5

By Mats Sjövall, 30 August 2015

During a few rainy days in the summer, I amused myself by trying out ASP.NET 5 (beta6). The project I built was a simple blog engine. On the whole, I can say that it is a very pleasant framework. It did not take particularly long to build the functionality I wanted, and the performance exceeded expectations. The only part that really felt like 'beta' was the tooling support in Visual Studio: quite a lot of buggy behavior, and a lot had to be done from the command line.

Some of the technologies I used were:

  • ASP.NET MVC
  • Azure AD Authentication
  • SQL Server
  • Entity Framework
  • Bootstrap
  • Bower
  • Gulp

The whole thing is hosted in Microsoft Azure.

Some observations:

  • A new project file format, xproj, where you do not have to list the included files; it compiles whatever is in the directories.
  • New JSON-based NuGet handling where you no longer have to add DLL references to the project file; DLLs from the NuGet packages are referenced automatically.
  • Entity Framework is now managed via the .NET Execution Environment instead of via the NuGet PowerShell console as before.
  • Roslyn analyzers do not work yet :(
  • I often had to do a manual restore of packages to get things to build.
  • The only unit test framework supported is xunit.
  • The Azure authentication did not work with CoreCLR (the open source variant).
  • Hosting in IIS Express makes it crash with an 'Access Violation'; dnx works fine, though.


Beta 7 is being released any day now, and there the focus has been on finishing the cross-platform support, so my plan going forward is to host my blog on a Linux machine instead of in IIS as it is now.

The result of my hacking can be viewed at http://matssjovall.azurewebsites.net.