Create virtual LTTng environments with vlttng


In my day-to-day job at EfficiOS, I often have to manually test different versions and configurations of LTTng. The typical way to test LTTng 2.7 when LTTng 2.8 is installed, for example, is, for each of the three LTTng projects (LTTng-tools, LTTng-UST, and LTTng-modules):

  1. Uninstall the current version (2.8).
  2. Check out version 2.7.
  3. Configure, build, and install version 2.7.

This whole manual process becomes painful over time, and that's without mentioning the situations where I also need to test different versions of the tools' dependencies, like GLib and libxml2. The same laborious process applies to Babeltrace.

In an ideal world, I would have multiple environments containing different versions and configurations of the tools and dependencies which form the LTTng ecosystem. This is totally possible with the projects' various environment variables and configure options (--prefix, --with-lttng-ust-prefix, and the rest), and it's exactly what vlttng, a project I've been working on in my spare time, exploits.
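To give an idea of what vlttng automates, here's roughly what the manual approach looks like when building an isolated LTTng 2.7 environment by hand. The prefix path is illustrative, and the exact tag names are assumptions; the --prefix and --with-lttng-ust-prefix options are the ones mentioned above.

```shell
# Build LTTng-UST 2.7 into an isolated prefix (path is illustrative).
cd lttng-ust
git checkout v2.7.0
./bootstrap
./configure --prefix="$HOME/envs/lttng-2.7/usr"
make -j"$(nproc)"
make install

# Point LTTng-tools at the LTTng-UST installed in that same prefix.
cd ../lttng-tools
git checkout v2.7.0
./bootstrap
./configure --prefix="$HOME/envs/lttng-2.7/usr" \
            --with-lttng-ust-prefix="$HOME/envs/lttng-2.7/usr"
make -j"$(nproc)"
make install
```

Repeat for each version and configuration you need to test, and you quickly see why automating this is worthwhile.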

In this article I explain what vlttng is exactly, after which I show a concrete example.

Tracing Java applications using the LTTng-UST Java agent

Hi everyone! Alexandre here.

In this blog post, I will demonstrate how to generate LTTng traces from the Trace Compass viewer using the LTTng-UST Java agent integration. This should effectively explain:

  1. How to obtain LTTng traces from any Java application instrumented with java.util.logging tracepoints.
  2. How to trace the Trace Compass application.

This is basically an update of Tutorial: Tracing Java Logging Frameworks for LTTng 2.7 and above. I will also go into more detail regarding how to use configuration files and how to record Trace Compass traces.
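For context, once a Java application is instrumented with java.util.logging and the LTTng-UST Java agent is on its classpath, the LTTng side boils down to a short command sequence. The session name below is made up for the example:

```shell
# Create a tracing session (name is arbitrary).
lttng create my-jul-session

# Enable events from the java.util.logging (JUL) domain; here, all loggers.
lttng enable-event --jul --all

# Start tracing, run the instrumented Java application, then stop.
lttng start
# ... run the Java application with the LTTng-UST Java agent on its classpath ...
lttng stop

# Inspect the recorded events and tear down the session.
lttng view
lttng destroy
```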

Announcing barectf 2.1


Today I am pleased to announce the release of barectf 2.1!

The major new feature is the ability to include one or more external YAML files as the base properties of the metadata, clock, trace, stream, and event objects.

I wrote a few "standard" partial configuration files which are installed with barectf 2.1.
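As a rough sketch of the inclusion mechanism, an object can pull in one of those partial files with the $include property. The file name below (stdint.yaml) and its placement are illustrative assumptions, not necessarily one of the shipped standard files:

    metadata:
      # Include a partial configuration file providing common
      # integer type aliases (file name assumed for the example).
      $include:
        - stdint.yaml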

I moved all the single-page documentation to a more organized project wiki. This could eventually be moved again to a standalone website if need be.

This new version also allows optional object properties to be forced to their default values by setting them to null, not unlike CSS3's initial keyword. This is useful when using type inheritance or inclusions, for example:

    base-int:
      class: int
      size: 23
      align: 16
      base: oct

    child-int:
      $inherit: base-int
      base: null

In the example above, the child-int type object inherits its parent's base property (oct), but it's reset to its default value (dec) by setting it to null.

I also added configuration file tests to the project. The tests are executed by LTTng's CI tool. The testing framework only depends on Bash. Those tests highlighted a bunch of bugs here and there which I fixed.

A summary of the changes is available in the new barectf changelog.

Tutorial: Remotely tracing an embedded Linux system


BeagleBone Black with GPIO test circuit.

Embedded systems usually present some kind of constraints, such as limited computing power, a minimal amount of memory, low or nonexistent storage, and so on. Debugging embedded systems within those constraints can be a real challenge for developers. Sometimes, developers fall back to logging to a file with their beloved printf() in order to find the issues in their software stack, but keeping those log files on the system might not even be possible.

The goal of this post is to describe a way to debug an embedded system with tracing and to stream the resulting traces to a remote machine.
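As an overview, streaming traces off the target involves a relay daemon on the receiving machine and a network-backed session on the embedded system. The host name and event names below are placeholders for the example:

```shell
# On the remote (receiving) machine: start the relay daemon.
lttng-relayd --daemonize

# On the embedded target: create a session that streams to the relay.
lttng create embedded-session --set-url=net://remote-host

# Enable a few kernel events and start streaming (example event names).
lttng enable-event --kernel sched_switch,irq_handler_entry
lttng start
```

The traces then land on the remote machine's storage instead of the target's, which sidesteps the storage constraint entirely.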

Monitoring real-time latencies


Photo: Copyright © Alexandre Claude, used with permission

Debugging and monitoring real-time latencies can be a difficult task: monitoring too much adds significant overhead and produces a lot of data, while monitoring only a subset of the interrupt handling process gives a hint but does not necessarily provide enough information to understand the whole problem.

In this post, I present a workflow based on tools we have developed to find the right balance between overhead and quickly identifying latency-related problems. To illustrate the kind of problems we want to solve, we use the JACK Audio Connection Kit sound server, capture relevant data only when it is needed, and then analyse the data to identify the source of a high latency.

The following tools are presented in this post: