Blog

Announcing barectf 2.1

Today I am pleased to announce the release of barectf 2.1!

The major new feature is the ability to include one or more external YAML files as the base properties of the metadata, clock, trace, stream, and event objects.

I wrote a few "standard" partial configuration files which are installed with barectf 2.1.
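
For example, a configuration could pull one of those standard files into its metadata object and then add its own properties on top. In the sketch below, the $include property name and the stdint.yaml file name are illustrative assumptions; the project wiki documents the exact property and the names of the files shipped with barectf 2.1.

metadata:
  # include a standard partial configuration file as the base of
  # this metadata object (names below are illustrative)
  $include:
    - stdint.yaml
  trace:
    byte-order: le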

I moved all the single-page documentation to a more organized project wiki. This could eventually be moved again to a standalone website if need be.

This new version also allows optional object properties to be forced to their default values by setting them to null, not unlike CSS3's initial keyword. This is useful when using type inheritance or inclusions, for example:

type-aliases:
  base-int:
    class: int
    size: 23
    align: 16
    base: oct
  child-int:
    $inherit: base-int
    base: null

In the example above, the child-int type object inherits its parent's base property (oct), but setting it to null resets it to its default value (dec).
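
The same reset-to-default trick applies when including partial files. Here is a sketch under the same assumptions as above (the $include property name, the base-clock.yaml file name, and the clock properties shown are illustrative):

# base-clock.yaml: hypothetical partial file shared by configurations
freq: 1000000
description: Base clock included by the main configuration

# main configuration: the clock object includes base-clock.yaml,
# then resets freq to its default value by setting it to null
clocks:
  default:
    $include:
      - base-clock.yaml
    freq: null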

I also added configuration file tests to the project. They are executed by LTTng's CI, and the testing framework only depends on Bash. These tests uncovered a number of bugs, which I have since fixed.

A summary of the changes is available in the new barectf changelog.

Tutorial: Remotely tracing an embedded Linux system

BeagleBone Black with GPIO test circuit.

Embedded systems usually present some kind of constraint, such as limited computing power, a minimal amount of memory, low or nonexistent storage, etc. Debugging embedded systems within those constraints can be a real challenge for developers. Sometimes, developers fall back to logging to a file with their beloved printf() in order to find the issues in their software stack, but keeping these log files on the system might not be possible.

The goal of this post is to describe a way to debug an embedded system with tracing and to stream the resulting traces to a remote machine.

Monitoring real-time latencies

Photo: Copyright © Alexandre Claude, used with permission

Debugging and monitoring real-time latencies can be a difficult task: monitoring too much adds significant overhead and produces a lot of data, while monitoring only a subset of the interrupt handling process gives a hint but does not necessarily provide enough information to understand the whole problem.

In this post, I present a workflow, based on tools we have developed, to find the right balance between overhead and the ability to quickly identify latency-related problems. To illustrate the kind of problems we want to solve, we use the JACK Audio Connection Kit sound server, capture relevant data only when it is needed, and then analyse the data to identify the source of a high latency.

The following tools are presented in this post:

Announcing the release of LTTng 2.7

We're happy to announce the release of LTTng 2.7 "Herbe à Détourne". Following on the heels of a conservative 2.6 release, LTTng 2.7 introduces a number of long-requested features.

It is also our first release since we started pouring considerable effort into our Continuous Integration setup to test the complete toolchain on every build configuration and platform we support. We are not done yet, but we're getting there fast!

While we have always been diligent about robustness, we have, in the past, mostly relied on our users to report problems occurring on non-Intel platforms or under non-default build scenarios. Now, with this setup in place at EfficiOS, it has become very easy for us to ensure new features and fixes work reliably and can be deployed safely for most of our user base.

Testing tracers—especially kernel tracers—poses a number of interesting challenges which we'll cover in a follow-up post. For now, let's talk features!

barectf 2: Continuous Bare-Metal Tracing on the Parallella Board

My last post, Tracing Bare-Metal Systems: a Multi-Core Story, written back in November 2014, introduced a command-line tool named barectf, which generates C functions able to produce native CTF binary streams from a CTF metadata input.

Today, I am very happy to present barectf 2, a natural evolution of what could now be considered my first prototype. The most notable feature of barectf 2 is its platform concept: prewritten C code for a specific target that manages packet opening and closing operations, effectively allowing continuous tracing from the application's point of view.

This post, like my previous one, explores the practical case of the Parallella board. This system presents a number of interesting challenges. To make a long story short, the traced bare-metal application runs on the 16-core Epiphany, producing a stream of packets which must be extracted (or consumed) on the ARM (host) side.