Blog

Tracing Java applications on Linux using the LTTng-UST Java agent

Hi everyone! Alexandre here.

In this blog post, I will demonstrate how to generate LTTng traces from the Trace Compass viewer itself, using the LTTng-UST Java agent integration. This covers two things:

  1. How to obtain LTTng traces from any Java application instrumented with java.util.logging tracepoints.
  2. How to trace the Trace Compass application.

This is basically an update of Tutorial: Tracing Java Logging Frameworks for LTTng 2.7 and above. I will also go into more detail about how to use configuration files and how to record Trace Compass traces.
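
For reference, here is a minimal sketch of what the instrumentation side looks like: a Java application that attaches the LTTng-UST Java agent's java.util.logging handler to a logger. The class and logger names are arbitrary, and the agent JAR must be on the classpath.

import java.util.logging.Handler;
import java.util.logging.Logger;

import org.lttng.ust.agent.jul.LttngLogHandler;

public class HelloJul {

    public static void main(String[] args) throws Exception {
        // The logger name ("hello.jul" here is arbitrary) becomes
        // the event name on the LTTng side.
        Logger logger = Logger.getLogger("hello.jul");

        // Attach the LTTng-UST handler: every record logged through
        // this logger is also emitted as an LTTng-UST event.
        Handler lttngHandler = new LttngLogHandler();
        logger.addHandler(lttngHandler);

        logger.info("Hello from a traced Java application!");

        // Release the agent's tracing resources when done.
        lttngHandler.close();
    }
}

Once the application runs with the agent, the corresponding events can be recorded using the JUL tracing domain, for example with lttng enable-event --jul hello.jul followed by lttng start.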

Announcing barectf 2.1

Today I am pleased to announce the release of barectf 2.1!

The major new feature is the ability to include one or more external YAML files as the base properties of the metadata, clock, trace, stream, and event objects.

I wrote a few "standard" partial configuration files which are installed with barectf 2.1.
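
As a sketch of how an inclusion can look, assuming one of those shipped files is named stdint.yaml and provides common integer type aliases (treat the file name as illustrative; see the project wiki for the actual list):

metadata:
  # Use the properties found in the included files as the base
  # properties of this metadata object.
  $include:
    - stdint.yaml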

I moved all of the documentation, previously a single page, to a more organized project wiki. It could eventually move again to a standalone website if need be.

This new version also allows optional object properties to be forced to their default values by setting them to null, not unlike CSS3's initial keyword. This is useful when using type inheritance or inclusions, for example:

type-aliases:
  base-int:
    class: int
    size: 23     # width in bits
    align: 16
    base: oct    # display base: octal
  child-int:
    $inherit: base-int   # start from base-int's properties
    base: null           # reset the display base to its default (dec)

In the example above, the child-int type object inherits all of base-int's properties, including base (oct), but that property is reset to its default value (dec) by setting it to null.

I also added configuration file tests to the project. They are executed by LTTng's CI, and the testing framework depends only on Bash. Those tests uncovered a number of bugs here and there, which I have since fixed.

A summary of the changes is available in the new barectf changelog.

Tutorial: Remotely tracing an embedded Linux system

BeagleBone Black with GPIO test circuit.

Embedded systems usually come with constraints such as limited computing power, a minimal amount of memory, and little or no storage. Debugging embedded systems within those constraints can be a real challenge for developers. Sometimes, developers fall back to logging to a file with their beloved printf() in order to find issues in their software stack, but keeping those log files on the system might not even be possible.

The goal of this post is to describe a way to debug an embedded system with tracing and to stream the resulting traces to a remote machine.
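
Concretely, LTTng's network streaming makes this possible with a relay daemon on the remote machine. A minimal sketch, where remote-host, the output path, and the kernel event names are placeholders:

# On the remote machine: receive the streamed trace data
# and write it to disk.
lttng-relayd --output=/path/to/traces

# On the embedded target: create a session that streams to
# the relay daemon instead of local storage, then trace.
lttng create embedded-session --set-url=net://remote-host
lttng enable-event --kernel sched_switch,irq_handler_entry,irq_handler_exit
lttng start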

Monitoring real-time latencies

Photo: Copyright © Alexandre Claude, used with permission

Debugging and monitoring real-time latencies can be a difficult task: monitoring too much adds significant overhead and produces a lot of data, while monitoring only a subset of the interrupt handling process gives a hint but does not necessarily provide enough information to understand the whole problem.

In this post, I present a workflow, based on tools we have developed, that strikes the right balance between overhead and the ability to quickly identify latency-related problems. To illustrate the kind of problem we want to solve, we use the JACK Audio Connection Kit sound server, capture relevant data only when it is needed, and then analyse the data to identify the source of a high latency.
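
This excerpt does not list the tools, but the "capture relevant data only when it is needed" part maps naturally onto LTTng's snapshot mode, in which events are recorded to in-memory ring buffers and written out only on demand. A rough sketch, with illustrative event names:

# Create a snapshot-mode session: events go to in-memory
# ring buffers (overwrite mode); nothing is written to disk yet.
lttng create latency-session --snapshot
lttng enable-event --kernel irq_handler_entry,irq_handler_exit,sched_switch
lttng start

# When a high latency is detected, dump the current buffer
# contents as a snapshot for analysis.
lttng snapshot record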

The following tools are presented in this post:

Announcing the release of LTTng 2.7

We're happy to announce the release of LTTng 2.7 "Herbe à Détourne". Following on the heels of a conservative 2.6 release, LTTng 2.7 introduces a number of long-requested features.

It is also our first release since we started pouring considerable effort into our continuous integration setup, which tests the complete toolchain on every build configuration and platform we support. We are not done yet, but we're getting there fast!

While we have always been diligent about robustness, we have, in the past, mostly relied on our users to report problems occurring on non-Intel platforms or under non-default build scenarios. Now, with this setup in place at EfficiOS, it has become very easy for us to ensure new features and fixes work reliably and can be deployed safely for most of our user base.

Testing tracers—especially kernel tracers—poses a number of interesting challenges which we'll cover in a follow-up post. For now, let's talk features!