As an embedded software engineer, I want to encourage you to make more and better use of abstraction as a means of good programming. Here I will motivate my reasoning by describing the current situation in embedded software development (as I see it) and the problems we are facing.

By the end of the article you will understand what abstraction is and how it relates to embedded software. In the coming posts I will present practical examples to show how the principles mentioned here can help you write better code. Since we are dealing with resource-constrained systems, I will especially focus on the costs associated with using abstractions.

Abstraction in a nutshell

abstraction (n) – the quality of dealing with ideas rather than events.

By using abstractions, one can use ‘mental concepts’ more directly in software. Every time you write a class, you essentially abstract away and hide implementation details. When well applied, the user does not need to know about the internals underneath the public interface. All the user sees is the idea – the abstraction. This holds true not only for classes, but also for types and functions. This is the original idea underlying OOP.
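To illustrate, here is a minimal sketch of such an abstraction (the class and its debounce protocol are my own invention, purely for illustration): the caller works with the idea – “is the button pressed?” – while the raw sampling and counting stay hidden behind the public interface.

```cpp
#include <cstdint>

// Hypothetical example: a debounced button. The user never sees the
// counter or threshold – only the abstract notion of "pressed".
class DebouncedButton {
public:
    // Feed one raw sample per tick; returns the debounced state.
    bool Update(bool raw_sample) {
        if (raw_sample == state_) {
            counter_ = 0;                 // stable, nothing to do
        } else if (++counter_ >= kThreshold) {
            state_ = raw_sample;          // change accepted after N samples
            counter_ = 0;
        }
        return state_;
    }

    bool IsPressed() const { return state_; }

private:
    static constexpr std::uint8_t kThreshold = 3; // samples to accept a change
    bool state_ = false;
    std::uint8_t counter_ = 0;
};
```

Should the debounce strategy ever change, no calling code needs to be touched – that is the payoff of hiding the implementation behind the idea.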

More abstraction leads to safer code

In the embedded domain we often deal with products where malfunction is not tolerated. Aside from industries where safety is of paramount importance (e.g. medical, defense, aerospace), this also holds true for much simpler everyday products. Who likes to power cycle their wireless keyboard by temporarily removing its batteries?

Additionally, we commonly have to deal with error sources that are very hard to track down, such as hardware malfunctions, circuit errors, sensor data disturbed in ways we did not anticipate, race conditions arising from an array of external events… the list could go on and on.

These errors often demand a frustratingly long time and a lot of attention to deal with. This is why we should do our best to prevent the more obvious programming errors altogether. Fortunately, modern software engineering can lead the way.

Type safety is the extent to which a programming language discourages or prevents type errors.

Consider the following valid code snippet which will compile just fine:

uint16_t current_raw = GetRawFromADC() & 0x0FFF; // 12-bit ADC
int32_t current_mA = RawToMilliAmpere(current_raw);
current_mA = current_mA & 0x100; // Bitwise operation on a quantity?
static int32_t const shunt_mOhm = 50;
int32_t voltage_mV = current_mA*shunt_mOhm; // Result in millivolt? Not microvolt?

Granted, the errors made here are somewhat easy to spot. But why do we have to manually track the type and magnitude of our variables here? Why is a bit-operation even allowed on our variable? Can the compiler help us?

Yes, it can. By employing proper abstraction and using distinct types for our units, the compiler will gladly inform us about these mistakes and issue an error.
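To make this concrete, here is a minimal sketch of such distinct unit types (the names MilliAmpere, MilliOhm and MicroVolt are mine, not from any particular library). Since no bitwise operators are defined for these types, something like `current & 0x100` simply refuses to compile, and multiplying a current by a resistance yields microvolts by construction:

```cpp
#include <cstdint>

// Illustrative strong types for physical quantities.
// No operator& is defined, so bitwise misuse is a compile error.
struct MilliAmpere { std::int32_t value; };
struct MilliOhm   { std::int32_t value; };
struct MicroVolt  { std::int32_t value; };

// mA * mOhm = uV  (10^-3 * 10^-3 = 10^-6), tracked by the type system,
// so nobody can silently mistake the result for millivolts.
constexpr MicroVolt operator*(MilliAmpere i, MilliOhm r) {
    return MicroVolt{i.value * r.value};
}
```

With these types in place, `MilliAmpere{100} & 0x100` and assigning the product to a `MilliVolt`-style type both become compile-time errors instead of silent bugs.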

Memory-mapped registers are the single most important interface an embedded programmer faces. Yet the common way to interact with them is to bit-manipulate them directly using pre-defined address constants. This guarantees basically no type safety at all and therefore allows for all kinds of misuse. Wouldn’t it be nice if the compiler refused our read from a write-only register? Refused to compile when we use a bitmask from register A to mask a read from register B?
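One way to get there is to encode the access rights in the register’s type. The following is a simplified sketch of the idea (not how Kvasir or any specific library actually does it): a write-only register type simply has no Read() member, so a forbidden read is rejected by the compiler rather than discovered in the debugger.

```cpp
#include <cstdint>

// Illustrative wrappers encoding register access rights in the type.
template <typename T>
class ReadOnlyReg {
public:
    explicit ReadOnlyReg(volatile T* addr) : addr_(addr) {}
    T Read() const { return *addr_; }
    // No Write(): writing a read-only register is a compile error.
private:
    volatile T* addr_;
};

template <typename T>
class WriteOnlyReg {
public:
    explicit WriteOnlyReg(volatile T* addr) : addr_(addr) {}
    void Write(T value) { *addr_ = value; }
    // No Read(): reading a write-only register is a compile error.
private:
    volatile T* addr_;
};
```

The bitmask problem can be handled the same way, by making each register’s mask a distinct type that only that register’s Read()/Write() accepts.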

Clearly, this would ease our job quite a bit. I guess everybody – if not right away, then surely after the next 2-hour debug session – would agree that:

The more errors caught at compile time, the better.

Fortunately, the kinds of errors shown above can be consistently prevented today – by leveraging the features of a strongly typed language such as C++.

Better Code Reuse

Code which is implemented in a more abstract way can be reused more easily. Highly abstract code can be grouped together to form libraries.
Code reuse obviously increases programmer productivity. The use of high-quality (mature and well-tested) libraries leads to more efficient and less error-prone software.

Unfortunately, the embedded community seems to be particularly bad at reusing code. Maybe this is a result of poor customizability due to the lack of powerful generalization mechanisms (such as templates) in C. Preprocessor-based customization is both error-prone and does not scale well. Even device drivers for widely used chips are frequently (re)written from scratch, mostly due to somewhat “special requirements” perceived for the particular application. I would argue that most applications could in principle use the same driver – given that it is sufficiently flexible. In C++, policy-based class design can provide this kind of flexibility.
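A tiny sketch of what policy-based design buys us here (the sensor, its command byte, and the transport interface are all invented for illustration): the driver takes its transport as a template parameter, so the very same driver code runs over SPI, I2C, or a test stub – no #ifdef customization required.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical driver: the transport policy is a template parameter.
template <typename Transport>
class SensorDriver {
public:
    explicit SensorDriver(Transport& t) : transport_(t) {}

    std::uint8_t ReadId() {
        transport_.Send(0x0F);      // illustrative "who am I" command
        return transport_.Receive();
    }

private:
    Transport& transport_;
};

// A test-stub policy: records what was sent and returns a canned reply.
// A real SpiTransport or I2cTransport would satisfy the same interface.
struct StubTransport {
    std::vector<std::uint8_t> sent;
    void Send(std::uint8_t b) { sent.push_back(b); }
    std::uint8_t Receive() { return 0x42; }
};
```

Because the policy is resolved at compile time, this flexibility costs nothing at run time – the compiler inlines the stub or the real bus driver alike.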

Currently there are only a few libraries I know of that are tailored specifically to resource-constrained needs. Fortunately, this situation seems to be changing rapidly. Recent promising examples include:

  • Kvasir, a library for type-safe access to SFRs (special function registers) on Cortex-M processors
  • boost-experimental.sml, a highly efficient State Machine Library
  • type_safe, zero-overhead utilities for preventing bugs at compile time

A tipping point for modern embedded development?

I assume that most programming for embedded devices is still done in C nowadays. I suspect that most teams which choose C++ are using it more or less as a “C with classes”. This is an assumption drawn from personal experience, since solid evidence supporting it is hard to come by.

However, here is why I believe we have reached a point where a transition towards C++ is getting more and more likely:

  • C++ is evolving way faster than it used to. The advent of new constructs like constexpr makes compile-time programming much more accessible for everyday use. Formerly the domain of expert-level template metaprogramming, compile-time programming means calculating as much as possible already at compile time rather than when executing the actual code. This is of particular interest for embedded programmers. For example, a CRC table can actually be computed during compilation by regular functions – no more fancy arrays of poorly understood magic numbers needed.
  • C++ is gaining much better compiler support for embedded targets. For instance, the widely used ARM Cortex-M processors can be programmed using a state-of-the-art GCC toolchain, providing full support for the latest language features.
  • Embedded used to be a domain of electrical engineers. Highly specialized in hardware and circuit design, they often lacked a similar level of expertise in software engineering. Ever-evolving requirements and the rising complexity of even simple-looking devices (think of a typical IoT-connected device) raise the bar for software quality considerably. This has already led to an influx of more traditionally trained software engineers into the embedded area.
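The CRC example from the first point can be sketched in a few lines (assuming C++17 for the constexpr loops and array mutation): an ordinary function generates the standard CRC-32 lookup table, and because everything is constexpr, the finished 256-entry table is baked into the binary at compile time – the source contains the algorithm, not the magic numbers.

```cpp
#include <array>
#include <cstdint>

// One table entry, computed by the standard bit-by-bit CRC-32
// (reflected polynomial 0xEDB88320) – a perfectly ordinary function.
constexpr std::uint32_t Crc32Entry(std::uint32_t i) {
    std::uint32_t crc = i;
    for (int bit = 0; bit < 8; ++bit)
        crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    return crc;
}

// The whole lookup table, built with a plain loop at compile time.
constexpr std::array<std::uint32_t, 256> MakeCrc32Table() {
    std::array<std::uint32_t, 256> table{};
    for (std::uint32_t i = 0; i < 256; ++i)
        table[i] = Crc32Entry(i);
    return table;
}

constexpr auto kCrc32Table = MakeCrc32Table(); // evaluated during compilation
```

A `static_assert(kCrc32Table[1] == 0x77073096u)` against the well-known reference table even lets the compiler verify the generator itself – something a hand-pasted array of magic numbers can never offer.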