First steps with Portaudio

For quite some time, I've been thinking about creating an open source hearing aid project. Reinventing the wheel is very useful for learning, but in such a big project I need to rely on some libraries. Maybe my choice at this stage is the wrong one for the aim of the system, but I have to begin somewhere. So, to handle the sound card driver (I/O management), I chose the C library Portaudio.

Portaudio is one of the few libraries that is cross-platform, robust, and has been on the market for several years. Its development is quite slow (but I don't think that's an issue in this field), and the API learning curve is a bit steep at first. A few other options exist, such as RtAudio developed at McGill University, OpenAL from Creative with a focus on 3D audio, FMOD which is usually used for games, and probably some others that I'll try to keep up to date here.

According to the official documentation, Portaudio can be used in embedded systems (PDF), and that is one of the final goals I would like to reach.

Where should I begin?

Even if Portaudio has a steep learning curve, the documentation is really good and gives a good overview. But if, like me, you're not a senior programmer, you'll have to dig deeper into the code examples to understand a bit more what to do.

In the case of a hearing aid system, the idea is that I need real-time computation following Figure 1. By real time, I mean that the computation time must stay below a time constraint. In our case, the time constraint must respect two rules:

  • the computation must finish before the duration of the buffer has elapsed,
  • the computation must stay below the integration time of the auditory system.

The point of the first constraint is to avoid audio glitches. At best, glitches are very annoying for the user; at worst, they can damage the hearing aid system or the user's hearing (the idea is to improve hearing, not the other way around).

Figure 1. Description of the simplest model used for a hearing aid system: three blocks (inputs, processing, outputs), with Portaudio managing the I/O.

From that point, we can dig into the code and try to understand what we need to do. For the moment, we won't perform any processing: we'll just take the samples received from the input (microphone(s)) and send them directly to the output (speaker(s)).

!!!! Be careful: when you test the system, you are creating a feedback loop that can lead to damage to your audio system or your hearing.

First, let’s create the main function from which we’ll call Portaudio.

#include <stdio.h>
#include <stdlib.h>
#include <portaudio.h>

static PaError err;
static PaStream *stream = NULL;

int main (int argc, char *argv[]) {
	// Definition of variables we need
	PaStreamParameters in_params, out_params; // structures for I/O configuration
	int num_channels = 2; // 2 channels (stereo)
	int samplerate = 44100; // sample rate
	int device_id = 3; // sound card device ID (depends on your system)
	int frames_per_buffer = 256; // audio buffer size for portaudio
	PaSampleFormat sample_format = paFloat32; // type of sample format

	// Initialization of portaudio
	PaInit();

	in_params.device = device_id;
	in_params.channelCount = num_channels;
	in_params.sampleFormat = sample_format;
	in_params.suggestedLatency =
		Pa_GetDeviceInfo(device_id)->defaultLowInputLatency;
	in_params.hostApiSpecificStreamInfo = NULL;
	out_params.device = device_id;
	out_params.channelCount = num_channels;
	out_params.sampleFormat = sample_format;
	out_params.suggestedLatency =
		Pa_GetDeviceInfo(device_id)->defaultLowOutputLatency;
	out_params.hostApiSpecificStreamInfo = NULL;

	/* Configuration of the stream.
	 * We register the callback we'll need to avoid blocking I/Os. */
	err = Pa_OpenStream(&stream,
			    &in_params,
			    &out_params,
			    samplerate,
			    frames_per_buffer,
			    paNoFlag, /* portaudio will clip for us */
			    PlayrecCallback,
			    NULL);
	PaErrorTest(err);

	// Starting of the stream
	err = Pa_StartStream(stream);
	PaErrorTest(err);

	// close the program whenever you want
	printf("Hit ENTER to stop program.\n");
	getchar();

	// stop the stream
	PaClose(stream);

	return 0;
}

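One caveat in the code above: device_id is hardcoded to 3, which is specific to my machine. A small sketch of a device lister (in the spirit of the devices_info.c source that the Makefile compiles) to find the right ID on yours:

```c
#include <stdio.h>
#include <stdlib.h>
#include <portaudio.h>

int main(void) {
	PaError err = Pa_Initialize();
	if (err != paNoError) {
		fprintf(stderr, "Error: %s\n", Pa_GetErrorText(err));
		return EXIT_FAILURE;
	}

	// Enumerate every device known to Portaudio with its channel counts
	int count = Pa_GetDeviceCount();
	for (int i = 0; i < count; i++) {
		const PaDeviceInfo *info = Pa_GetDeviceInfo(i);
		printf("[%d] %s (inputs: %d, outputs: %d)\n",
		       i, info->name,
		       info->maxInputChannels, info->maxOutputChannels);
	}

	Pa_Terminate();
	return EXIT_SUCCESS;
}
```

Pick a device that reports at least 2 input and 2 output channels, since we open a stereo duplex stream.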

If the comments are not enough, I recommend either checking the Portaudio documentation or sending me a message to tell me what you didn’t understand. There are a few things to pay attention to. First, we have to write the function PaErrorTest(), which is necessary to report Portaudio errors in a meaningful way.

/*
 * FUNCTION: PaErrorTest()
 * Inputs:	error		error generated by Portaudio
 * Returns:	nothing; prints the nature of the error on stderr in a
 *		meaningful way and exits on failure.
 */
void PaErrorTest(PaError error) {
	if (error != paNoError) {
		// using Pa_Terminate is crucial to avoid resource leaks
		Pa_Terminate();
		fprintf(stderr, "Error: %s\n", Pa_GetErrorText(error));
		exit(EXIT_FAILURE);
	}
}

Killing Portaudio gracefully through Pa_Terminate() is very important because otherwise, for example, the audio device could remain unavailable until you reboot your system. Then we can proceed to the initialization of Portaudio itself using the function PaInit().

static void PaInit() {
	static int initialized;

	if (!initialized) {
		err = Pa_Initialize();
		PaErrorTest(err);
		initialized = 1;
	}
}

Nothing really complex here; I invite you to read the documentation if you need a better understanding of what is going on in this part. Let’s do the same for stopping Portaudio, using PaClose().

static void PaClose(PaStream *stream) {
	err = Pa_StopStream(stream);
	PaErrorTest(err);
	err = Pa_CloseStream(stream);
	PaErrorTest(err);
	Pa_Terminate();
}

The only difference (but it makes sense) is that the function takes the stream pointer as an argument, since we first need to stop the stream. I use Pa_StopStream() here because I chose to stop the stream only after all pending buffers have been played. Another way would be to stop the stream right away and throw away the pending buffers; this would be achieved using Pa_AbortStream().

Now, the most difficult part: the callback function. Since version V19, it is possible to choose between the callback and the blocking I/O model. I chose the callback model because it is currently compatible with all APIs. I decided to call it PlayrecCallback().

/*
 * FUNCTION: PlayrecCallback()
 * Inputs:	input_buffer		array of interleaved input samples
 *		output_buffer		array of interleaved output samples
 *		frames_per_buffer	number of frames to be processed
 *		time_info		struct with time in seconds
 *		status_flags		flags indicating whether input and/or
 *					output buffers have been inserted or
 *					will be dropped to overcome underflow
 *					or overflow conditions
 *		user_data		pointer to the data passed to
 *					Pa_OpenStream() (unused here)
 * Return:	paComplete or paAbort to stop the stream, or paContinue for
 *		the stream to continue running.
 */
int g_num_no_inputs = 0;
int PlayrecCallback(const void *input_buffer,
			   void *output_buffer,
			   unsigned long frames_per_buffer,
			   const PaStreamCallbackTimeInfo *time_info,
			   PaStreamCallbackFlags status_flags,
			   void *user_data) {
	float *out = (float *) output_buffer;
	const float *in = (const float *) input_buffer;
	unsigned int i;

	(void) time_info;
	(void) status_flags;
	(void) user_data;

	if (input_buffer == NULL) {
		// no input available yet: output silence instead of garbage
		for (i = 0; i < frames_per_buffer; i++) {
			*out++ = 0; // left
			*out++ = 0; // right
		}
		g_num_no_inputs += 1;
	} else {
		for (i = 0; i < frames_per_buffer; i++) {
			*out++ = *in++; // left
			*out++ = *in++; // right
		}
	}
	return paContinue;
}

If you want the complete version of this small code snippet, you can find it (and a bit more) in the Git repository. Finally, if you want to compile it (at least under GNU/Linux), you can use gcc directly or write a Makefile. Here are some pointers for doing it with a Makefile:

# Tools Makefile
# To enable DEBUG mode, use the command
# > make DEBUG=True

TARGET = ewobasic
BUILD_DIR = build
OBJ_DIR = obj
SRC_DIR = src  # adapt to your source layout
SRC_EXT = c
ARCH = $(shell uname -m)

CC ?= gcc
CFLAGS += -std=c11 -Wall -Werror -pedantic-errors

ifndef DEBUG
CFLAGS += -O2        # release flags
else
CFLAGS += -g -O0     # debug flags
endif

LIBS += jack portaudio-2.0
LDLIBS = $(shell pkg-config --libs $(LIBS))
SOURCES := $(shell find $(SRC_DIR) -name '*.$(SRC_EXT)')

RM ?= rm -rf

#default: all

# EXEC without rules
.PHONY: all clean mrproper

# Deactivation of implicit rules
.SUFFIXES:

all: $(TARGET)

$(TARGET): devices_info.c
	@echo "Building $@..."
	@echo $(ARCH)
	@echo $(SOURCES)
	$(CC) $^ -o $(TARGET) $(CFLAGS) $(LDLIBS)

#ewo: main.o
#	$(CC) -o $@ $^ $(LDLIBS)

#%.o: %.c
#	$(CC) -o $@ -c $< $(CFLAGS)

clean:
	$(RM) $(OBJ_DIR)

mrproper: clean
	$(RM) $(TARGET)


As you saw, we built a very simple system taking the input from the microphone and sending it directly to the stereo output. It may seem a bit simple, since no transformation is applied to the signal in between, but from a hearing aid perspective that's the first requirement. Now imagine adding basic signal processing such as a low-pass filter, or something more complex like beamforming on a microphone array. The sky is the limit, and I invite you to refer to the examples in the documentation for a better overview. I have also added this example to a Git repository if you want to play with it more easily.