The Smooth Operator of Automotive UX - How To Drive Multiple Threads

In-car user interfaces used to be extremely simple: communication between the vehicle and the driver or passengers went through the steering wheel, pedals, and indicators – that was all UX meant at the time. Electronics, however, turned this two-way relationship upside down. How exactly? Here is an example.

Today’s cars often have multiple displays, which concurrently show a variety of content. The displays are driven by several devices rather than just one, and computation is distributed among several on-board computers. Traditional one computer–one display setups still exist, but a new paradigm has gained ground recently, in which a powerful central unit drives several displays – and several operating systems – at the same time. The latter matters because tasks differ in how critical they are, and each can be served better by an operating system suited to its domain.

Clustered automotive systems

A typical example is dashboard rendering: it is handled by the safer and more reliable QNX or Linux system, because the displayed information is critical while driving. The media centre in the centre console, on the other hand, is not essential, so its interface is often an Android application. In addition, users often bring their own devices – phones, tablets – into the car and want to connect them to its systems, which also has to be managed somehow.

For the above reasons, in-car UX – user experience in the traditional sense of the word – generates special demands for developers to cope with. For instance, the car cannot start up slowly: a couple of seconds after you turn the ignition key, everything has to be up and running, and software engineers have to find efficient tricks and solutions to meet this demand. Another characteristic of this area: the automotive industry is extremely fragmented in terms of software. As mentioned before, multiple operating systems cooperate within a single car, and different car makers have different preferences as to the best mix of systems. An automotive supplier has to support all of these platforms on its own – and do so cost-effectively.

NNG’s solution is to develop a shared C++ core that serves as the foundation of a variety of software products, and to deliver this core together with a minimal platform-specific “glue code” so that it fits into each car maker’s individual systems. Scalability and distributed operation are essential requirements: some elements of the delivered software run on one OS, other modules on another, and certain parts, for instance, on the passenger’s phone. So, for example, the backseat passenger can browse the maps of the navigation system on his/her tablet and recommend via points (e.g. an ice cream parlour) to the navigation application.

It is a basic requirement, of course, that a distributed system runs fast and smoothly, even if some operations are performed on separate devices. To minimise latency and support distributed operation in NNG’s solution, the majority of components run asynchronously, which means that the components do not wait for each other (unless it’s absolutely necessary). Not all function calls in NNG’s software are fully asynchronous yet; however, I/O and processing – the most problematic areas – already are. Although this approach addresses several issues, it has its drawback: asynchronous programming is cumbersome and slow to develop, and places a heavy burden on the developer.

This is especially true for prototyping, where the architecture is not yet final. In asynchronous programming you not only have to implement a given task, you also need an understanding of the entire environment to integrate it efficiently. As the number of dependencies between program threads grows, the number of problems attributable to asynchronous operation grows with it. Each bespoke asynchronous operation needs its own machinery for standardised data flow, and the number of such mechanisms can grow uncontrollably; and since each is built around its own asynchronous logic, they can be reused only to a limited extent.

Since this approach encodes the majority of operations in the architecture itself, software engineers and architects with a comprehensive knowledge of the code base play a key role – yet their time is far too expensive for simple tasks, and there are only a few of them. The approach also raises the bar for new recruits: the ability to write asynchronous code becomes a requirement for every developer, whether applying for a junior or a senior position.

Multiple threads – a basic requirement in an automotive environment

It is a well-known problem in the PC world that if software performs all operations on one single thread, certain time-consuming tasks (e.g. intensive I/O) become a bottleneck, compromising the user experience. The programmer’s task, therefore, is to move these activities onto separate threads and return their results to the main program thread. Frequently there is a dedicated I/O thread, which loads a texture and then returns the result via a queue. This mitigates latency to a certain extent – you can, for example, carry on drawing instrument pointers until the result arrives. The problem is that even for this relatively simple scenario you have to build and maintain two queue mechanisms, which may never be reused after all.

So we had a complex challenge to respond to:
– separate the code that performs a given operation from its environment;
– have a mechanism that performs these operations asynchronously;
– enable results to be linked and passed from one operation to another;
– make sure that certain tasks are not carried out concurrently;
– let the system scale with the available resources;
– and, of course, have the system run on all target platforms.

A home-grown solution

To resolve this problem, NNG came up with an in-house solution called TaskScheduler (or simply, Scheduler). It lets us keep the synchronous approach while programming the individual tasks; the Scheduler then links the tasks into chains and makes sure that the results are transferred properly. In addition, it controls the processing threads; prioritises, schedules, and allocates the tasks among the threads; and ensures that the results reach their destinations.

Therefore, the writer of asynchronous code writes synchronous subtasks, and the Scheduler composes proper tasks from these subtasks. As the linking is performed by the Scheduler, the implemented subunits are reusable, because they are not tied to one particular queue mechanism. Within the application, the Scheduler is the only mechanism that manages asynchronous execution; no other asynchronous mechanism is involved.

There are dedicated threads as well – certain operations (e.g. rendering) can only be carried out on certain threads on certain platforms. In such cases it helps that you can parameterise a task and instruct the Scheduler to carry it out on the given dedicated thread only.

An advantage of this approach is that the mechanism used for loading textures, say, is well separated in the code and lives in a single place. You don’t have to hunt through the project for queues, or for the tasks that use them, because the asynchronous machinery is fully transparent. Splitting work into tasks also improves resource utilisation: the Scheduler can allocate capacity among a larger number of small, simple tasks more efficiently, and make the most of the platform’s capacity.

The Scheduler is also used for scheduling parallel execution – since two tasks on the same thread cannot run concurrently, the Scheduler ensures that they end up on separate threads. To deal with concurrency, we also developed a using_resources attribute, which specifies the resources used by a given task and allows the Scheduler to guarantee that no two tasks sharing even one resource are carried out at the same time. With consistently declared resources, this lets us write lock-free code and avoid tasks waiting on each other.

Additionally, the Scheduler ensures that non-thread-safe libraries are only ever called from one thread at a time. An example is the FreeType library, used to rasterise fonts and render the glyphs of text: the using_resources attribute guarantees that two glyph loaders never run at the same time.

Since the Scheduler has insight into all of the tasks, it can obviously be used to monitor performance as well. While running, the system logs all events, and you can process and visualise program runs, task interactions, Scheduler decisions, and so on. A twist in this approach is that it also works in the multi-device setup described above: logs from several pieces of hardware can be handled together, so a technical hitch or slowdown can be identified even in cross-device operation. In the image above, the task is to fetch a texture from another computer, but it is delayed because the other computer is busy running another time-consuming task.

The solution has several points in common with Microsoft’s PPL/PPLX library; however, we distribute the worker threads differently, and so we have dedicated execution threads. Moreover, the Scheduler supports platforms that PPLX does not – it runs on QNX and in the browser (via Emscripten) – and it comes with the integrated visualisation and performance-analysis tooling mentioned above.