
Impressions from the Embedded World Conference

Aleksandar Atanasovski, Team Lead for Embedded Software Engineering

The Embedded World 2020 conference was held in Nuremberg, Germany, from February 25 to 27. Every year, it presents the latest developments in embedded system technologies and gives the embedded community an opportunity to learn about new products and innovations, present their own products and services, and connect with others. This year, it gathered around 32,000 trade visitors and approximately 2,200 conference participants from 77 countries. Our colleagues from the HTEC embedded team were there, presenting our services and products, but they also had a chance to learn more about the latest trends in this area.

Although the impact of COVID-19 led to a higher number of cancellations, especially among the most prominent players in embedded technologies, our colleagues from the embedded team were still able to bring back valuable technology insights and news. Below you can read the impressions from the conference shared by our Team Lead for Embedded Software Engineering, Aleksandar Atanasovski.

What are the latest trends in intelligent systems, AI, and ML in the embedded world?

Computer vision and sensor network systems that are widely adopted in ADAS systems in the automotive industry are now spreading into other application areas as well: medical and healthcare, industrial, railway, and consumer. New silicon products offer higher performance and a level of technology integration that allows moving the decision-making and processing load from the cloud to the edge.

Pattern recognition and smart vision solutions are also available from many vendors (including both hardware and software stacks) for specific purposes: face detection, object filtering, recognition and counting, and even thermal image processing. Unfortunately, the majority of blue-chip companies were not present at the exhibition, which meant fewer of the technology novelties we had expected. Still, every company tried to present its solutions as smart, intelligent, or part of such a system.

In the sensor area, the focus was on homogeneous and heterogeneous sensor integration for creating data streams with high information value, useful for AI/ML processing and decision making. It was especially interesting that LiDAR and mmWave radar sensors are now widely available and quite affordable. Some companies are developing smart systems based on radar sensor networks that can compete with equivalent smart vision solutions by offering higher accuracy and selectivity (for some specific applications), a lower price, and fewer privacy concerns.
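To make the idea of heterogeneous sensor integration more concrete, here is a minimal, purely illustrative C++ sketch (all types, names, and values are hypothetical, not taken from any exhibitor's product): a camera detection and a radar measurement are merged into a single fused record that carries more information than either sensor alone, which is the kind of high-value data stream an AI/ML pipeline would consume.

```cpp
#include <cstdio>
#include <optional>

// Hypothetical sketch of heterogeneous sensor fusion:
// a camera detection (class + confidence) combined with a radar
// measurement (range + radial speed) into one richer data record.
struct CameraDetection { const char* label; float confidence; };
struct RadarTarget     { float range_m; float speed_mps; };
struct FusedTrack      { const char* label; float confidence; float range_m; float speed_mps; };

// Fuse only when the camera is reasonably confident; the combined
// record carries more information than either sensor alone.
std::optional<FusedTrack> fuse(const CameraDetection& cam, const RadarTarget& radar) {
    if (cam.confidence < 0.5f) {
        return std::nullopt;                       // camera not confident enough
    }
    return FusedTrack{cam.label, cam.confidence, radar.range_m, radar.speed_mps};
}

int main() {
    CameraDetection cam{"pedestrian", 0.87f};      // made-up sample readings
    RadarTarget radar{12.4f, -1.2f};

    if (auto track = fuse(cam, radar)) {
        std::printf("%s at %.1f m, %.1f m/s (confidence %.2f)\n",
                    track->label, track->range_m, track->speed_mps, track->confidence);
    }
    return 0;
}
```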

The conclusion is that the technology is there; it is not too expensive, it is ready for integration, and it shows good performance. What still lags behind is its utilization in real applications.

What can you share with us about the novelties in Embedded Operating Systems? Are the standards changing, and if so, how?

Of all the sectors represented this year, the list of embedded software companies was perhaps the most complete. Since the quality requirements for embedded software and firmware are very high in terms of reliability, companies are trying to offer tools and utilities that achieve these goals with minimal cost and effort. The focus is still on the C and C++ programming languages, while some companies also support SPARK and Ada, Rust, and Go. They promote the advantages of these languages over C, which is insecure and leads to bugs and issues that are costly and hard to fix and maintain. However, the embedded software industry is adopting C++ as a primary coding language faster than these alternatives. That is why companies are offering more tools and services than ever for C++ code checking and analysis, as well as for certification (MISRA C++, AUTOSAR, DO-178B/C, DO-278A, and others).
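To illustrate the class of defects these checking tools and safer languages target, here is a small, hypothetical C++ sketch (not tied to any particular vendor tool): a C-style off-by-one read that compiles cleanly but invokes undefined behaviour, next to a bounds-checked std::array alternative that fails loudly instead of silently corrupting data.

```cpp
#include <array>
#include <cstdio>
#include <stdexcept>

// Hypothetical example: reading one element past the end of a buffer,
// a classic C defect that static analysers and MISRA-style rules flag.
int last_sample_unsafe(const int samples[], int count) {
    return samples[count];          // off-by-one: reads past the buffer
}

// Safer C++ alternative: std::array::at() performs a bounds check
// and throws instead of returning garbage.
int last_sample_checked(const std::array<int, 4>& samples) {
    return samples.at(samples.size() - 1);
}

int main() {
    std::array<int, 4> samples{10, 20, 30, 40};

    std::printf("checked: %d\n", last_sample_checked(samples));

    // The unsafe call is undefined behaviour; code checkers and
    // sanitizers are meant to catch it before it ships.
    // std::printf("unsafe: %d\n", last_sample_unsafe(samples.data(), 4));

    try {
        (void)samples.at(4);        // out-of-range index caught at run time
    } catch (const std::out_of_range& e) {
        std::printf("caught: %s\n", e.what());
    }
    return 0;
}
```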

Since the most popular RTOSes are now backed by big companies (for example, FreeRTOS and Mbed OS are supported by Amazon and Arm, respectively), the other vendors are trying to attract users with the high quality standards and reliability of their RTOS and middleware, as well as with direct technical support. Most of the leading companies in this area were trying to convince the audience that the higher price is worth it compared to the development and debugging effort it saves.

Some of the projects HTEC is developing deal with Sound Analysis and Speech Processing. What insights have you brought back to us in this area from the conference?

HTEC participates in several embedded projects that are strongly driven by voice services and sound-processing features. Some of our partners are leaders in that industry, which allows us to work with state-of-the-art sound and voice technologies.

EW2020 was the place to be this year for companies specializing in the sound and voice domain to promote solutions that are interesting for system integrators like HTEC. Unfortunately, the majority of these companies were from Europe only, but the competition gets stronger every year.

This year's focus was on Voice User Interface (VUI) solutions that are independent of cloud services (provided by the big cloud companies and data centers). This means that voice recognition and interaction are performed directly on the device hardware without exchanging data with the cloud.

Sound data flows processed on the edge require combining several technologies to ensure good user experience and sound quality. Therefore, EW2020 attracted leading European companies that offer algorithms for human speech processing (real-time filtering, noise and echo reduction, beamforming, voice detection), software models and libraries for offline voice recognition, and text-to-speech engines and datasets. The majority of sound/voice software stacks are compatible with application processors based on ARM cores (even with a single core instance only), while some lightweight variants are developed for MCU devices based on Cortex-M4 cores.
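As a rough illustration of such an edge pipeline, here is a minimal C++ sketch with purely placeholder stage implementations (real vendor stacks use far more sophisticated DSP and ML models): filtering, voice activity detection, and keyword detection all run on the device, so no audio ever leaves it.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

// Hypothetical sketch of an on-device (edge) voice pipeline:
// noise reduction -> voice activity detection -> keyword detection,
// with no data exchanged with the cloud.
// Every stage below is a simplified placeholder, not a real algorithm.

using Frame = std::vector<float>;   // one block of audio samples

// Stage 1: crude noise reduction - zero out very quiet samples.
Frame denoise(Frame frame) {
    for (float& s : frame) {
        if (s > -0.01f && s < 0.01f) s = 0.0f;
    }
    return frame;
}

// Stage 2: voice activity detection - compare frame energy to a threshold.
bool has_voice(const Frame& frame) {
    float energy = 0.0f;
    for (float s : frame) energy += s * s;
    return energy / frame.size() > 0.001f;
}

// Stage 3: keyword detection - stands in for an offline recognizer
// running on the device (e.g. a small model on a Cortex-M4 class MCU).
bool detect_keyword(const Frame& frame) {
    return frame.front() > 0.5f;    // placeholder decision rule
}

int main() {
    // Fake microphone input; on real hardware this comes from an ADC/I2S driver.
    Frame frame(256);
    for (std::size_t i = 0; i < frame.size(); ++i) {
        frame[i] = static_cast<float>(std::rand()) / RAND_MAX;
    }

    frame = denoise(frame);
    if (has_voice(frame) && detect_keyword(frame)) {
        std::printf("keyword detected on-device, no cloud round trip needed\n");
    } else {
        std::printf("no keyword in this frame\n");
    }
    return 0;
}
```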

Contrary to previous years, when the focus was on Alexa-ready solutions that integrated only keyword detection while all the logic was handled in the cloud, this year the focus moved to better utilization of the edge hardware.

Expectations for next year relate to solutions based on direct hardware execution of sound and voice algorithms, using the ML engines that are already integrated into available MCU/SoC cores. We assume this cooperation between silicon companies and processing-library providers will lead to high-quality VUI services able to run on very constrained MCU hardware at a significantly reduced cost, which will be a great step forward.