Long-term security for IoT devices

10 February 2021

Quite often, IoT devices used for potentially critical applications lack standard security and protection features. How can IoT devices be prepared in advance to face future threats?

Three practical design approaches

IoT applications are increasingly present in our daily lives – from industrial robots and precision instruments to self-driving cars and autonomous drones. Many of these applications already have an impact on the privacy, protection, and safety of their users.
Since the cost of failure can be high in some cases, it is essential to build devices that comply with the relevant standards. Yet in the agile world of IoT development, compliance requirements often arise only later, when the code has already been written and verified.

Without a doubt, it is better to include all compliance activities in the software design from the beginning, but rigorous development processes can hurt time to market, especially without advanced automation and tooling. Few developers run comprehensive tests that simulate and document every possible situation, and agile, fast-moving teams often cannot afford to lose momentum integrating compliance features that may only be needed in the future. Most of the time, the motto is: "release a preliminary version now, expand later."
Teams that take this route discover the hard way that the total cost of retrofitting compliance is orders of magnitude greater than the outlay required to develop a compliant product from the start.

So what steps can you take today with a little extra effort to prepare to meet tomorrow’s stringent compliance requirements?

Rule no. 1: get an overview of the technical gap

It is important to understand where the project stands today. The technical gap is the cost of potential rework caused by code complexity, combined with any violations of coding standards and IoT security requirements still present in the code. The gap manifests as the later need to clean, repair, and retest that code. One way to assess the current state of a project is automated static code analysis, which provides insight into the quality and security of a code base and reports any coding-standard violations.
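To make this concrete, here is a hypothetical firmware snippet (the function names and buffer size are illustrative, not from any specific code base) showing the kind of defect a static analyzer typically flags, together with a compliant rewrite:

```c
#include <stdio.h>
#include <string.h>

#define ID_LEN 16  /* illustrative fixed-size device-ID buffer */

/* Flagged by static analysis: strcpy() overflows dest if src is too long. */
void set_device_id_unsafe(char dest[ID_LEN], const char *src) {
    strcpy(dest, src);  /* potential buffer overflow, no bounds check */
}

/* Compliant rewrite: bounded copy with guaranteed NUL termination. */
void set_device_id_safe(char dest[ID_LEN], const char *src) {
    snprintf(dest, ID_LEN, "%s", src);  /* truncates instead of overflowing */
}
```

Both versions compile cleanly, which is exactly why relying on the compiler alone is not enough: only the analyzer reports the first variant as a defect.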

Unfortunately, many teams developing embedded applications in C or C++ still rely on their compilers or on manual code reviews to find vulnerabilities, rather than using static analysis.
For a variety of reasons, some teams struggle to introduce static analysis tools. They may find that the tools are "noisy" and hard to use, or they may be unable to fit them into the development process because of urgent day-to-day problems. A common (and incorrect) assessment is that the time needed to decide which rule violations are worth fixing outweighs the actual benefit of eliminating the problem.

We found that teams that followed even a small set of critical, mandatory rules spent far less time reworking code when they faced functional IoT safety audits at a later stage of the project. Safe and secure systems are easier to build from scratch when security is considered from the beginning (for example, by following the CERT C Secure Coding Guidelines). CERT's sophisticated prioritization scheme lets you start small: severity, likelihood, and remediation cost are each rated on three levels, giving 27 priority levels in total.

When using modern tools from experienced vendors, you can read the compliance status off a preconfigured dashboard. Some standards measure cyclomatic complexity and require it to stay below a certain threshold. Complexity metrics can also be used to estimate the testing effort: for example, the number of test cases required to demonstrate 100% branch coverage for compliance with IEC 61508 SIL 2 may be proportional to the cyclomatic (McCabe) complexity of a function.
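The link between complexity and test effort can be sketched with a small example (the function and its purpose are hypothetical): two decision points give a McCabe complexity of 3, and full branch coverage correspondingly needs three test cases, one per outcome.

```c
/* Hypothetical sensor-range check: two if-decisions => cyclomatic
 * complexity 3 (decisions + 1). Branch coverage therefore needs inputs
 * that exercise all three outcomes: below, within, and above range. */
typedef enum { RANGE_LOW = -1, RANGE_OK = 0, RANGE_HIGH = 1 } range_t;

range_t classify_reading(int value, int lo, int hi) {
    if (value < lo)   /* decision 1 */
        return RANGE_LOW;
    if (value > hi)   /* decision 2 */
        return RANGE_HIGH;
    return RANGE_OK;
}
```

Keeping functions this small keeps both the complexity metric and the required test count low, which is what the dashboard threshold is meant to enforce.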

Static analysis also helps companies understand their technical gap. To this end, it collects data points that support management in matters of safety and IoT security compliance. Managers can easily get answers to important questions like these:

What is the current status? How many coding-standard violations, critical or not, are currently present in my code base?

Trend data: are new and resolved violations reported in each build? Are we getting better or worse?

How complex is my code currently? Is the complexity increasing?


So start with very simple things. Once the team is familiar with the most critical errors, the range of recorded standard violations can be expanded. No rule set is carved in stone; what matters is deciding which rules fall under the project's coding standard and which do not. As a minimum, adopting the set of mandatory rules from important coding standards (for example, the MISRA mandatory rules or CERT C) can ensure that future IoT safety and security arguments for a networked device are easier to make.

Rule no. 2: set up a qualifiable test framework and measure code coverage

Most pragmatic developers will agree that blindly writing unit tests for every function is of little use. It is, however, a reasonable investment to give the team access to a unit-test framework as part of the project toolbox. Unit tests pay off whenever developers need to test complex algorithms or data manipulations in isolation. There is another decisive advantage: as companies have told us, simply writing and running unit tests helps make the code more robust and of better quality.
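As a minimal sketch of what such a framework looks like (the macro and function under test are illustrative; a real project would adopt a qualifiable commercial or certified tool), an assert-style harness can be only a few lines of C:

```c
#include <stdio.h>

/* Minimal unit-test harness sketch: counts checks and reports failures.
 * Names (CHECK, tests_run) are illustrative, not a specific framework. */
static int tests_run = 0, tests_failed = 0;

#define CHECK(cond) do {                                         \
    tests_run++;                                                 \
    if (!(cond)) {                                               \
        tests_failed++;                                          \
        printf("FAIL %s:%d: %s\n", __FILE__, __LINE__, #cond);   \
    }                                                            \
} while (0)

/* Function under test: hypothetical two's-complement 8-bit checksum. */
unsigned char checksum8(const unsigned char *buf, int len) {
    unsigned char sum = 0;
    for (int i = 0; i < len; i++)
        sum += buf[i];
    return (unsigned char)(~sum + 1);
}

void test_checksum8(void) {
    const unsigned char msg[] = {0x01, 0x02, 0x03};
    unsigned char c = checksum8(msg, 3);
    /* Adding the checksum to the byte sum must yield zero (mod 256). */
    CHECK((unsigned char)(0x01 + 0x02 + 0x03 + c) == 0);
}
```

Even a harness this small gives the team the habit, the process, and the per-build result recording that a qualifiable framework later formalizes.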

If safety or compliance requirements arrive, a company can quickly scale up its unit-testing effort by temporarily hiring additional staff for the task. For this work to scale quickly, the unit-test framework and the associated process must already be understood and documented across the project.
Common features of a scalable, compliance-ready unit-test framework include:

  • Qualification for intended use under a given safety standard (e.g. via a TÜV or Accredia certificate)
  • Integration into an automated compilation system
  • Documentation of the required code coverage metric
  • Recording of test results and coverage for each build over time
  • Usability across multiple projects and teams

The key takeaway: all test techniques required by a future IoT security standard should be in place, even if only to a minimal extent. If certification becomes necessary, it is far easier to scale up an existing process than to start from scratch.

Rule no. 3: isolate critical functionality

There are many factors to consider when designing embedded systems, such as simplicity, portability, maintainability, scalability, and reliability, while striking an appropriate balance between latency, throughput, energy consumption, and footprint requirements. When designing a system that may one day be connected to a large IoT ecosystem, many teams do not prioritize safety criteria over these other quality factors.

To simplify future security compliance (and adhere to good architectural practice), individual components can be separated in space and time. For example, you can design a system in which all critical operations run on a separate, dedicated CPU while all non-critical operations run on another, providing spatial separation. Another option is a separation-kernel hypervisor, or the use of microkernel concepts. There are further options, but the essential point is to apply separation of concerns, defense in depth, and mixed-criticality architecture principles as early as possible.
These concepts not only reduce the rework required to comply with safety and security standards, they also increase the quality and resilience of the application. At the source level, critical code can be isolated by:

  • Files
  • Modules
  • Directories
  • Libraries

At execution level:

  • Threads, RTOS tasks, hypervisor partitions
  • CPU cores, separate CPUs
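At the module level, isolation boils down to a narrow, validated interface between non-critical and critical code. The sketch below (module and function names are hypothetical) shows the pattern: internal state is `static` and invisible outside the critical module, and every command is checked at the boundary.

```c
/* Sketch of module-level isolation. In a real project the declarations
 * would live in a header (e.g. critical_motor.h) exporting only these
 * symbols; everything else stays private to the critical module. */
#define MOTOR_MAX_RPM 3000

int motor_set_rpm(int rpm);   /* returns 0 on success, -1 if rejected */
int motor_get_rpm(void);

/* Internal state: static, so non-critical code cannot touch it directly. */
static int current_rpm = 0;

int motor_set_rpm(int rpm) {
    if (rpm < 0 || rpm > MOTOR_MAX_RPM)
        return -1;            /* reject out-of-range commands at the boundary */
    current_rpm = rpm;
    return 0;
}

int motor_get_rpm(void) { return current_rpm; }
```

Because the critical surface is just two functions, a future audit only has to verify this module and its interface, not every caller in the non-critical code.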

In conclusion, separating critical functions from non-critical functions reduces the scope of the future verification measures needed to demonstrate compliance.
