Safety and security coding standards for the C language

Coding standards, including Motor Industry Software Reliability Association (MISRA) C and CERT C, exist to serve the safety and security markets for C language development. This article describes the requirements of these markets and provides background on these coding standards and the C language. It then compares MISRA C and CERT C, and discusses the possibility of a single standard that addresses the combined safety and security market.

Aug 21, 2017

Author(s): Robert C. Seacord

Introduction

The C Standards Committee has kept as a major goal the preservation of the traditional spirit of C, summarised in phrases such as:

  (a) Trust the programmer.
  (b) Do not prevent the programmer from doing what needs to be done.
  (c) Keep the language small and simple.
  (d) Provide only one way to do an operation.
  (e) Make it fast, even if it is not guaranteed to be portable.

The C programming language serves a variety of markets including safety-critical systems and secure systems. While advantageous for system-level programming, facets (a) and (b) can be problematic when developing safe/secure systems in C. Consequently, the C11 revision added a new facet to the original list:

  (f) Make support for safety and security demonstrable.

One element of developing safe/secure systems in C is the development and adoption of appropriate coding standards. The safety-critical systems market is primarily served by The Motor Industry Software Reliability Association (MISRA), a UK-based collaboration between manufacturers, component suppliers, and engineering consultancies. The security market is primarily addressed by The CERT C Coding Standard [1,2] written by the author of this article and published by Addison-Wesley.

The automotive and aerospace industries are major consumers of coding standards for safety-critical systems. At many organisations, safety-critical code is written in C [3]. With C’s long history, there is extensive tool support for the language, including strong source code analysers, logic model extractors, metrics tools, debuggers, test support tools, and a choice of mature, stable compilers. The safety community traditionally constrains development to a subset of the C language that is considered less prone to error. These language subsets are influenced by the IEC 61508 series of international standards for electrical, electronic, and programmable electronic safety-related systems [4]. These standards support the assessment of risks to minimise failures in all E/E/PE safety-related systems, irrespective of where and how they are used. ISO 26262 [5] is an adaptation of IEC 61508 for automotive electric/electronic systems that has been widely adopted by the major automotive manufacturers. The DO-178C ‘Software Considerations in Airborne Systems and Equipment Certification’ standard published by the Radio Technical Commission for Aeronautics (RTCA) [6] is used by certification authorities such as the FAA, EASA, and Transport Canada to approve all commercial software-based aerospace systems. Rather than placing all of the new guidance in DO-178C, the RTCA placed the vast majority of it in six other documents, including RTCA DO-332: Object-Oriented Technology and Related Techniques Supplement to DO-178C and DO-278A [7]. The Joint Strike Fighter Air Vehicle C++ Coding Standard (JSF++) [8] has also influenced the development of safe C language subsets. JSF++ provides guidance to C++ programmers to produce safe, reliable, testable, and maintainable code. Its rules are intended for air vehicle C++ development and suggested for non-air vehicle C++ development.
Since the C and C++ languages maintain a high degree of compatibility, JSF++ is relevant to C coding standards, particularly with respect to features of the language that should not be used. The overall JSF++ philosophy is essentially an extension of C++’s philosophy with respect to C: provide safer alternatives to unsafe facilities to avoid known problems with low-level features.

The security community serves a broader market in that there is generally no concept of a security-critical system outside of government high-assurance systems. Security is more often an attribute of applications and systems whose primary purpose is to deliver functionality, and for which security is typically one of several system qualities that may be traded off against other qualities such as performance and usability. These applications frequently make use of the whole language, including dynamic memory, which makes subsetting the language too costly to consider.

Safety engineering traditionally excludes malevolent behaviour, but recent attacks on automobiles [9–11] have demonstrated how remote attackers can control the cyber-physical systems in automobiles and have raised concerns that vulnerabilities in automotive systems [12] can be exploited to jeopardise system safety. Consequently, automotive manufacturers are increasingly motivated to adopt coding standards that address both safety and security concerns. Several evolving standards aim at addressing these concerns. SAE J3061 [13] defines a cybersecurity process framework and provides guidance to help organisations identify and assess cybersecurity threats and design cybersecurity into cyber-physical vehicle systems throughout the entire development lifecycle process. SAE J3061 recommends both MISRA C and CERT C for guidance on avoiding vulnerabilities and unpredictable behaviour in the software. ISO/TC 22 Road Vehicles is also proposing a new standard on vehicle cybersecurity engineering based on SAE J3061 [14].

The remainder of this article is organised as follows: the ‘MISRA C’ section provides an overview of MISRA C, including its provenance, status, and future direction, and describes how compliance to MISRA C is determined. The ‘CERT C coding standard’ section provides a similar treatment for CERT C. The ‘Comparison’ section compares MISRA C and CERT C, and the ‘Conclusions’ section presents conclusions.

MISRA C

The MISRA C Guidelines define a subset of the C language that reduces the opportunities for mistakes. The first edition of MISRA C, ‘Guidelines for the use of the C language in vehicle based software’ [15], was published in 1998 to provide a restricted subset of C to meet the requirements of IEC 61508 Safety Integrity Level 2 and above. Since that time, MISRA C has been adopted by a wide variety of industries and applications including the rail, aerospace, military, and medical sectors. The second edition, known as MISRA C: 2004 [16], is titled ‘Guidelines for the use of the C language in critical systems’. The first two editions of MISRA were based on C90 [17]. MISRA C: 2012 [18] extends support for C99 [19] while maintaining guidelines for C90.

Provenance

MISRA started in the early 1990s as a project in the UK government’s SafeIT programme. This programme funded projects across a broad range of industries concerned with safety-related electronic systems. The MISRA project was conceived to develop guidelines for the creation of embedded software in road vehicle electronic systems. In November 1994, development guidelines for vehicle-based software were published. Once the official funding had finished, the MISRA members continued to work together on an informal basis. Today, the MISRA Consortium is coordinated by a steering committee of the member companies. The project management has been provided by MIRA Limited, a for-profit organisation.

Compliance

MISRA C guidelines are classified as either being a rule or a directive. Rules are completely described and checkable by static analysis. Directives are partially described, but require knowledge of programmer intent to evaluate compliance. MISRA C also defines a deviation procedure for cases where it is necessary to deviate from the guidelines. MISRA C guidelines are further categorised as mandatory, required, or advisory. Mandatory guidelines are required for conformance, and deviations are not permitted. Required guidelines are also required for conformance, though a formal deviation is allowed. Advisory guidelines should be followed as far as is reasonably practical, and any non-compliance should be documented. For a software system to claim to be compliant with the MISRA C Guidelines, all mandatory rules must be met, and all required rules and directives must either be met or subject to a formal deviation. For compliance purposes, there is no distinction between rules and directives.

Status and future directions

In April 2016, MISRA published two documents to address software security risks. MISRA C: 2012 Addendum 2 [20] maps the coverage of MISRA C: 2012 against ISO/IEC TS 17961: 2013 to justify the viewpoint that MISRA C is applicable in both a security-related and a safety-related environment. MISRA C: 2012 Amendment 1 [21] defines 14 additional MISRA security rules, mostly in the area of the hosted implementation that MISRA C has not traditionally considered (see ‘Comparison’ section). The MISRA Consortium is also working on a MISRA C: 2012 Technical Corrigendum 1 that it plans to release as a standalone report in 2016. Additionally, MISRA is developing a CERT C coverage matrix, though it has no plans to directly address the CERT C rules.

CERT C coding standard

The CERT C Secure Coding Standard was developed at the request of, and in concert with, the C Standards Committee [22]. The first edition, also known as CERT C: 2008 [1], was published on 14 October 2008. CERT C: 2008 provided guidance to programmers in the secure use of the C language and specifically supported C99 [19]. After the publication of CERT C: 2008, the C Standards Committee established a study group to produce analysable secure coding guidelines for the C language. The study group first met on 27 October 2009, and in 2013 published ISO/IEC TS 17961 Information Technology – Programming Languages, Their Environments and System Software Interfaces – C Secure Coding Rules [23]. ISO/IEC TS 17961 establishes a baseline set of requirements, beyond the requirements of the language standard, for analysers (including static analysis tools and C language compilers) to diagnose insecure code. These rules must be enforceable by static analysis, and analysers that implement these rules must be able to effectively discover secure coding errors without generating excessive false positives. The second edition of The CERT C Coding Standard [2] was updated to support C11 [24] and to align with ISO/IEC TS 17961. Published in 2014, it is also known as CERT C: 2014.

Provenance

Both editions of The CERT C Coding Standard were authored by Robert C. Seacord while employed by the CERT Division of the Software Engineering Institute at Carnegie Mellon University. Addison-Wesley owns the copyright.

ISO/IEC TS 17961 is the only coding standard referenced in this article that was developed by an official standards organisation and is currently maintained by the C Standards Committee. A second edition, ISO/IEC TS 17961: 2016, that cancels and replaces the first edition, is currently being prepared by ISO/IEC for publication.

Compliance

CERT C: 2008 contains both secure coding rules and recommendations. Rules are normative requirements for a conforming system. Recommendations are additional guidance. A violation of a rule indicates a defect in the program and a possible security vulnerability. Violations of recommendations do not necessarily indicate a defect in the code and are consequently not required for conformance to the standard. Because the inclusion of both rules and recommendations in CERT C: 2008 caused some confusion, the recommendations were eliminated from CERT C: 2014 and are not currently maintained.

Status and future directions

A new edition of the coding standard is being derived from CERT C: 2014 to address the needs of the combined safety and security community. Additionally, the coding standard is being updated to address the evolution of the C language resulting from the defect report and technical corrigendum processes.

Comparison

This section provides a comparison between MISRA C: 2012 and CERT C: 2014. Table 1 summarises the main features of the CERT and MISRA coding standards, as well as ISO/IEC TS 17961. At an initial glance, these coding standards are quite different: they have different provenance, serve different markets, and apply to different editions of the C standard. Consequently, they support substantially different versions of the language.

Coding standard     C standard   Security standard   Safety standard   International standard   Whole language
MISRA C:2004        C90          no                  yes               no                       no
MISRA C:2012        C99          no                  yes               no                       no
CERT C:2008         C99          yes                 no                no                       yes
CERT C:2014         C11          yes                 no                no                       yes
ISO/IEC TS 17961    C11          yes                 no                yes                      yes
Table 1: Key features of coding standards

Hosted versus freestanding implementations

The C Standard supports two forms of conforming implementations: hosted and freestanding. In a freestanding environment, program execution may take place without the benefit of an operating system, as is common in low-end embedded systems. MISRA C has no library-specific restrictions on the subset of headers required in freestanding implementations, but places major restrictions and prohibitions on many of the remaining standard headers in hosted implementations. CERT C fully supports both hosted and freestanding environments.

Dynamic memory allocation

Dynamic memory allocation is a complex issue and a key differentiator between MISRA C and CERT C. For MISRA C, Directive 4.12 and Rule 21.3 require that memory not be dynamically allocated. Memory allocation is frequently disallowed in safety-critical systems because malloc and garbage collectors often have unpredictable behaviour that can significantly impact performance [3]. CERT C allows dynamic memory management, which is widely used in critical infrastructure systems and other applications. Errors resulting from the misuse of dynamic memory allocation facilities in the C language are addressed through a variety of rules including: ‘MEM30-C. Do not access freed memory’, ‘MEM31-C. Free dynamically allocated memory when no longer needed’, and ‘MEM34-C. Only free memory allocated dynamically’.
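The discipline behind these CERT rules can be sketched as follows. This is a minimal illustration, not code from the CERT standard; the function names are invented for the example.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of the defect class targeted by MEM30-C, MEM31-C,
 * and MEM34-C: heap memory has a single clear owner, is released exactly
 * once, and the pointer is nulled so freed memory cannot be accessed. */
char *dup_message(const char *src) {
    size_t len = strlen(src) + 1;
    char *copy = malloc(len);
    if (copy == NULL) {
        return NULL;           /* always check the allocation result */
    }
    memcpy(copy, src, len);
    return copy;               /* caller owns and must free (MEM31-C) */
}

void release_message(char **msg) {
    free(*msg);                /* MEM34-C: only free heap pointers */
    *msg = NULL;               /* defends against use-after-free (MEM30-C) */
}
```

Nulling the pointer after free is a common mitigation pattern: a later accidental dereference faults immediately instead of reading freed storage.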

For airborne systems, DO-178C and DO-332 allow object-oriented technology and dynamic memory allocation, but stress the importance of having verification activities to ensure that dynamic memory allocation is performed correctly. Verification activities include showing that the dynamic memory management is robust to memory exhaustion and unbounded allocation or deallocation times. JSF++ AV Rule 206 revises the MISRA rule by prohibiting dynamic memory allocation and deallocation after initialisation. In layman’s terms, malloc is ok until the plane takes off.

Though MISRA C disallows dynamic memory, it does describe steps to be taken if a decision is made to use it. MISRA is considering relaxing the restrictions on the use of dynamic memory to permit its use during the initialisation phase [25].
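The "allocation only during initialisation" discipline described by JSF++ AV Rule 206 (and under consideration by MISRA) might be sketched as below. The flag-based scheme and function names are illustrative assumptions, not taken from either standard.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch: all heap allocation occurs before the system
 * enters its operational phase; once initialisation ends, allocation
 * requests are refused, so run-time behaviour stays predictable. */
static int init_phase_over = 0;

void end_init_phase(void) {
    init_phase_over = 1;       /* e.g. called when the plane takes off */
}

void *init_alloc(size_t n) {
    if (init_phase_over) {
        return NULL;           /* no dynamic allocation once operational */
    }
    return malloc(n);
}
```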

The only way to dynamically claim memory in the absence of memory allocation from the heap is to use stack memory [3]. MISRA Required Rule 18.8 prohibits variable-length arrays. Variable-length arrays have a size that is not an integer constant expression; their storage is typically allocated on the stack in a manner similar to the non-standard alloca function. CERT Rule ARR32-C, ‘Ensure size arguments for variable-length arrays are in a valid range’, allows variable-length arrays but ensures that the allocations are bounded. Any violation of CERT Rule ARR32-C would be diagnosed by the stricter MISRA rule. Which rules are correct for safety and security is arguable. When properly bounded, variable-length arrays are perfectly safe. Since MISRA disallows dynamic memory allocation, a conforming MISRA program might be required to allocate a statically sized stack array of the maximum size that might be required, regardless of how much storage is actually needed. Excessively large stack allocations could, in some cases, lead to stack exhaustion. In the absence of recursion, an upper bound on the use of stack memory can be derived statically, making it possible to prove that an application will always live within its pre-allocated memory. Characteristically, the CERT standard disallows the specific error that causes undefined behaviour, while the MISRA rules disallow the use of programming constructs that are simply prone to misuse.
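The bounding check behind ARR32-C can be sketched as follows; the function name, the bound, and the error convention are assumptions made for the example.

```c
#include <stddef.h>
#include <string.h>

#define MAX_SCRATCH 256   /* illustrative upper bound on the VLA size */

/* Sketch of CERT ARR32-C: validate the size argument before the
 * variable-length array is created, so stack usage is bounded.
 * Returns -1 (here conflated with data for brevity) on a bad size. */
int sum_first_n(const int *data, size_t n) {
    if (n == 0 || n > MAX_SCRATCH) {
        return -1;                     /* reject out-of-range VLA sizes */
    }
    int scratch[n];                    /* VLA: size checked above */
    memcpy(scratch, data, n * sizeof scratch[0]);
    int sum = 0;
    for (size_t i = 0; i < n; ++i) {
        sum += scratch[i];
    }
    return sum;
}
```

A MISRA-conforming version would instead declare `int scratch[MAX_SCRATCH];` unconditionally, trading the check for a fixed worst-case allocation.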

Structures and unions

MISRA Rule 19.2 advises against the use of unions in conforming programs because of issues with padding, alignment, endianness, and bit-order. Though advisory, these guidelines are meant to be followed as far as is reasonably practical. CERT C allows the use of unions and deals more precisely with undefined behaviours through a series of rules, including ‘EXP39-C. Do not access a variable through a pointer of an incompatible type’ and ‘EXP42-C. Do not compare padding data’, that eliminate the undefined behaviours associated with unions and allow for their safe and secure use in C programs.
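One common way to use unions safely, consistent with the CERT approach of eliminating the dangerous accesses rather than the construct, is a tagged union: a discriminant records which member is active, so readers never reinterpret the representation of the wrong member. The type and field names here are illustrative.

```c
#include <assert.h>

/* A tagged union: the kind field records the active member, so code
 * never reads a member other than the one last stored. */
enum value_kind { KIND_INT, KIND_FLOAT };

struct value {
    enum value_kind kind;
    union {
        int   i;
        float f;
    } u;
};

int as_int(const struct value *v) {
    assert(v->kind == KIND_INT);   /* only read the active member */
    return v->u.i;
}
```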

Confidentiality, integrity, and availability, also known as the CIA triad, is a model designed to guide policies for information security within an organisation [26]. A loss of confidentiality is the unauthorised disclosure of information. A loss of integrity is the unauthorised modification or destruction of information. A loss of availability is the disruption of access to or use of information or an information system. CERT defines an additional rule dealing with the confidentiality of information in structures and unions, ‘DCL39-C. Avoid information leakage when passing a structure across a trust boundary’. DCL39-C deals specifically with passing a pointer to a structure across a trust boundary to a different trusted domain, because the padding bytes and bit-field storage unit padding bits of such a structure might contain sensitive information. Rules such as this one are absent from MISRA, which is not concerned with confidentiality.
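A typical mitigation in the spirit of DCL39-C is to zero the entire object before filling in its members, so that padding bytes carry no stale (potentially sensitive) data when the structure is copied across a trust boundary. The structure and function names below are illustrative assumptions.

```c
#include <string.h>

/* Sketch of a DCL39-C-style mitigation: clear the whole object,
 * including any padding between members, before populating it. */
struct reply {
    char tag;      /* 1 byte, likely followed by padding */
    long value;
};

void make_reply(struct reply *out, char tag, long value) {
    memset(out, 0, sizeof *out);   /* clears padding bytes too */
    out->tag = tag;
    out->value = value;
}
```

In a real kernel-to-user-space copy, serialising the members individually is an even stronger alternative, since it avoids copying padding at all.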

Unreachable code

MISRA Rule 2.1 requires that a project does not contain unreachable code, whereas Rule 16.4 requires that every switch statement have a default label. These two rules can contradict each other. MISRA requires that the default label be used for defensive programming, to catch the case where the controlling expression of the switch statement fails to match a converted case constant expression. When every possible value is matched by a case label, any code following the default label is unreachable. CERT C allows default labels and, consequently, the unreachable code that follows them.

Type selection

MISRA Directive 4.6 advises that types that indicate size and signedness (for example, int16_t and uint32_t) should be used in place of the basic numerical types (for example, char and int). If followed, this advice can result in portability issues. For example, it is important that character data be represented as char for compatibility with existing string library functions. The rationale for Directive 4.6 identifies situations in which following this advice is counterproductive, along with a short, but incomplete list of exceptions. CERT C takes a different approach to this problem, defining rules such as ‘INT31-C. Ensure that integer conversions do not result in lost or misinterpreted data’ that focus on conversions that result in unrepresentable values.
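A check in the spirit of INT31-C might look like the following; the function name and error convention are assumptions made for the example.

```c
#include <limits.h>

/* Sketch of CERT INT31-C: verify that a wider signed value is
 * representable in the destination type before converting, instead of
 * relying on implementation-defined narrowing behaviour.
 * Returns 1 on success, 0 if the conversion would lose data. */
int long_to_int(long value, int *out) {
    if (value < INT_MIN || value > INT_MAX) {
        return 0;      /* conversion would lose or misinterpret data */
    }
    *out = (int)value;
    return 1;
}
```

This reflects the difference in approach: MISRA Directive 4.6 constrains which types may be declared, while INT31-C constrains the conversions between whatever types the program uses.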

Conclusions

Expanding requirements for safety and security can best be served by a combined safety and security standard for the C programming language. One approach to producing such a standard would be to start with MISRA C: 2012 as a base and extend it to include rules for security. A second approach would be to further restrict The CERT C Coding Standard to address safety concerns. CERT already contains detailed coding rules on the safe/secure use of dynamic memory management that would allow such restricted use of dynamic memory. In general, because of its use of a greater number of more precise rules, The CERT C Coding Standard can be more easily adapted to serve a variety of markets.

Acknowledgements

Thanks to the following reviewers: Rob Wood, Ollie Whitehouse, Paul Ashton, Graham Bucholz, Daniel Mayer, Thomas Plum, and Jeremy Brandt-Young.

References

  1. Seacord R.: ‘The CERT C secure coding standard’ (Addison-Wesley, 2008).
  2. Seacord R.: ‘The CERT C coding standard, second edition: 98 rules for developing safe, reliable, and secure systems’ (Addison-Wesley, 2014).
  3. Holzmann G.J.: ‘The power of 10: rules for developing safety-critical code’, Computer, 2006, 39, (6), pp. 95–97, doi: 10.1109/MC.2006.212. (NASA/JPL Laboratory for Reliable Software.)
  4. IEC 61508:2010: ‘Functional safety of electrical/electronic/programmable electronic safety-related systems’, International Electrotechnical Commission, in 7 parts published in 2010.
  5. ISO/DIS 26262 – Road vehicles – Functional safety. The standard consists of several parts, published in 2011.
  6. DO-178C/ED-12C: ‘Software Considerations in Airborne Systems and Equipment Certification’, RTCA, 2011.
  7. RTCA DO-332: ‘Object-Oriented Technology and Related Techniques Supplement to DO-178C and DO-278A’, December 2011.
  8. Lockheed Martin: ‘Joint Strike Fighter Air Vehicle C++ Coding Standards for the system development and demonstration program’, Document Number 2RDU00001 Rev C., December 2005. Available at http://www.stroustrup.com/JSF-AV-rules.pdf, accessed 22 April 2016.
  9. Checkoway S. McCoy D. Kantor B. et al.: ‘Comprehensive experimental analyses of automotive attack surfaces’. D. Wagner (Chair), SEC’11, Proc. of the 20th USENIX Conf. on Security, San Francisco, CA, 8–12 August 2011. Available at http://www.usenix.org/events/sec11/tech/full_papers/Checkoway.pdf.
  10. Hack the S. Available at http://www.su-tesla.space/2016/04/hack-s.html, accessed 19 April 2016.
  11. Miller C. Valasek C.: ‘Remote exploitation of an unaltered passenger vehicle’, August 2015.
  12. McCarthy C. Harnett K. Carter A.: ‘Characterisation of potential security threats in modern automobiles: a composite modelling approach’. Report no. DOT HS 812 074 , National Highway Traffic Safety Administration, Washington, DC, October, 2014.
  13. SAE J3061 Cybersecurity Guidebook for Cyber-Physical Vehicle Systems. Available at http://standards.sae.org/wip/j3061/.
  14. Road Vehicles – Vehicle Cybersecurity Engineering. Available at standardsproposals.bsigroup.com/Home/Proposal/5410, accessed 21 April 2016 .
  15. MISRA (Motor Industry Software Reliability Association): ‘Guidelines for the use of the C language in vehicle based software’ (MIRA, Nuneaton, UK, 1998), ISBN 978-0-9524156-6-4.
  16. MISRA (Motor Industry Software Reliability Association): ‘ MISRA C: 2004 Guidelines for the use of the C language in critical systems’ (MIRA, Nuneaton, UK, 2004), ISBN 095241564X.
  17. ISO/IEC: ‘Programming languages – C (ISO/IEC 9899:1990)’ (ISO, Geneva, Switzerland, 1990).
  18. MISRA (Motor Industry Software Reliability Association): ‘ MISRA C3: Guidelines for the use of the C language in critical systems 2012’ (MIRA, Nuneaton, UK, 2012), ISBN 978-1-906400-10-1.
  19. ISO/IEC: ‘Programming languages – C (ISO/IEC 9899:1999)’ (ISO, Geneva, Switzerland, 1999, 2nd edn.).
  20. MISRA C: 2012 Addendum 2 Coverage of MISRA C: 2012 against ISO/IEC TS 17961:2013 ‘C Secure’. Nuneaton, UK: HORIBA MIRA, 2016 (ISBN 978-1-906400-15-6).
  21. MISRA C: 2012 Amendment 1. Additional security guidelines for MISRA C: 2012. Nuneaton, UK: HORIBA MIRA, 2016 (ISBN 978-1-906400-16-3).
  22. Seacord R.: ‘C secure coding rules: past, present, and future’. Available at http://www.informit.com/articles/article.aspx?p=2088511, accessed 18 April 2016 .
  23. ISO/IEC TS 17961: ‘Information technology – programming languages, their environments and system software interfaces – C secure coding rules’ (ISO, Geneva, Switzerland, 2012).
  24. ISO/IEC: ‘Programming languages – C (ISO/IEC 9899:2011)’ (ISO, Geneva, Switzerland, 2011, 3rd edn.).
  25. MISRA C – WG14 Liaison Report WG14 Meeting, London 11th–14th April 2016 Andrew Banks. Available at http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2035.pdf, accessed 22 April 2016.
  26. FIPS PUB 199: ‘Standards for Security Categorization of Federal Information and Information Systems’, 2004.
  27. Kazman R. Klein M. Barbacci M. et al.: ‘The architecture tradeoff analysis method’. Technical Report, CMU/SEI-98-TR-008 , Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, 1998.

 
