ASN1C adopts a model of code generation that we feel represents a good compromise between functionality and performance. However, we often handle requests from customers who need a smaller or faster solution due to system or processing constraints. In these cases, some additional steps may be necessary.
See below for some strategies for minimizing the code footprint and maximizing application performance.
Optimizing Code Footprint
Remove Helper Functions
By default, ASN1C generates two types of helper functions that can add bloat for some specifications.
The first type is initialization functions, used to improve performance over calls to memset. They do this by replacing calls to expensive C library functions with a custom initialization routine that sets only the pertinent values inside the generated structures. Use -noinit to prevent the generation of these functions.
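As an illustration of why these helpers exist, compare a memset-based reset with the kind of targeted initializer described above. This is a hedged sketch: the structure and function names below are invented for the example and are not actual generated code.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical structure standing in for ASN1C output; the names and
 * layout are invented for this example. */
typedef struct Person {
    unsigned char m_agePresent;   /* optionality flag */
    const char   *name;
    int           age;
    char          scratch[1024];  /* large buffer that never needs clearing */
} Person;

/* memset-style reset: touches every byte, including the 1 KB buffer. */
static void init_with_memset(Person *p) {
    memset(p, 0, sizeof(*p));
}

/* Generated-style initializer: sets only the fields that must start at
 * a known value, skipping the large buffer entirely. */
static void Person_init(Person *p) {
    p->m_agePresent = 0;
    p->name = NULL;
    p->age = 0;
}
```

The targeted initializer does less work per call; -noinit trades that speed back for a smaller code footprint.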
The second type is enum conversion functions, which provide text-friendly handles for ENUMERATED types in the specification. In specifications loaded with enumerated types, these can take up quite a bit of space. Use -noEnumConvert to remove them.
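For a sense of where the space goes, here is a sketch of the kind of enum-to-text helper involved; the enum and function names are invented for this example. A specification with hundreds of ENUMERATED types means hundreds of functions and string tables like this.

```c
/* Invented enum standing in for a generated ENUMERATED type. */
typedef enum { Color_red = 0, Color_green = 1, Color_blue = 2 } Color;

/* Sketch of a text-conversion helper: each enumerator needs a case
 * and a string literal, which adds up across a large specification. */
static const char *Color_ToString(Color c) {
    switch (c) {
    case Color_red:   return "red";
    case Color_green: return "green";
    case Color_blue:  return "blue";
    default:          return "<unknown>";
    }
}
```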
Remove Encoders or Decoders
In cases where the application needs only to decode or encode messages (e.g., as part of a message consumer or producer pair), removing the unused encoding or decoding functions is a good way to trim down unnecessary code. Use -noencode if needed.
Configuration File Options
We allow some fine-tuning of the generated sources using a configuration file. For example, it’s possible to reduce the number of types for which code is generated. See Compiler Configuration File in our documentation for details.
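As a sketch, a selective-generation entry in such a configuration file might look something like the fragment below. The element and attribute names here are assumptions made for illustration; consult the Compiler Configuration File section of the documentation for the exact syntax.

```xml
<!-- Hypothetical fragment: restrict generation to two PDU types.
     Element and attribute names are illustrative, not authoritative. -->
<asn1config>
   <module name="MyModule">
      <include types="RequestPDU,ResponsePDU"/>
   </module>
</asn1config>
```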
Optimizing Code Performance
Use static memory
The ASN1C runtime uses a custom memory management module that minimizes calls to the built-in allocation functions (such as malloc and free, or new and delete), but sometimes even this isn’t fast enough. Since using the memory heap is expensive relative to allocating memory on the stack, using -static is a good way to minimize heap usage. In practice, this needs to be combined with a configuration file that specifies sane limits on SEQUENCE OF sizes; this permits the use of static arrays instead of dynamic memory allocation.
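The effect of bounding SEQUENCE OF sizes can be sketched in plain C. The type names below are invented for the example; the point is that with a known upper bound, the element storage can live inline in the structure, so decoding needs no heap allocation at all.

```c
/* Dynamic form: the element array is allocated on the heap at decode
 * time, so every decode pays for an allocation (and a later free). */
typedef struct {
    int  count;
    int *elements;      /* heap-allocated */
} DynIntList;

/* Static form: with a configured limit of, say, 16 elements, the
 * array is embedded in the structure itself (stack or static storage),
 * and no heap traffic occurs during decoding. */
typedef struct {
    int count;
    int elements[16];   /* bounded by the configured limit */
} StaticIntList;
```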
Remove unneeded debugging code
ASN1C can generate special trace handlers that allow PER applications to generate a bit dump during decoding (use -trace). While this is a great tool for debugging, it generates a lot of code and slows applications down considerably. (Internal testing suggests that encoding and decoding are up to ten times slower.)
During development it makes sense to turn on bit tracing, but generally speaking such an option should be left off during deployment.
Use streaming encoding and decoding
All of ASN1C’s supported encoding rules, except for DER, support streaming encoding and decoding. Memory access is faster than disk access as a rule, but as soon as the data kept in memory exceed a certain limit, the machine will start to “thrash”: memory pages will be continually written to disk, read back, and written again until the machine seizes up or the program crashes.
Some of our customers have seen a 30% improvement in real-world scenarios when switching to streaming: data are written or read on demand rather than stored in memory. In applications that process large volumes of data, this is often much faster than trying to work exclusively in memory.
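The streaming idea can be sketched generically: process input through a small fixed buffer instead of loading the whole message at once, so memory use stays flat regardless of message size. The stream_input helper and callback type below are invented for this illustration and are not part of the ASN1C API.

```c
#include <stdio.h>

/* Hypothetical callback type: the consumer handles one chunk at a time. */
typedef void (*chunk_handler)(const unsigned char *buf, size_t len, void *ctx);

/* Feed a file through a fixed 4 KB working buffer; memory use does not
 * grow with the size of the input. */
static void stream_input(FILE *in, chunk_handler handle, void *ctx) {
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
        handle(buf, n, ctx);   /* consume data as it arrives */
}

/* Example handler: just counts the bytes seen. */
static void count_bytes(const unsigned char *buf, size_t len, void *ctx) {
    (void)buf;
    *(size_t *)ctx += len;
}
```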
The flexibility of ASN1C means that there are other strategies that can be employed, but these are the most commonly used. As always, feel free to email us with questions if any arise.