Performance in ASN1C

ASN1C adopts a model of code generation that we feel represents a good compromise between functionality and performance. However, we often handle requests from customers who need a smaller or faster solution due to system or processing constraints. In these cases, some additional steps may be necessary.

See below for some strategies for minimizing the code footprint and maximizing application performance.

Optimizing Code Footprint

Remove Helper Functions

Use -noinit and -noEnumConvert.

By default, ASN1C generates two types of helper functions that can add noticeable bloat for some specifications.

The first type is initialization functions, which improve performance over calls to memset: they replace calls to expensive C library functions with a custom initialization routine that sets only the pertinent values inside the generated structures. Use -noinit to prevent the generation of these functions.

The second type is enum conversion functions, which provide text-friendly handles for ENUMERATED types in the specification. In specifications loaded with enumerated types, these can take up quite a bit of space. Use -noEnumConvert to remove them.
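To make the trade-off concrete, here is a minimal sketch of the idea behind a generated-style init function. The structure and function names are hypothetical (real ASN1C output differs); the point is that a targeted initializer touches only the fields that must start at a known value, instead of zeroing every byte the way memset does.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical generated structure -- not actual ASN1C output. */
typedef struct {
    int    present;      /* presence mask for OPTIONAL elements */
    int    id;
    char   name[64];     /* memset would clear all 64 bytes here */
    double samples[256]; /* ...and all 2 KB here, needlessly */
} MyType;

/* Sketch of a generated-style init function: only the fields that must
 * start at a known value are set; large buffers are left untouched
 * because their contents are "don't care" until a length is set. */
static void asn1Init_MyType(MyType *p) {
    p->present = 0;     /* no optional elements present yet */
    p->id = 0;
    p->name[0] = '\0';  /* empty string; rest of the buffer is ignored */
}
```

The footprint cost comes from having one such function per generated type; -noinit trades their speed advantage away to recover that space.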

Remove Encoders or Decoders

Use -nodecode or -noencode if needed.

In cases where the application only needs to decode or encode messages (e.g., as one half of a message consumer/producer pair), removing the unused encoding or decoding functions is a good way to trim unnecessary code.
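As a usage sketch (the schema file name is made up; check the compiler documentation for the exact option spelling on your version), a consumer/producer pair might be built like this:

```
asn1c myproto.asn -c -noencode    # consumer side: generate decoders only
asn1c myproto.asn -c -nodecode    # producer side: generate encoders only
```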

Configuration File Options

See Compiler Configuration File in our documentation.

We allow some fine-tuning of the generated sources using a configuration file. For example, it’s possible to reduce the number of types for which code is generated by using the <include> or <exclude> directives.
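A configuration fragment along these lines restricts generation to a handful of types. The module and type names here are placeholders, and the exact element and attribute spellings should be checked against the Compiler Configuration File documentation:

```
<asn1config>
   <module name="MyModule">
      <!-- generate code only for these types (and what they reference) -->
      <include types="Request,Response"/>
   </module>
</asn1config>
```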

Optimizing Code Performance

Use static memory

Use -static. Use a configuration file.

We use a custom memory management module in the ASN1C runtime that helps to minimize calls to malloc and free (or new and delete), but sometimes even this isn’t fast enough.

Since using the memory heap is expensive relative to allocating memory on the stack, using -static is a good way to minimize heap usage. In practice, this needs to be paired with a configuration file that specifies sane limits on SEQUENCE OF sizes; this permits the use of static arrays instead of dynamically allocated ones.
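One way to supply such a limit is in the schema itself, via an ASN.1 SIZE constraint; with a bounded size and -static, the compiler can emit a fixed-size array rather than a pointer that must be heap-allocated at decode time. The type below is an illustrative example, not taken from any real specification:

```
-- A bounded SEQUENCE OF: the upper bound lets the generated C code
-- use a fixed-size array instead of dynamic allocation.
ReadingList ::= SEQUENCE (SIZE (1..16)) OF INTEGER
```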

Remove unneeded debugging code

ASN1C can generate special trace handlers that allow PER applications to generate a bit dump during decoding (use -trace). While this is a great tool for debugging, it generates a lot of code and slows applications down considerably. (Internal testing suggests that encoding and decoding are up to ten times slower.)

During development it makes sense to turn on bit tracing, but generally speaking such an option should be left off during deployment.

Use streaming

All of ASN1C’s supported encoding rules, except for DER, support streaming encoding and decoding. Memory access is faster than disk access as a rule, but as soon as the data kept in memory exceed a certain limit, the machine will start to “thrash”: memory pages will be continually written to disk and read back and written again until the machine seizes up or the program crashes.

Some of our customers have seen a 30% improvement in real-world scenarios when switching to streaming: data are written or read on demand rather than stored in memory. In applications that process large volumes of data, this is often much faster than trying to work exclusively in memory.
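The underlying idea is the same one shown in this generic C sketch (this is not the ASN1C streaming API, just an illustration of the pattern): input is consumed in small fixed-size chunks, so memory use stays constant no matter how large the total message is.

```c
#include <stdio.h>

#define CHUNK_SIZE 4096

/* Generic streaming illustration: read and process a file in fixed-size
 * chunks. The checksum stands in for incremental decoding work; memory
 * use is bounded by CHUNK_SIZE regardless of the input's total size. */
static unsigned long stream_checksum(FILE *in) {
    unsigned char buf[CHUNK_SIZE];
    unsigned long sum = 0;
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        for (size_t i = 0; i < n; i++)
            sum += buf[i];
    }
    return sum;
}
```

Contrast this with a load-everything approach, where the whole input must fit in memory before any work can begin; past a certain size that is exactly what triggers the thrashing described above.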

The flexibility of ASN1C means that there are other strategies that can be employed, but these are the most commonly used. As always, feel free to email us with questions if any arise.

3 thoughts on “Performance in ASN1C”

  1. nir

    What about disabling/enabling decoding of open types at runtime? I have noticed a 10% degradation in performance when generating Java code using the -tables option. When you are selective about the open types your application needs, you would like to generate the code with the -tables option for debugging and string-generation purposes, but when you do real applicative work it is preferable to choose which open types to decode based on your needs, in order to save those 10% or more.

  2. Ethan Post author

    That’s a good point, Nir. Our customers don’t often use fine-grained open-type decoding, but it can definitely speed up processing when it fits the application.

    Thanks for the comment!

  3. Ethan Post author

    Just a note to readers, we have implemented Nir’s suggestion in ASN1C 6.4 and will include it as the default behavior in version 6.5.
