Thoughts on the current state of ASN.1 and XML technologies.


Viewing Huawei IMS CDRs in ASN1VE (Updated)

This is an update to the original blog post on this topic, published on June 28, 2011 at http://www.obj-sys.com/blog/?p=355. At the time, ASN1VE did not support processing the 3GPP TS 32.297 CDR headers contained in the Huawei IMS CDRs mentioned in that post. Although it is still possible to skip these headers, doing so is unreliable because the headers may be of variable length. The updated procedure for viewing these CDR files with a newer version of ASN1VE is as follows:

  1. Open the CDR file. The “Assign All Items Wizard” popup window will appear:

    [screenshot: the “Assign All Items Wizard” popup]

    Select the “cdr” option (not “ber”) and click Next.

  2. The second wizard popup will appear:

    [screenshot: the second wizard popup]

    On this popup, select the “3GPP TS 32.297 Headers” radio button and click Next.

The normal procedure for assigning an ASN.1 schema file can be followed from that point forward. The result is a display of the CDR file with the headers fully decoded.

An example of this can be found in the sample/ts32297 directory within an ASN1VE installation.


Optimizing PER Encoding and Code Footprint

PER encodings are designed to be compact: they are schema-informed and bit-packed, which permits significant compression across the wire. Unfortunately, that compactness also eliminates the regular tag-length-value structure of BER encodings, making it nearly impossible to improve throughput by skipping past unneeded elements. Moreover, the code footprint for many specifications is quite large, making it difficult to deploy protocol stacks on constrained devices.
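
To make the contrast concrete, here is a minimal sketch in plain C (no ASN1C runtime calls; single-byte tags and definite lengths only) of how a BER reader can hop over an element it does not need just by reading the tag and length octets. A PER decoder has no such shortcut: field boundaries are only known by decoding everything that precedes them against the schema.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative BER TLV skipper (single-byte tags and definite lengths
     * only).  Because every BER element carries its own tag and length, a
     * reader can jump past elements it does not need.  Returns the offset
     * just past the element, or 0 on error. */
    size_t skip_ber_tlv(const uint8_t *buf, size_t len)
    {
        size_t pos = 0;
        uint32_t vlen = 0;

        if (len < 2) return 0;
        pos++;                                /* skip the single-byte tag  */

        if (buf[pos] < 0x80) {                /* short-form length         */
            vlen = buf[pos++];
        } else {                              /* long-form length          */
            unsigned nbytes = buf[pos++] & 0x7Fu;
            if (nbytes == 0 || nbytes > 4 || nbytes > len - pos) return 0;
            while (nbytes--) vlen = (vlen << 8) | buf[pos++];
        }

        if (vlen > len - pos) return 0;       /* truncated element         */
        return pos + vlen;                    /* start of the next element */
    }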

This is a problem we’ve thought about along with our customers, and we have provided a few different features in ASN1C as a result. The ASN1C Compiler Configuration File is instrumental in achieving these optimizations. Following is a deep dive into the various configuration options that can be used.

Read the rest of this entry »


ASN1C Performance and Benchmarking

We’re commonly asked to provide benchmarking results to prospective customers who are concerned about processing data in a timely fashion. Bearing in mind Mark Twain’s famous dictum about statistics, we’d like to blog briefly about the complexities of ASN.1 benchmarking; we provide a link to a paper below with some extended discussion.

Benchmarking software performance can be difficult because there is no standard hardware configuration against which to measure it. ASN.1 applications complicate benchmarking further by introducing many additional variables:

  • encoding rules — BER has two length forms (definite and indefinite) and two canonical forms (CER and DER). PER supports aligned and unaligned variants and offers standard and canonical forms (ASN1C uses canonical PER in all applications). XER has canonical and non-canonical forms.
  • specification complexity — specifications with a high degree of complexity result in larger code size and generally slower processing.
  • message complexity — large messages may impose memory constraints that can dominate processing time.
  • code generation options — ASN1C supports many options that can either improve or degrade performance. Strict constraint checking, for example, is more computationally expensive than lax constraint checking.
  • runtime compilation options — ASN1C comes with libraries suitable for debugging that are larger and slower than optimized runtime libraries suitable for deployment. Some of the features, like bit trace handling, can be very computationally expensive.
  • programming language choice — while modern programming languages often perform comparably (especially in the fuzzier definitions of user experience), we have found that lower-level languages still enjoy a perceptible advantage.
  • user implementation — in many cases, performance can be improved by unwrapping outer types and processing the inner data in batches (see, for example, the sample_ber/tap3batch program we ship with each language, and the sketch following this list).
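
As a rough illustration of that last point, the sketch below walks an outer value and decodes one inner record at a time, so memory use stays bounded by the size of a single record rather than the whole batch. The decode_one_record() and handle_record() functions are hypothetical stand-ins for generated decode functions and application logic; they are not actual ASN1C APIs.

    #include <stddef.h>

    /* A rough sketch of batch processing.  Instead of decoding the entire
     * outer structure (and holding every inner record in memory at once),
     * decode and handle one inner record at a time.  decode_one_record()
     * and handle_record() are hypothetical stand-ins. */
    typedef struct { int recordType; /* ... */ } Record;

    size_t decode_one_record(const unsigned char *buf, size_t len, Record *rec);
    void   handle_record(const Record *rec);

    void process_batch(const unsigned char *outer, size_t outerLen)
    {
        size_t pos = 0;

        while (pos < outerLen) {
            Record rec;
            size_t consumed =
                decode_one_record(outer + pos, outerLen - pos, &rec);
            if (consumed == 0) break;      /* decode error or end of data */

            handle_record(&rec);           /* process, then discard       */
            pos += consumed;               /* memory stays per-record     */
        }
    }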

Our users are normally interested in “records-per-second” metrics: how many Ericsson R12 CDRs we can process in a second, for example, or whether we can handle the data coming from a switch in real time. These sorts of metrics can be deceiving, though: decoding 3,000 records per second does not mean much if the messages are two bytes long. Looking at overall throughput (in kilobytes per second, for example) is often a better way to evaluate needs and performance.
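
A quick back-of-the-envelope conversion, using the numbers from the paragraph above, shows why: record counts only become meaningful once the record size is factored in.

    #include <stdio.h>

    /* Toy calculation using the figures above: 3,000 two-byte records per
     * second sounds impressive, but it amounts to less than 6 KB/s of
     * actual throughput. */
    int main(void)
    {
        double recordsPerSec  = 3000.0;   /* measured decode rate (example) */
        double avgRecordBytes = 2.0;      /* very small messages            */

        double kbPerSec = recordsPerSec * avgRecordBytes / 1024.0;
        printf("%.0f records/s at %.0f bytes each = %.1f KB/s\n",
               recordsPerSec, avgRecordBytes, kbPerSec);
        return 0;
    }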

In conclusion, then, we would make the following recommendations:

  1. Identify the performance need, preferably using a metric that is consistent across all invocations of the application.
  2. Identify likely bottlenecks in performance: message size and memory use are the most common in our experience. If needed, adjust your interface code to reduce memory use.
  3. Deploy your applications with optimized runtime libraries rather than the non-optimized (debug) libraries.

While performance will certainly vary from application to application, our runtime libraries have been used in real-time applications as well as large-scale data clearing houses. If you have questions about how well ASN1C might perform for you, feel free to drop us a line; we’ll be glad to chat. Click here for a longer discussion of benchmarking, including data points collected against our Java runtime libraries.


Linux Build System Compatibility

We don’t often write about our build and test infrastructure, but one of our more observant users recently noticed a change in our Linux package descriptions, and the subject is of broader interest to anyone concerned with our support policies and practices.

We built ASN1C version 6.7.3 for Linux on an Ubuntu 10.04.4 LTS system. For version 6.7.4 and higher we have transitioned to CentOS 6.5, motivated by two maintenance concerns: first, the hardware for our 32-bit Linux system was slow and aging; second, newer versions of Ubuntu include patches to the C runtime library that break ABI compatibility with existing installations of ASN1C.

Maintaining support for older systems is critical for us, since we routinely build new versions for old systems and old versions for new systems. As our hardware and software needs have evolved, finding a more stable build infrastructure has become increasingly important.

Linux containers provide lightweight virtualization of target operating systems and compilers. They have allowed us to provide stable packages to our existing user base while simultaneously testing packages against newer Linux distributions, without the overhead of full-fledged virtualization solutions. We test ARM systems and older kernels (2.4-era) through containers, too. On balance, we’ve found that this decreases the time it takes to build packages and increases flexibility on our host systems.

So if you’ve noticed a few changes in the package contents and descriptions, it is because some of our infrastructure has changed. We don’t expect you’ll find any incompatibilities, but if you have any problems, don’t hesitate to contact us; we’ll be glad to work with you to resolve any issues.


Embedded System Support

From time to time we receive emails asking about our embedded system support: do we support this operating system or that architecture or an unusual compiler?

The usual answer to these questions is a simple yes: ASN1C and XBinder support most embedded configurations. We generally need to know a bit about the target system configuration to give a firm answer:

  • the host and target operating systems
  • the target architecture and ABI
  • the target compiler and, if applicable, the C runtime library
  • any special build options

Our local resources are usually sufficient to make a custom build, though it’s often useful to be pointed to a freely available compiler to make sure that no unexpected complications arise.

In this way we have built embedded libraries for all of our target languages (C/C++, C#, and Java) across multiple architectures and operating systems: Android, iOS, different versions of embedded Linux, NIOS2, QNX, Symbian, Windows CE, and VxWorks, on ARM, MIPS, PowerPC, SH3/4, XScale, and other processors. Some of our customers have adapted our libraries for use with custom operating systems and compilers, too.

From the beginning we have written our runtime libraries with a “least common denominator” philosophy that minimizes the use of features that would compromise their portability across different systems, and that explains in no small part why they have found widespread adoption among diverse platforms.

We write our C runtime libraries in ANSI-compliant C, and our generated code is also ANSI compliant. As a consequence, the same code that runs on an iPhone or iPad may be used without modification on Android and Windows—and pretty much everything else in between. And while most of our sales for embedded systems target C and C++, we also produce CLDC-compliant versions of our Java runtime library alongside the same one that runs under Android using the Dalvik VM. Our C# libraries have been used in a variety of compact configurations, too.

If you’re exploring the use of ASN.1 on your embedded systems, we’d advise you to take some time to precisely describe your development and deployment platforms. We have found two significant barriers to delivering embedded kits:

  1. The same vocabulary is often used for different instruction set architectures. ARM is a great example, since it is a family of architectures: some implementations support only software floating point, for example, while others support hardware and even vector floating-point operations.
  2. Compilers vary in their adherence to standard APIs; commonly their libraries exclude POSIX or Windows sockets, for example, or standard calls for obtaining the time and date.

In each case, our build options change to provide a compatible kit.
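
As a small illustration of the second point, the sketch below shows the kind of compile-time guard that keeps runtime and generated code buildable against a C library that lacks, say, the standard time functions. The NO_TIME_SUPPORT macro name is hypothetical and used here only for illustration; the actual build options in our kits may differ.

    /* A minimal sketch of the "least common denominator" approach.  The
     * NO_TIME_SUPPORT macro is hypothetical (the actual build options in
     * our kits may differ), but the pattern is the same: calls that a
     * target's C library may not provide are isolated behind a
     * compile-time switch instead of being assumed to exist. */
    #ifndef NO_TIME_SUPPORT
    #include <time.h>
    #endif

    long currentTimeSeconds(void)
    {
    #ifndef NO_TIME_SUPPORT
        return (long)time(NULL);   /* hosted targets with a full C library */
    #else
        return 0L;                 /* freestanding target: timestamping is
                                      disabled or supplied some other way  */
    #endif
    }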

If you’re curious about your own configuration, let us know: we’ll be glad to talk to you about it.
