SAP Knowledge Base Article

2075551 - How To Collect Core Files on Linux for SQL Anywhere Processes during Hangs and Crashes

Symptom

  • “My SQL Anywhere database server has hung, or is frozen, or is otherwise not responding - what’s going on and what can I do about it?”
  • “Why am I seeing a 100% CPU spin in the SQL Anywhere database server that never stops?”
  • “Why did the SQL Anywhere database server crash all of a sudden?”
  • "Why did my database application programming interface call fail in my client application?"

All of these questions have a similar answer: we need to determine, at the programming level, exactly which code was being executed in the process to get into the condition we now see in the application. This technique is generally known as debugging a process and, on Linux, requires collecting a core file of the process along with the associated system libraries. This document outlines the various techniques for debugging the SQL Anywhere software and libraries specifically on the Linux operating system, across different processor architectures (x86 / x64).

What is a “core” file?

A “core” file is a snapshot of what a program was executing at the moment the core file was generated. A core file is used by a software engineer to determine a potential problem in the software code by inspecting the stack trace, after referencing the appropriate shared libraries for both the execution environment and the process.

What is a “shared library”?

A “shared library” (.so) file is a file on the file system that can be included in different applications to re-use common programming functionality. There are two library “types” that a process can use that are important for debugging: the application-specific shared libraries (found in the /lib, /lib32 or /lib64 sub-directories inside the SQL Anywhere installation directory) and the system shared libraries (typically located in various places on the file system, usually /lib, /lib64, /usr/lib and /usr/lib64).
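
For illustration, the shared libraries that a SQL Anywhere executable depends on (and where they resolve from on the system) can be listed with the standard ldd utility. The installation path below simply mirrors the sample output later in this article; substitute your own installation directory:

===========================
# List the shared libraries used by the database server binary
# (the /opt/16 path is an example installation directory)
ldd /opt/16/bin64/dbsrv16
===========================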

What is the difference between an “Application Crash” and “Program Termination”?

Debugging is usually performed on a process for one of two reasons. The first typical reason is that a process is behaving unexpectedly and we need to know more about what is happening internally in the code at runtime, because application logging cannot locate the source of the issue. The other typical reason is that the process has crashed (i.e. it was stopped suddenly by the operating system because it tried to perform an illegal operation, and its process ID is no longer listed in system monitoring tools such as “ps”) and we would like to understand how the process got into that state to see if it can be prevented in the future.

Crashes can occur for a wide variety of reasons: software bugs, operating system bugs, permissions issues, low memory or disk space, hardware issues, and so on. It is important to capture as much information as possible in these scenarios to better understand the nature of the crash and the circumstances under which it arises. It is also important to understand the difference between a crash and a program termination: a program termination is a “normal” exit routine performed by the software and is always logged as such in the SQL Anywhere server console log (-o output). For the SQL Anywhere database server, in a program termination scenario the exit line will be displayed as:

01/01 12:00:00. Database server shutdown requested via server console

For the MobiLink Synchronization server you will see:

I. 2014-01-01 12:00:00. <Main> MobiLink server shutting down
I. 2014-01-01 12:00:00. <Main> MobiLink server undergoing soft shutdown

If you see these lines in the console log, this is an expected exit condition and no core file will be automatically created.
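
To confirm whether an exit was a normal termination, the server console log (the file specified with the -o option) can be searched for these messages. The log file name below is only a placeholder; use the path given to -o on your system:

===========================
# Look for a normal shutdown message in the server console log
# (replace the file name with the log specified via -o)
grep -i "shutdown" /path/to/server_console.log
===========================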

When a software program crashes, however, this is an unexpected condition. When this happens, the application will automatically attempt to create a core file so that it can be analyzed by software engineers at a later time. Typically, when an application crashes, the crash is also logged to the system event log (by the “syslogd” process). Check your operating system documentation for further information regarding operating-system logging options, and check the “/etc/syslog.conf” file for the current syslogd configuration. Logging by the database server to the system event log is controlled by the “-s” switch on the dbsrv command line.
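
As a rough illustration, crash entries can usually be found by searching the kernel ring buffer and the system log. The exact log file location varies by distribution (for example /var/log/messages or /var/log/syslog), and whether segmentation faults appear in the kernel log depends on the kernel's logging settings, so treat the commands below as examples:

===========================
# Kernel-logged crash messages (e.g. "segfault at ...") mentioning the server
dmesg | grep -i -e segfault -e dbsrv

# System log entries mentioning the database server; the log file
# location differs between distributions
grep -i dbsrv /var/log/messages
===========================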

Note that application crashes are different from database assertions that occur within the database server. Assertions are checked conditions in the database server code that are designed to prevent crashes in the database server. Any time the database server encounters a “bad state” (invalid data, a bad operating condition, etc.), the server will “assert” and display the assertion message on the database console for review. As part of this assertion process, a core file may also be generated. This core file can sometimes be useful for analysis; it should be collected at the same time and provided to SAP engineering when investigating server assertions.

Understanding Core Files on Linux

There are two crucial pieces of information contained inside a core file. The first is the stack trace, which needs to be extracted by a debugger (typically ‘gdb’, also known as the ‘GNU Debugger’). The other important piece of information to extract is the module memory map listing. When processes are loaded on Linux, they are loaded at a base address, and all libraries are then loaded at offsets from that base address. In order to open the core file, a software engineer needs to know which memory addresses the libraries were originally loaded at in order to match up the stack trace. The stack trace shows the current calls in the code (with offsets relative to the base address) and the memory map listing shows which addresses the shared libraries occupy, so that those offsets can be calculated.
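
As a rough sketch of how this information is typically extracted, the executable and its core file can be opened together in gdb; the binary path and core file name below are examples only and will differ on your system:

===========================
# Open the core file against the matching executable
gdb /opt/16/bin64/dbsrv16 core.12345

# Inside gdb: print the stack trace of the faulting thread,
# the stack traces of all threads, and the module memory map listing
(gdb) bt
(gdb) thread apply all bt
(gdb) info sharedlibrary
(gdb) quit
===========================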

Below is a sample stack trace from the database server:

===========================
#0  do_sigwait () at ../sysdeps/unix/sysv/linux/sigwait.c:65
#1  __sigwait (set=0x7f37911e9220, sig=0x7fff9921e15c) at ../sysdeps/unix/sysv/linux/sigwait.c:100
#2  0x00007f3790c1f0c5 in ?? () from /opt/16/lib64/libdbserv16_r.so
#3  0x00007f37908c3c66 in ?? () from /opt/16/lib64/libdbserv16_r.so
#4  0x00007f37908c532f in ?? () from /opt/16/lib64/libdbserv16_r.so
#5  0x00007f3790c20310 in real_main () from /opt/16/lib64/libdbserv16_r.so
#6  0x00000000004013e1 in pthread_mutex_unlock () at pthread_mutex_unlock.c:268
#7  0x000000395581e576 in __libc_start_main (main=0x4013b0 <pthread_mutex_unlock+216>, argc=4, ubp_av=0x7fff9921e568, init=0x40b340, fini=<value optimized out>, rtld_fini=<value optimized out>, stack_end=0x7fff9921e558)
    at libc-start.c:220
#8  0x000000000040131a in pthread_mutex_unlock () at pthread_mutex_unlock.c:268
#9  0x00007fff9921e558 in ?? ()
#10 0x000000000000001c in ?? ()
#11 0x0000000000000004 in ?? ()
#12 0x00007fff9921f5a8 in ?? ()
#13 0x00007fff9921f5b0 in ?? ()
#14 0x00007fff9921f5b3 in ?? ()
#15 0x00007fff9921f5b8 in ?? ()
#16 0x0000000000000000 in ?? ()
===========================

Notice that parts of the stack trace have been obscured and replaced with question marks (??) inside the debugger. This is because the SQL Anywhere shared libraries that are provided to customers are known as stripped libraries and do not include debugging information. Separate debugging libraries are maintained internally by the engineers at SAP.

As we can see from the output, the debugger will know which memory address is being executed, but it has no information to tell us what part of the underlying source code is being run.

To assist this debugging process further, we need to capture the module memory map listing on the source system where the core file was generated, so that SAP engineers can match this information against the stack trace when analyzing the core file later. Below is a sample module memory map from a Linux x64 system for the database server:

===========================
0x00007f37906cd4a0  0x00007f3790d87e68  Yes /opt/16/lib64/libdbserv16_r.so
0x00000000001dbbb0  0x00000000001e8c38  Yes /opt/16/lib64/libdbtasks16_r.so
0x0000003956405260  0x0000003956410a78  Yes /lib64/libpthread.so.0
0x0000003956000de0  0x0000003956001a08  Yes /lib64/libdl.so.2
0x0000003955c03e70  0x0000003955c46408  Yes /lib64/libm.so.6
0x000000395581e200  0x00000039559230c8  Yes /lib64/libc.so.6
0x0000003954400af0  0x0000003954419164  Yes /lib64/ld-linux-x86-64.so.2
0x000000000064eb10  0x000000000070d208  Yes /opt/16/bin64/../lib64/libdbicu16_r.so
0x00007f378ff3e5a0  0x00007f378ff3e718  Yes /opt/16/bin64/../lib64/libdbicudt16.so
0x00000039574022d0  0x0000003957406108  Yes /lib64/librt.so.1
0x0000000000bf1040  0x0000000000bf8bd8  Yes /lib64/libnss_files.so.2
0x00000000009abc30  0x00000000009ac2b8  Yes /opt/16/lib64/libdblaiod16.so
0x00007f3787e3d570  0x00007f3787e3d741  Yes /lib64/libaio.so.1
0x000000346aa0f1a0  0x000000346aa39ed8  Yes /usr/lib64/libldap_r.so
0x00000032f6403360  0x00000032f640b638  Yes /usr/lib64/liblber-2.4.so.2
0x000000395bc038c0  0x000000395bc10dc8  Yes /lib64/libresolv.so.2
0x000000396ae046f0  0x000000396ae14ad8  Yes /usr/lib64/libsasl2.so.2
0x0000003467613160  0x000000346763eac8  Yes /lib64/libssl.so.7
0x0000003466e5a160  0x0000003466f01df8  Yes /lib64/libcrypto.so.7
0x0000003966600a50  0x0000003966606e08  Yes /lib64/libcrypt.so.1
0x0000003466209260  0x0000003466228838  Yes /usr/lib64/libgssapi_krb5.so.2
0x0000003466a1ac60  0x0000003466a8abc8  Yes /usr/lib64/libkrb5.so.3
0x0000003464801220  0x0000003464801d58  Yes /lib64/libcom_err.so.2
0x0000003466605740  0x0000003466619678  Yes /usr/lib64/libk5crypto.so.3
0x0000003956801ef0  0x000000395680d748  Yes /lib64/libz.so.1
0x0000003465402400  0x0000003465407518  Yes /usr/lib64/libkrb5support.so.0
0x0000003962e00aa0  0x0000003962e01038  Yes /lib64/libkeyutils.so.1
0x0000003464005220  0x0000003464014818  Yes /lib64/libselinux.so.1
===========================

Tip: While a process is running, a file containing this module map information is automatically maintained on the file system underneath the “/proc” directory. If the process is still running when you go to debug, it is best to capture this information directly from the file system: first find the process’ PID, then copy the /proc/<pid>/maps file to a secondary file for submission to technical support at a later time, as shown below.
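
A minimal sketch of that procedure, assuming the database server binary is named dbsrv16 (adjust the process name, PID, and output file name for your system):

===========================
# Find the PID of the running database server
pgrep dbsrv16

# Copy the module map while the process is still running
# (replace 12345 with the PID reported above)
cp /proc/12345/maps dbsrv16_maps.txt
===========================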



Environment

  • SAP SQL Anywhere (all versions)
  • Linux Operating System

Product

SAP SQL Anywhere all versions; SAP SQL Anywhere, cloud edition all versions; SQL Anywhere all versions

Keywords

sybase, hang, hanging, hangs, freeze, freezing, KBA, BC-SYB-SQA, SQL Anywhere (on premise, on demand), How To
