Symptom
- The used disk space in the DATA area of the hana/data volume has suddenly increased or reached 100% of capacity. The same may be true of the hana/log volume, where the savepoint cannot be written and most log segments are in the "BACKEDUP" state.
- The filesystem's capacity has already been increased or extended as described in SAP Note 1679938.
- A subsequent restart of the HANA instance fails with another DISKFULL event during redo log replay, because the data volume rapidly fills up again within a short amount of time:
2020-10-21 11:17:40.416388 i Basis TraceStream.cpp(00719) : ==== Starting hdbindexserver, version 2.00.045.00
............
2020-10-21 11:18:03.159545 i Logger PersistenceManagerImpl.cpp(03140) : Starting log replay at position 0xae65c3cc2
2020-10-21 11:18:03.159550 i Logger RecoveryHandlerImpl.cpp(01824) : Triggering recovery of remaining log
2020-10-21 11:18:04.418214 i Logger PersistenceManagerSPI.cpp(01612) : Found savepoint 303399 log record
2020-10-21 11:18:04.420073 i CSARedo RedoMerge.cpp(02696) : MergeInfoPersistenceManager instance for log replay created: instance @ 0x00007fe049607000, associated persistence manager @ 0x00007fe1bf7c9018
2020-10-21 11:18:04.420099 i CSARedo RedoMerge.cpp(01269) : Merge and optimize compression configuration during log replay: active = 0, merge type = NO_MERGE, optimize compression type = NO_OPTIMIZE_COMPRESSION, cancel jobs before going online = 1, delta row limit for enforced merges = 500000000, delta row limit for enforced synchronized merges = 1610612736, timeout for index handle lock = 120000, ignored tables =
2020-10-21 11:18:04.420108 i CSARedo Configuration.cpp(00166) : redo replay settings: UseDmlExecutorDataRedo = 1, AllowCachingOfRedoDmlContext = 1, UseDmlExecutorBatchRedo = 1, UseDmlExecutorBatchSortRedo = 1, UseDmlExecutorBatchUpdateRedo = 0, AllowMergeAndOcDuringRecoveryToFinishAnytime = 1, MaxRowsToLog = 0, MaxSizeToLog = 209715200, ForceDeltaLoad = 0, DisabledTableConsistencySubChecks = 12, ExpectWillCrash = 0, WarmupperLoadDeltaFragments = 1, TreatFirstNonDdlRedoOfContainerAsNoOpenCch = 1, MaxLoadUnitConversionThreads = 8, OptimizeReplicaForOnlineDdlInLogReplay = 0, UseOperationModeLogReplayReadAccess = 0, FirstDataItemToTrace = 0, NumDataItemsToTrace = 10, ParallelWriteThreshold = 2000
2020-10-21 11:18:13.160071 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xaeaf01540 and time: 2020-10-21 04:53:32.272912+00:00 (2%)
2020-10-21 11:18:23.160372 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xaefef8600 and time: 2020-10-21 04:58:55.022572+00:00 (4%)
2020-10-21 11:18:33.160662 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xaf4eed400 and time: 2020-10-21 05:04:51.811019+00:00 (6%)
2020-10-21 11:18:43.161000 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xaf9ee5ec0 and time: 2020-10-21 05:10:20.804945+00:00 (8%)
2020-10-21 11:24:03.170882 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xb99dc10c0 and time: 2020-10-21 08:49:13.440879+00:00 (80%)
2020-10-21 11:24:13.171183 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xb9edb6b00 and time: 2020-10-21 08:57:52.642887+00:00 (82%)
2020-10-21 11:24:23.171674 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xba3dae800 and time: 2020-10-21 09:05:50.842145+00:00 (84%)
2020-10-21 11:24:33.172043 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xba8da8300 and time: 2020-10-21 09:16:03.567668+00:00 (87%)
2020-10-21 11:24:36.211432 i Service_Shutdown TrexService.cpp(00582) : Preparing for shutting service down
2020-10-21 11:24:36.211574 i Service_Shutdown TREXIndexServer.cpp(02592) : Stopping data backup
2020-10-21 11:24:36.211973 i Service_Shutdown TREXIndexServer.cpp(02597) : preparing auditing shutdown
2020-10-21 11:24:36.212030 i assign TREXIndexServer.cpp(02604) : unassign from volume 3
2020-10-21 11:24:36.212032 i Service_Shutdown TREXIndexServer.cpp(02606) : Preparing to shutdown
2020-10-21 11:24:36.212032 i Service_Shutdown ComponentManager.cpp(00183) : Stopping Table Distribution
2020-10-21 11:24:36.212032 i Service_Shutdown ComponentManager.cpp(00183) : Stopping Table Replication
2020-10-21 11:24:36.212091 i Service_Shutdown ComponentManager.cpp(00183) : Stopping SQL Plan Stability
2020-10-21 11:24:36.212119 i Service_Shutdown TREXIndexServer.cpp(02614) : stopping lock waits and transactions
2020-10-21 11:24:36.212311 i Service_Shutdown TREXIndexServer.cpp(02628) : stopping statistics server worker threads
2020-10-21 11:24:36.212387 i Service_Shutdown ComponentManager.cpp(00183) : Stopping System Replication
2020-10-21 11:24:36.212402 i Service_Shutdown TREXIndexServer.cpp(02652) : waiting for assign thread ...
2020-10-21 11:24:43.172387 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xbadd9dec0 and time: 2020-10-21 09:26:29.769205+00:00 (89%)
2020-10-21 11:25:03.172797 i Logger RecoveryHandlerImpl.cpp(00736) : Redo done up to position: 0xbb6d905c0 and time: 2020-10-21 09:43:51.375788+00:00 (93%)
2020-10-21 11:25:05.496724 i EventHandler LocalFileCallback.cpp(00491) : [DISKFULL] (1st request) [W] , buffer= 0x00007fe0a3f3c000, offset= 1150749245440, size= 0/16777216, file= "<root>/datavolume_0000.dat" ((open, mode= RW, access= rw-rw-r--, flags= ASYNC|DIRECT|MUST_EXIST|LOCK), factory= (root= "/hana/data/SID/mnt00000/hdb00000/" (access= rw-rw-r--, flags= AUTOCREATE_PATH, usage= DATA, fs= xfs, config= (async_write_submit_active=on,async_write_submit_blocks=all,async_read_submit=on,num_submit_queues=1,num_completion_queues=1,size_kernel_io_queue=512,max_parallel_io_requests=64,min_submit_batch_size=16,max_submit_batch_size=64))) {shortRetries= 0, fullRetries= 0 (0/10)}
2020-10-21 11:25:05.496795 i EventHandler EventManagerImpl.cpp(00675) : New event reported: 'DiskFullEvent[id= 2, path= /hana/data/SID/mnt00000/hdb00000/, state= NEW]'
2020-10-21 11:25:05.496795 i Logger LoggerImpl.cpp(00104) : Logger notified of new DiskFull: DiskFullEvent[id= 2, path= /hana/data/SID/mnt00000/hdb00000/, state= NEW]
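Once the database is reachable, the affected volume and the log segment states can be confirmed via the standard monitoring views M_DISKS and M_LOG_SEGMENTS. The following is a minimal sketch (exact column sets may vary slightly by revision):

```sql
-- Fill level per disk area (DATA, LOG, ...); sizes are reported in bytes
SELECT HOST, USAGE_TYPE, PATH,
       ROUND(USED_SIZE  / 1024 / 1024 / 1024, 2) AS USED_GB,
       ROUND(TOTAL_SIZE / 1024 / 1024 / 1024, 2) AS TOTAL_GB,
       ROUND(100 * USED_SIZE / TOTAL_SIZE, 1)    AS USED_PCT
  FROM M_DISKS
 ORDER BY USED_PCT DESC;

-- Distribution of log segment states; a large number of segments stuck in
-- a backed-up state matches the symptom described above
SELECT HOST, PORT, STATE, COUNT(*) AS SEGMENTS,
       ROUND(SUM(USED_SIZE) / 1024 / 1024 / 1024, 2) AS USED_GB
  FROM M_LOG_SEGMENTS
 GROUP BY HOST, PORT, STATE;
```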
- Investigation into the largest tables and disk usage shows that the table "GRACSODREPDATA" is responsible for the vast bulk of the disk consumption:
- Refer to SAP Note 1969700 - "SQL statement collection for SAP HANA" and run the following SQL statements from its attachment:
- Disk size history (SAP HANA 2.0 only):
HANA_Disks_DiskUsage_2.00.<revisions>+.txt (note: adjust the timeframe in the modification section of the script)
Sample Output of HANA_Disks_DiskUsage script:
--------------------------------------------
|SNAPSHOT_TIME |DATA_GB|
--------------------------------------------
|2020/10/23 04:47:00| 1091.07|
|2020/10/22 04:47:00| 1091.07|
|2020/10/21 04:47:00| 622.07|
|2020/10/20 04:47:00| 454.20|
--------------------------------------------
- Overview of the largest tables (including indexes and LOBs):
HANA_Tables_LargestTables_<revisions>+.txt (note: adjust the timeframe in the modification section of the script)
Sample Output of HANA_Tables_LargestTables script:
----------------------------------------------------------------------------------------------------------------------------------
|TABLE_NAME | RECORDS| DISK_GB| MEM_GB| LOB_DISK_GB| LOB_MEM_GB|
----------------------------------------------------------------------------------------------------------------------------------
|GRACSODREPDATA | 41705| 1036.85| 4.84| 949.39| 4.84|
----------------------------------------------------------------------------------------------------------------------------------
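If the SQL statement collection is not at hand, a comparable overview of on-disk table sizes can be obtained directly from the monitoring view M_TABLE_PERSISTENCE_STATISTICS. This is a minimal sketch; unlike the script, it does not break the size down into index and LOB portions:

```sql
-- Top 10 tables by total persistence size (DISK_SIZE is in bytes)
SELECT TOP 10
       SCHEMA_NAME, TABLE_NAME,
       ROUND(DISK_SIZE / 1024 / 1024 / 1024, 2) AS DISK_GB
  FROM M_TABLE_PERSISTENCE_STATISTICS
 ORDER BY DISK_SIZE DESC;
```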
Image/data in this KBA is from SAP internal systems, sample data, or demo systems. Any resemblance to real data is purely coincidental.
Environment
- SAP HANA PLATFORM EDITION 1.0
- SAP HANA PLATFORM EDITION 2.0
Product
SAP HANA 1.0, platform edition; SAP HANA, platform edition 2.0
Keywords
data volume full, hana/data full, KBA, HAN-DB-PER, SAP HANA Database Persistence, GRC-SAC-ARA, Access Risk Analysis, Problem