Symptom
After clicking "Start Data Persistency", the peak memory reported in the Data Integration Monitor log (accessed via the Views Monitor) escalates rapidly. The value "XXX GiB of peak memory used in the view persistency runtime" shown in the log is significantly larger than the peak memory without the DECLARE statement. For instance, without the statement the peak memory is less than 1 GB, whereas with it the peak memory surges to over 30 GB.
Environment
SAP Datasphere
Reproducing the Issue
- Add a DECLARE statement to the SQL view, for example "DECLARE z_test NVARCHAR(6) = 'mytest';", and deploy the view.
- Click on "Start Data Persistency".
- The Views Monitor displays "XXX GiB of peak memory used in the view persistency runtime", a value significantly larger than the peak memory usage without the DECLARE statement.
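As a sketch of the pattern described above (the function, column, and table names here are hypothetical, not taken from the article), a DECLARE statement inside the view's SQLScript body looks like this:

```sql
-- Hypothetical SQLScript table function illustrating the reproduction steps.
-- The DECLARE statement prevents the function from being unfolded, so the
-- remote result is fully materialized before being returned.
CREATE FUNCTION "MY_VIEW_FUNC" ()
RETURNS TABLE ("ID" INTEGER, "NAME" NVARCHAR(100))
AS
BEGIN
    DECLARE z_test NVARCHAR(6) = 'mytest';
    RETURN SELECT "ID", "NAME"
           FROM "REMOTE_SOURCE_TABLE";
END;
```

Deploying a view of this shape and then clicking "Start Data Persistency" should show the elevated peak memory in the Views Monitor.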
Cause
When DECLARE is used, the UDF (User Defined Function) is not unfolded, so the result is fully materialized and transferred from the remote source via ITAB. Without DECLARE, the function is unfolded and the result is transferred via an ODBC result set. Both behaviors are by design, and the difference leads to different peak memory usage during query execution.
Resolution
Modify the SQL script to remove the use of DECLARE, or limit the result set fetched from the remote source by adding further conditions, such as a WHERE clause.
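For illustration (the variable, column, and table names below are hypothetical), the two options can be sketched as:

```sql
-- Option 1: remove the DECLARE statement so the function can be unfolded
-- and the result transferred via the ODBC result set.
RETURN SELECT "ID", "NAME"
       FROM "REMOTE_SOURCE_TABLE";

-- Option 2: if the DECLARE statement is required, restrict the rows
-- fetched from the remote source with an additional WHERE condition.
DECLARE z_test NVARCHAR(6) = 'mytest';
RETURN SELECT "ID", "NAME"
       FROM "REMOTE_SOURCE_TABLE"
       WHERE "NAME" = :z_test;
```

Either change reduces the amount of data materialized on the remote side, which is what drives the peak memory usage described in the Cause section.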
Keywords
KBA , DS-MD-VIW , Views , DS-MD , Modeling (Data Builder) , HAN-CLS-HC , HANA Cloud Services HANA Cloud , Problem
Product
SAP Datasphere
Attachments
- Picture1.png
- 2025-05-21_13-40-48.jpg
- Pasted image.png
SAP Knowledge Base Article - Public