
Spike - testing CSV exports in production #3201

Open · 3 tasks
jtimpe opened this issue Sep 25, 2024 · 1 comment

jtimpe commented Sep 25, 2024

Description:
#3162 addressed memory issues related to the CSV export, noted in #3137 and #3138. While testing the solution, we noticed that an out-of-memory error is still possible, but substantially more records can now be exported reliably before the export breaks. The limitations are noted in @ADPennington's comment on #3162.

Since we are no longer caching the queryset, and file I/O is done as efficiently as possible using Python's io and gzip modules, the working theory is that the Celery worker itself is leaking memory. Indeed, the following warning appears everywhere we run with DJANGO_DEBUG=True (which is the case for all deployed dev environments):

/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
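
For context, the export path after #3162 iterates the queryset and writes through Python's io and gzip rather than caching results; the sketch below illustrates that general pattern. The helper name, field handling, and chunk size are illustrative assumptions, not the repository's actual code.

# Sketch of the streaming-export pattern described above (assumptions only,
# not this repo's implementation): iterate the queryset and write rows
# through csv into a gzip stream instead of materializing a cached list.
import csv
import gzip
import io

def export_queryset_gzipped(queryset, fieldnames):
    buffer = io.BytesIO()
    with gzip.GzipFile(fileobj=buffer, mode="wb") as gz:
        text = io.TextIOWrapper(gz, encoding="utf-8", newline="")
        writer = csv.DictWriter(text, fieldnames=fieldnames)
        writer.writeheader()
        # .iterator() avoids Django's queryset result cache
        for row in queryset.values(*fieldnames).iterator(chunk_size=2000):
            writer.writerow(row)
        text.flush()
    return buffer.getvalue()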

In dev environments, we can set DJANGO_DEBUG to False to test this assumption. In production, DJANGO_DEBUG is already False, so we'd like to observe the behavior of the CSV export in production with the following questions in mind:

Open Questions:
Please include any questions, possible solutions or decisions that should be explored during work

  • Are reliable CSV exports of 600k+ records possible in production (or with DJANGO_DEBUG=False)? 900k-1m+?
  • How do production's system resources behave during large CSV exports?
  • Do simultaneous operations (large queries, memory-heavy processes) on the backend create memory pressure for Celery tasks?

Deliverable(s):
Create a list of recommendations or proofs of concept to be achieved to complete this issue

  • Set DJANGO_DEBUG to False in a non-prod environment (see the settings sketch after this list). Run exports of 600k, 900k, and 1m+ rows and observe memory.
  • Repeat while performing memory-heavy operations on the backend during the exports, and observe.
  • Observe memory in the production environment during exports.
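
The first deliverable assumes DEBUG is driven by a DJANGO_DEBUG environment variable, as the Celery warning above suggests; a minimal settings sketch of that toggle follows (the exact variable name and accepted values in this repo may differ).

# settings.py sketch (assumed configuration, not verified against this repo):
# read DEBUG from the environment so non-prod environments can flip it to
# False without a code change.
import os

DEBUG = os.getenv("DJANGO_DEBUG", "no").lower() in ("yes", "true", "1")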

Supporting Documentation:
Please include any relevant log snippets/files/screenshots

In QASP, this was tested by running the following script while a CSV export ran simultaneously:

#!/bin/sh
# Poll the app's memory usage once per second and append it to memory.txt
while true
do
    echo "Watching memory..."
    cf app tdp-backend-qasp >> memory.txt
    sleep 1
done
  • Results: memory.txt
    • QASP where DJANGO_DEBUG=True, export of ~950k records
    • Notice that memory started around 760MB and rose quickly to 1.7GB once the export started. It stayed at 1.7GB until the export completed, then returned to ~800MB.
    • This export succeeded, but it doesn't always. See this comment for an example of the error.
jtimpe added the spike label Sep 25, 2024

jtimpe commented Sep 25, 2024

This research may be best done after the introduction of #3046

vlasse86 added the P3 Needed – Routine label Oct 1, 2024