Optimize slacker requests #132

kmarekspartz opened this issue Aug 30, 2017 · 7 comments
@kmarekspartz
Contributor

When collecting activity for the last N days, we do it twice per channel: once for the warner and once for the archiver. Perhaps more importantly, we collect everything and only then filter it to see whether it matches certain conditions. If we invert control here and pass stopping conditions into the "activity in the last N days" method, we should get a substantial performance increase, to the point where it might be okay to do it twice per channel. Busy channels should stop sooner rather than later; right now the busiest channels are the ones we spend the most work on.
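
As a rough sketch of what that inversion could look like (the `get_messages_page` helper and the `stop_when` predicate are hypothetical names, not the current slacker.py API), the fetch loop would take the stopping condition and bail out as soon as it is satisfied:

```python
import time

def channel_activity(slack, channel_id, days, stop_when):
    """Page through channel history, newest first, and stop as soon as
    stop_when(message) returns True; return False if the whole window
    is exhausted without a match."""
    oldest = time.time() - days * 86400
    latest = None  # None means "start from now"
    while True:
        # Hypothetical wrapper around channels.history, returning newest-first.
        page = slack.get_messages_page(channel_id, oldest=oldest, latest=latest)
        if not page:
            return False
        for message in page:
            if stop_when(message):
                return True  # busy channels hit this on the first page
            latest = message["ts"]
        if len(page) < 100:  # a short page means no more history in the window
            return False
```

The warner and the archiver would each pass their own predicate, so doing the walk twice per channel costs little when the first real message already answers the question.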

@kmarekspartz
Contributor Author

Thinking this could be done with generators, yielding messages out.
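
Something like this, perhaps (again assuming a hypothetical `get_messages_page` wrapper around channels.history); the caller simply stops iterating once it has seen enough:

```python
def iter_messages(slack, channel_id, oldest):
    """Yield messages newest-first until the history runs out or we
    pass the `oldest` timestamp."""
    latest = None
    while True:
        page = slack.get_messages_page(channel_id, oldest=oldest, latest=latest)
        if not page:
            return
        for message in page:
            yield message
            latest = message["ts"]
        if len(page) < 100:  # short page: nothing older left in the window
            return

# any() short-circuits, so a busy channel stops after the first human message:
# any(m.get("subtype") != "bot_message" for m in iter_messages(slack, "C123456", cutoff))
```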

@kmarekspartz
Contributor Author

The warner may be able to first check whether it has already warned, using "Retrieving a single message" from https://api.slack.com/methods/channels.history, and then do the regular stale check.
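
That check is essentially channels.history with count=1, i.e. only the most recent message. A minimal sketch, assuming the warning can be recognised by its text and that nothing has been posted since it (usually true for a stale channel):

```python
import requests

def already_warned(token, channel_id, warning_text):
    """Return True if the most recent message in the channel is our warning."""
    resp = requests.get(
        "https://slack.com/api/channels.history",
        params={"token": token, "channel": channel_id, "count": 1},
    )
    messages = resp.json().get("messages", [])
    return bool(messages) and warning_text in messages[0].get("text", "")
```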

@katherinelim

It doesn't rate-limit itself enough. In my testing it usually triggers the rate-limit retry described in the Slack API docs: https://api.slack.com/docs/rate-limits

A 2-second sleep seems to work around the rate-limit retries for me:

--- destalinator/slacker.py	2017-10-27 11:27:20.000000000 +1100
+++ slacker.py	2017-10-27 12:19:52.000000000 +1100
@@ -73,6 +73,7 @@
                 time.sleep(retry_after)
                 continue
             payload = response.json()
+            time.sleep(2)
 
         return payload
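
For what it's worth, the sleep in that patch runs after every successful response (the 429 branch above it already sleeps for Retry-After and retries), so successive API calls end up spaced roughly two seconds apart regardless of whether the limit was hit.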

@kmarekspartz
Contributor Author

Interesting. The docs specify using the Retry-After header, but I've noticed that it doesn't seem to be enough. For a while I tried doubling the Retry-After value, but it didn't seem to help or hurt.

Separately, there was a Slack outage this week, https://status.slack.com/2017-10/8b0d4d44ea53726f. Might you have been testing during that?

@katherinelim

katherinelim commented Nov 2, 2017

No, I was testing this last week; the timestamps in my patch snippet confirm it.

What part of the code ensures that it communicates at one transaction per second? (I would prefer to adjust the delay there)

@kmarekspartz
Contributor Author

I see, thanks.

None; it currently just waits for the 'Retry-After' header.
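
If we want to space requests proactively (say at roughly one per second) instead of only reacting to 429s, a minimal throttle could look like the sketch below; `MIN_INTERVAL` and where it gets called from in slacker.py are assumptions, not existing code:

```python
import time

MIN_INTERVAL = 1.0   # seconds between calls; tune to Slack's documented limits
_last_call = 0.0

def throttle():
    """Sleep just long enough to keep calls at least MIN_INTERVAL apart."""
    global _last_call
    wait = MIN_INTERVAL - (time.time() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.time()
```

Calling it right before each request in the retry loop would give you the adjustable delay you're after, while the existing Retry-After handling still covers the cases where Slack pushes back anyway.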

@kmarekspartz
Contributor Author

Getting a PR together.
