feat(redis): Allow configuring Redis pools individually #3843

Merged · 22 commits · Jul 24, 2024
Changes from all commits
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -6,6 +6,8 @@
 
 - "Cardinality limit" outcomes now report which limit was exceeded. ([#3825](https://github.com/getsentry/relay/pull/3825))
 - Derive span browser name from user agent. ([#3834](https://github.com/getsentry/relay/pull/3834))
+- Redis pools for `project_configs`, `cardinality`, `quotas`, and `misc` usecases
+  can now be configured individually. ([#3843](https://github.com/getsentry/relay/pull/3843))
 
 **Internal**:
 
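The changelog entry above refers to the new `RedisConfigs` type that the config.rs diff below switches to. The diff only shows the `Unified` variant being constructed, so the following is a rough sketch of how a unified-or-individual configuration enum for the four named pools could be shaped; the `Individual` field layout and the serde attributes here are assumptions, not taken from this PR.

```rust
use serde::{Deserialize, Serialize};

use crate::RedisConfig; // the crate's existing single-pool configuration type

/// Sketch only: the `Unified` variant and the four pool names appear in this
/// PR; the `Individual` layout and `#[serde(untagged)]` are assumptions.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(untagged)]
pub enum RedisConfigs {
    /// One shared pool for all use cases (the previous behavior).
    Unified(RedisConfig),
    /// A dedicated pool per use case.
    Individual {
        project_configs: Box<RedisConfig>,
        cardinality: Box<RedisConfig>,
        quotas: Box<RedisConfig>,
        misc: Box<RedisConfig>,
    },
}
```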
34 changes: 11 additions & 23 deletions relay-config/src/config.rs
@@ -16,15 +16,14 @@ use relay_kafka::{
 };
 use relay_metrics::aggregator::{AggregatorConfig, FlushBatching};
 use relay_metrics::MetricNamespace;
-use relay_redis::RedisConfigOptions;
 use serde::de::{DeserializeOwned, Unexpected, Visitor};
 use serde::{Deserialize, Deserializer, Serialize, Serializer};
 use uuid::Uuid;
 
 use crate::aggregator::{AggregatorServiceConfig, ScopedAggregatorConfig};
 use crate::byte_size::ByteSize;
 use crate::upstream::UpstreamDescriptor;
-use crate::{RedisConfig, RedisConnection};
+use crate::{create_redis_pools, RedisConfig, RedisConfigs, RedisPoolConfigs};
 
 const DEFAULT_NETWORK_OUTAGE_GRACE_PERIOD: u64 = 10;

@@ -1024,7 +1023,7 @@ pub struct Processing {
     pub kafka_validate_topics: bool,
     /// Redis hosts to connect to for storing state for rate limits.
     #[serde(default)]
-    pub redis: Option<RedisConfig>,
+    pub redis: Option<RedisConfigs>,
     /// Maximum chunk size of attachments for Kafka.
    #[serde(default = "default_chunk_size")]
     pub attachment_chunk_size: ByteSize,
@@ -1585,7 +1584,7 @@ impl Config {
         }
 
         if let Some(redis) = overrides.redis_url {
-            processing.redis = Some(RedisConfig::single(redis))
+            processing.redis = Some(RedisConfigs::Unified(RedisConfig::single(redis)))
         }
 
         if let Some(kafka_url) = overrides.kafka_url {
@@ -2279,26 +2278,15 @@ impl Config {
         &self.values.processing.topics.unused
     }
 
-    /// Redis servers to connect to, for rate limiting.
-    pub fn redis(&self) -> Option<(&RedisConnection, RedisConfigOptions)> {
-        let cpu_concurrency = self.cpu_concurrency();
-
-        let redis = self.values.processing.redis.as_ref()?;
-
-        let options = RedisConfigOptions {
-            max_connections: redis
-                .options
-                .max_connections
-                .unwrap_or(cpu_concurrency as u32 * 2)
-                .min(crate::redis::DEFAULT_MIN_MAX_CONNECTIONS),
-            connection_timeout: redis.options.connection_timeout,
-            max_lifetime: redis.options.max_lifetime,
-            idle_timeout: redis.options.idle_timeout,
-            read_timeout: redis.options.read_timeout,
-            write_timeout: redis.options.write_timeout,
-        };
-
-        Some((&redis.connection, options))
+    /// Redis servers to connect to for project configs, cardinality limits,
+    /// rate limiting, and metrics metadata.
+    pub fn redis(&self) -> Option<RedisPoolConfigs> {
+        let redis_configs = self.values.processing.redis.as_ref()?;
+
+        Some(create_redis_pools(
+            redis_configs,
+            self.cpu_concurrency() as u32,
+        ))
     }
 
     /// Chunk size of attachments in bytes.
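The new `Config::redis()` above delegates per-pool option construction to `create_redis_pools`, while the removed inline code documents exactly how `max_connections` used to be derived from CPU concurrency. Below is a minimal sketch of that defaulting rule, assuming `create_redis_pools` preserves it for each pool; the constant's value is a placeholder, not the crate's real `DEFAULT_MIN_MAX_CONNECTIONS`.

```rust
/// Placeholder for `relay_config::redis::DEFAULT_MIN_MAX_CONNECTIONS`;
/// the real value is defined in the crate, not in this diff.
const DEFAULT_MIN_MAX_CONNECTIONS: u32 = 24;

/// Mirrors the defaulting removed from `Config::redis()`: use the configured
/// value if present, otherwise twice the CPU concurrency, then apply the cap.
fn default_max_connections(configured: Option<u32>, cpu_concurrency: u32) -> u32 {
    configured
        .unwrap_or(cpu_concurrency * 2)
        .min(DEFAULT_MIN_MAX_CONNECTIONS)
}

fn main() {
    // With no explicit setting and 8 CPUs: min(8 * 2, 24) = 16 connections.
    assert_eq!(default_max_connections(None, 8), 16);
    // Following the removed code, an explicit value passes through the same cap.
    assert_eq!(default_max_connections(Some(64), 8), 24);
}
```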