Merge pull request #168 from nickvsnetworking/service_overhaul

Service Overhaul

davidkneipp committed Oct 9, 2023
2 parents 540b68c + 553b68b commit c2ce1ab
Showing 40 changed files with 8,913 additions and 6,476 deletions.
48 changes: 48 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,48 @@
# Changelog

All notable changes to PyHSS are documented in this file, beginning from [Service Overhaul #168](https://github.com/nickvsnetworking/pyhss/pull/168).

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [1.0.0] - 2023-09-27

### Added

- Systemd service files for PyHSS services
- /oam/diameter_peers endpoint
- /oam/deregister/{imsi} endpoint
- /geored/peers endpoint
- /geored/webhooks endpoint
- Dependency on Redis 7 for inter-service messaging
- Significant performance improvements under load
- Basic Rx support for RAA, AAA, ASA and STA
- Rx MO call flow support (AAR -> RAR -> RAA -> AAA)
- Dedicated bearer setup and teardown on Rx call
- Asymmetric geored support
- Configurable redis connection (Unix socket or TCP)
- Basic database upgrade support in tools/databaseUpgrade
- PCSCF state storage in ims_subscriber
- (Experimental) Working horizontal scalability

### Changed

- Split logical functions of PyHSS into 6 service processes
- Logtool no longer handles metric processing
- Updated config.yaml
- Gx CCR-T now flushes PGW / IMS data, depending on Called-Station-Id
- Benchmarked capability of at least ~500 diameter requests per second with a response time of under 2 seconds on a local network.

### Fixed

- Memory leaking in diameter.py
- Gx CCA now supports APN inside a PLMN-based URI
- AVP_Preemption_Capability and AVP_Preemption_Vulnerability now present correctly in all diameter messages
- Crash when webhook or geored endpoints enabled and no peers defined
- CPU overutilization on all services

### Removed

- Multithreading in all services, except for metricService

[1.0.0]: https://github.com/nickvsnetworking/pyhss/releases/tag/v1.0.0
35 changes: 24 additions & 11 deletions README.md
@@ -41,20 +41,28 @@ Basic configuration is set in the ``config.yaml`` file,

You will need to set the IP address to bind to (IPv4 or IPv6), the Diameter hostname, realm, your PLMN and transport type to use (SCTP or TCP).

Once the configuration is done you can run the HSS by running ``hss.py`` and the server will run using whichever transport (TCP/SCTP) you have selected.
The diameter service runs in a trusting mode allowing Diameter connections from any other Diameter hosts.

The service runs in a trusting mode allowing Diameter connections from any other Diameter hosts.
To perform as a functioning HSS, the following services must be run as a minimum:
- diameterService.py
- hssService.py

## Structure
If you're provisioning the HSS for the first time, you'll also want to run:
- apiService.py

The file *hss.py* runs a threaded Sockets based listener (SCTP or TCP) to receive Diameter requests, process them and send back Diameter responses.
The rest of the services aren't strictly necessary; however, your own configuration will dictate whether or not they are required.

Most of the heavy lifting in this is managed by the Diameter class, in ``diameter.py``. This:
## Structure

* Decodes incoming packets (Requests), returning AVPs as an array (called *avp*) and a Dict containing the packet variables (called *packet_vars*)
* Generates responses (Answer messages) to Requests (when provided with the AVP and packet_vars of the original Request)
* Generates Requests to send to other peers
PyHSS uses a queued microservices model. Each service performs a specific set of tasks, and uses redis messages to communicate with other services.

The following services make up PyHSS:
- diameterService.py: Handles receiving and sending of diameter messages, and diameter client connection state.
- hssService.py: Provides decoding and encoding of diameter requests and responses, as well as logic to perform as an HSS.
- apiService.py: Provides the API to allow management of PyHSS.
- georedService.py: Sends geographic redundancy messages to geored peers when defined. Also handles webhook messages.
- logService.py: Handles logging for all services.
- metricService.py: Exposes prometheus metrics from other services.
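
As a rough illustration of the queued model described above, the sketch below passes a single message between two hypothetical services through a Redis list using redis-py. The queue name, payload fields and connection settings are placeholders for illustration only, not PyHSS's actual internal keys.

```python
import json

import redis

# Illustrative only: the queue name and payload are placeholders, not PyHSS's real keys.
QUEUE = 'example:diameter-inbound'

r = redis.Redis(host='localhost', port=6379)

# A producer service (something like diameterService) queues work for another service:
r.lpush(QUEUE, json.dumps({'peer': '10.0.0.1', 'hex_payload': '0100003c...'}))

# A consumer service (something like hssService) blocks until work arrives:
_, raw = r.brpop(QUEUE)
message = json.loads(raw)
print(f"Processing request from {message['peer']}")
```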

## Subscriber Information Storage

@@ -71,12 +79,17 @@ Dependencies can be installed using Pip3:
```shell
pip3 install -r requirements.txt
```

Then after setting up the config, you can fire up the HSS itself by running:
PyHSS also requires [Redis 7.0.0](https://redis.io/docs/getting-started/installation/install-redis-on-linux/) or above.
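
If you want to confirm that requirement before starting the services, a quick check along these lines works (a minimal sketch, assuming redis-py is installed and Redis is listening locally on the default port):

```python
import redis

r = redis.Redis(host='localhost', port=6379)
r.ping()  # raises redis.exceptions.ConnectionError if Redis isn't reachable

version = r.info('server')['redis_version']
print(f"Redis {version}")
assert int(version.split('.')[0]) >= 7, "PyHSS requires Redis 7.0.0 or above"
```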

Then after setting up the config, you can fire up the necessary PyHSS services by running:
```shell
python3 hss.py
python3 diameterService.py
python3 hssService.py
python3 apiService.py
```

All going well you'll have a functioning HSS at this point. For production use, systemd scripts are located in `./systemd`.
The PyHSS API uses Flask, and can be served with your favourite WSGI server.
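
Once apiService.py is running you can sanity-check it over HTTP. Below is a minimal sketch using `requests`; the base URL is an assumption (point it at wherever your API service is bound), while `/oam/diameter_peers` is one of the endpoints introduced in this release:

```python
import requests

# The base URL is an assumption - point it at wherever apiService.py is listening.
BASE_URL = 'http://localhost:8080'

# List the Diameter peers the HSS currently knows about.
response = requests.get(f'{BASE_URL}/oam/diameter_peers', timeout=5)
response.raise_for_status()
print(response.json())
```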

To get everything more production ready, check out [Monit with PyHSS](docs/monit.md) for more info.

49 changes: 29 additions & 20 deletions config.yaml
@@ -30,17 +30,17 @@ hss:
#IMSI of Test Subscriber for Unit Checks (Optional)
test_sub_imsi: '001021234567890'

#Device Watchdog Request Interval (In Seconds - If set to 0 disabled)
device_watchdog_request_interval: 0

#Async Queue Check Interval (In Seconds - If set to 0 disabled)
async_check_interval: 0

#The maximum time to wait, in seconds, before disconnecting a client when no data is received.
client_socket_timeout: 120

#The maximum number of times a failed diameter response/query should be resent before considering the peer offline and terminating its connection
diameter_max_retries: 1
#The maximum time to wait, in seconds, before disconnecting a client when no data is received.
client_socket_timeout: 300

#The maximum time to wait, in seconds, before discarding a diameter request.
diameter_request_timeout: 3

#The amount of time, in seconds, before purging a disconnected client from the Active Diameter Peers key in redis.
active_diameter_peers_timeout: 10

#Prevent updates from being performed without a valid 'Provisioning-Key' in the header
lock_provisioning: False
@@ -68,22 +68,24 @@ hss:
api:
page_size: 200

external:
external_webhook_notification_enabled: False
external_webhook_notification_url: https://api.example.com/webhook
benchmarking:
# Whether to enable benchmark logging
enabled: True
# How often to report, in seconds. Not all benchmarking supports interval reporting.
reporting_interval: 3600

eir:
imsi_imei_logging: True #Store current IMEI / IMSI pair in backend
sim_swap_notify_webhook: http://localhost:5000/webhooks/sim_swap_notify/
no_match_response: 2 #Greylist
tac_database_csv: '/etc/pyhss/tac_database_Nov2022.csv'

logging:
level: DEBUG
level: INFO
logfiles:
hss_logging_file: log/hss.log
diameter_logging_file: log/diameter.log
database_logging_file: log/db.log
hss_logging_file: /var/log/pyhss_hss.log
diameter_logging_file: /var/log/pyhss_diameter.log
geored_logging_file: /var/log/pyhss_geored.log
metric_logging_file: /var/log/pyhss_metrics.log
log_to_terminal: True
sqlalchemy_sql_echo: True
sqlalchemy_pool_recycle: 15
@@ -98,18 +100,25 @@ database:
password: password
database: hss2

## External Webhook Notifications
webhooks:
enabled: False
endpoints:
- http://127.0.0.1:8181

## Geographic Redundancy Parameters
geored:
enabled: False
sync_actions: ['HSS', 'IMS', 'PCRF', 'EIR'] #What event actions should be synced
sync_endpoints: #List of PyHSS API Endpoints to update
endpoints: #List of PyHSS API Endpoints to update
- 'http://hss01.mnc001.mcc001.3gppnetwork.org:8080'
- 'http://hss02.mnc001.mcc001.3gppnetwork.org:8080'
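
When geored is enabled, changes are pushed out to each endpoint listed above. The sketch below only shows the fan-out shape of that behaviour; the path and payload are hypothetical placeholders, not the real geored API contract:

```python
import requests

# Endpoints as configured above; the path and payload below are hypothetical placeholders.
GEORED_ENDPOINTS = [
    'http://hss01.mnc001.mcc001.3gppnetwork.org:8080',
    'http://hss02.mnc001.mcc001.3gppnetwork.org:8080',
]

payload = {'imsi': '001021234567890', 'serving_mme': 'mme01.example.org'}

for endpoint in GEORED_ENDPOINTS:
    try:
        # '/geored/' is a placeholder path - consult the PyHSS API for the real contract.
        requests.patch(f'{endpoint}/geored/', json=payload, timeout=2)
    except requests.RequestException as exc:
        print(f"Geored push to {endpoint} failed: {exc}")
```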

## Stats Parameters
#Redis is required to run PyHSS. A locally running instance is recommended for production.
redis:
enabled: False
clear_stats_on_boot: True
# Whether to use a UNIX socket instead of a TCP connection to redis. Host and port are ignored if useUnixSocket is True.
useUnixSocket: False
unixSocketPath: '/var/run/redis/redis-server.sock'
host: localhost
port: 6379
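
The useUnixSocket toggle above maps directly onto the two connection styles redis-py supports. Below is a minimal sketch of how a service might honour these settings; the config keys mirror the block above, everything else is illustrative:

```python
import redis
import yaml

with open('config.yaml') as f:
    config = yaml.safe_load(f)

redis_config = config.get('redis', {})

if redis_config.get('useUnixSocket', False):
    # Unix socket connection - host and port are ignored, as noted above.
    r = redis.Redis(unix_socket_path=redis_config.get('unixSocketPath', '/var/run/redis/redis-server.sock'))
else:
    r = redis.Redis(host=redis_config.get('host', 'localhost'), port=redis_config.get('port', 6379))

r.ping()
```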
