
Access Denied for S3 buckets is not reported #8240

Open
Roushan45 opened this issue Jul 30, 2024 · 1 comment
Roushan45 commented Jul 30, 2024

Environment info

  • NooBaa Version: noobaa-core-5.15.4-20240704.el9.x86_64
  • Platform: RHEL9

Actual behavior

See the output below (note in particular the d--------- mode on bucket-new-1 and the empty invalid_buckets list). The directory permission for the user directory s3user-1-dir is 770, and the bucket directory associated with it, bucket-new-1, was chmod'ed to 000 (no permissions at all). Why, then, is there no invalid bucket [ACCESS_DENIED] in the output of the health command /usr/local/noobaa-core/bin/node /usr/local/noobaa-core/src/cmd/health --all_bucket_details | jq | less?
If I do the same to s3user-1-dir itself, I do get ACCESS_DENIED, but for the bucket directory there is no error in the same scenario.
[root@cluster4-41 ~]# ll /mnt/cesSharedRoot/
...
drwxrwx---. 3 10 wheel 4096 Jul 26 19:03 s3user-10-dir
drwxrwx---. 3 bin bin 4096 Jul 26 19:02 s3user-1-dir
drwxrwx---. 3 daemon daemon 4096 Jul 26 19:03 s3user-2-dir
drwxrwx---. 3 adm sys 4096 Jul 26 19:03 s3user-3-dir
...
[root@cluster4-41 ~]# ll /mnt/cesSharedRoot/s3user-1-dir
total 1
d---------. 2 bin bin 4096 Jul 26 19:02 bucket-new-1
[root@cluster4-41 ~]# /usr/local/noobaa-core/bin/node /usr/local/noobaa-core/src/cmd/health --all_bucket_details | jq | less
{
  "service_name": "noobaa",
  "status": "OK",
  "memory": "267.6M",
  "checks": {
    "services": [
      {
        "name": "noobaa",
        "service_status": "active",
        "pid": "71215",
        "error_type": "PERSISTENT"
      }
    ],
    "endpoint": {
      "endpoint_state": {
        "response": {
          "response_code": "RUNNING",
          "response_message": "Endpoint running successfuly."
        },
        "total_fork_count": 2,
        "running_workers": [
          2,
          1
        ]
      },
      "error_type": "TEMPORARY"
    },
    "buckets_status": {
      "invalid_buckets": [],            <-- no invalid bucket
      "valid_buckets": [
        {
          "name": "bucket-1",
          "storage_path": "/mnt/cesSharedRoot/s3user-1-dir/bucket-new-1"
        },
        {
          "name": "bucket-6",
          "storage_path": "/mnt/cesSharedRoot/s3user-6-dir/bucket-new-6"
        },
        {
          "name": "bucket-9",
          "storage_path": "/mnt/cesSharedRoot/s3user-9-dir/bucket-new-9"
        },

Expected behavior

  1. A bucket whose storage path has no access permissions should be reported under invalid_buckets in the health CLI output.

Steps to reproduce

  1. Manually change the permissions of the bucket folder to 000 (chmod 000); a small sketch of this step follows the list.
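
For illustration, a minimal Node.js sketch of this step (the path is the one from this report; this is not NooBaa code):

// Reproduction sketch: clear all permission bits on the bucket's storage
// directory, equivalent to `chmod 000 <bucket dir>`.
import * as fs from 'node:fs';

const bucket_dir = '/mnt/cesSharedRoot/s3user-1-dir/bucket-new-1'; // path taken from this report

fs.chmodSync(bucket_dir, 0o000);
// Print the resulting permission bits; "0" is expected after the chmod.
console.log((fs.statSync(bucket_dir).mode & 0o777).toString(8));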

More information - Screenshots / Logs / Other output

@romayalon
Contributor

Thank you @Roushan45.
We can definitely add a check for that. Initially we only did an existence check; I then also added a permissions check for the account's new_buckets_path, so we need to add a permissions check for the bucket's path as well.
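
A minimal sketch of what such a bucket-path permissions check could look like, assuming a hypothetical helper invoked by the health CLI for each bucket's storage_path (the actual NooBaa implementation may differ, e.g. it may need to check access as the account's uid/gid rather than the CLI's own user):

// Hypothetical helper, not the actual NooBaa code. Returns an error code
// string for the health report, or undefined if the path is accessible.
import { promises as fs, constants } from 'node:fs';

async function check_bucket_storage_path(storage_path: string): Promise<string | undefined> {
    try {
        // The endpoint must be able to read and traverse the bucket directory.
        await fs.access(storage_path, constants.R_OK | constants.X_OK);
        return undefined;
    } catch (err: any) {
        if (err.code === 'ENOENT') return 'STORAGE_NOT_EXIST';   // illustrative code name
        if (err.code === 'EACCES' || err.code === 'EPERM') return 'ACCESS_DENIED';
        return 'UNKNOWN_ERROR';                                   // illustrative code name
    }
}

// Example: bucket-new-1 from this report would be flagged as ACCESS_DENIED
// when its directory mode is 000 (assuming the check does not run with
// CAP_DAC_OVERRIDE, which lets root bypass permission bits).
check_bucket_storage_path('/mnt/cesSharedRoot/s3user-1-dir/bucket-new-1')
    .then(code => console.log(code ?? 'OK'));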
