NC | NSFS | Bucket Policy Should Be Managed Only by Bucket Owner #8288
@shirady have you tried it against AWS?
@romayalon, today I tried it with Sagi: {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "AWS": [ "ARN-of-user-in-different-account" ] },
"Action": [ "s3:*" ],
"Resource": [ "arn:aws:s3:::shira-bucket/*",
"arn:aws:s3:::shira-bucket" ]
}
]
}

Summary
Bucket policy actions output:

Thanks @sagihirshfeld
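For reference, the policy quoted above can be saved to a file and validated before applying it. This is a sketch: the "ARN-of-user-in-different-account" placeholder from the comment is kept as-is, and the final put-bucket-policy call is shown commented out because it needs a running endpoint.

```shell
# Save the cross-account policy from the comment above to a file.
# Substitute a real ARN for the placeholder before actually applying it.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": [ "ARN-of-user-in-different-account" ] },
      "Action": [ "s3:*" ],
      "Resource": [
        "arn:aws:s3:::shira-bucket/*",
        "arn:aws:s3:::shira-bucket"
      ]
    }
  ]
}
EOF
# Confirm the file is valid JSON before applying it.
python3 -m json.tool policy.json > /dev/null && echo "policy.json OK"
# Then apply it against a running endpoint:
# aws s3api put-bucket-policy --bucket shira-bucket --policy file://policy.json
```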
@romayalon, I will share what I saw in the containerized deployment.

Testing Details:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "AWS": "user1" },
"Action": [ "s3:*" ],
"Resource": [ "arn:aws:s3:::bucket-user-2/*",
"arn:aws:s3:::bucket-user-2" ]
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "AWS": "*" },
"Action": [ "s3:*" ],
"Resource": [ "arn:aws:s3:::bucket-user-2/*",
"arn:aws:s3:::bucket-user-2" ]
}
]
}
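The two policies above differ only in their Principal: the first names user1, while the second uses the wildcard "*", which matches any principal. A small sketch (the file name is arbitrary) that flags the wildcard case:

```shell
# Write the wildcard-principal policy from above to a file.
cat > policy-wildcard.json <<'EOF'
{ "Version": "2012-10-17",
  "Statement": [ { "Effect": "Allow",
                   "Principal": { "AWS": "*" },
                   "Action": [ "s3:*" ],
                   "Resource": [ "arn:aws:s3:::bucket-user-2/*",
                                 "arn:aws:s3:::bucket-user-2" ] } ] }
EOF
# Flag a wildcard principal, which grants access to any principal
# rather than a specific account.
python3 - <<'EOF'
import json
policy = json.load(open("policy-wildcard.json"))
principal = policy["Statement"][0]["Principal"]["AWS"]
print("anonymous-access" if principal == "*" else f"limited to {principal}")
EOF
```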
Environment info
Actual behavior
Expected behavior
Steps to reproduce
Create 2 accounts and give account2 full access through a bucket policy. Then, as account2, try to put the bucket policy and to remove it.

Create account1 with the CLI:
sudo node src/cmd/manage_nsfs account add --name shira-1001 --new_buckets_path /tmp/nsfs_root1 --access_key <access-key> --secret_key <secret-key> --uid <uid> --gid <gid>

Create account2 with the CLI:
sudo node src/cmd/manage_nsfs account add --name shira-1002 --new_buckets_path /tmp/nsfs_root2 --access_key <access-key> --secret_key <secret-key> --uid <uid> --gid <gid>

Note: before creating the account, you need to give permission to the new_buckets_path:
chmod 777 /tmp/nsfs_root2

Create a bucket owned by account1:
sudo node src/cmd/manage_nsfs bucket add --name my-bucket --path /tmp/nsfs_root1/my-bucket --owner shira-1001
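The chmod note above can be sanity-checked with a short sketch (using a scratch directory rather than the real /tmp/nsfs_root2):

```shell
# Prepare a new_buckets_path before creating its account, per the note above.
# /tmp/nsfs_demo_buckets is a stand-in for /tmp/nsfs_root2.
BUCKETS_PATH=/tmp/nsfs_demo_buckets
mkdir -p "$BUCKETS_PATH"
chmod 777 "$BUCKETS_PATH"
# Show the resulting mode bits.
ls -ld "$BUCKETS_PATH" | cut -c1-10   # → drwxrwxrwx
```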
Run the NSFS server:
sudo node src/cmd/nsfs --debug 5
Notes:
- Added process.env.NOOBAA_LOG_LEVEL = 'nsfs'; in endpoint.js (before the condition if (process.env.NOOBAA_LOG_LEVEL) {).
- Set config.NSFS_CHECK_BUCKET_BOUNDARIES = false; //SDSD because I'm using /tmp/ and not /private/tmp/.

Set aliases for the two accounts:
alias nc-user-1-s3='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:6443'
alias nc-user-2-s3='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:6443'

List buckets as account2:
nc-user-2-s3 s3 ls
Put the bucket policy as account1 (the bucket owner):
nc-user-1-s3 s3api put-bucket-policy --bucket my-bucket --policy file://policy.json
policy.json:

List buckets as account2:
nc-user-2-s3 s3 ls

Put the bucket policy as account2:
nc-user-2-s3 s3api put-bucket-policy --bucket my-bucket --policy file://policy.json

Then, remove the bucket policy as account2:
nc-user-2-s3 s3api delete-bucket-policy --bucket my-bucket
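The rule in the title amounts to an ownership check: bucket-policy operations should succeed only when the requesting account is the bucket owner. A minimal sketch using the account names from the steps above (this is an illustration of the expected rule, not the actual NooBaa implementation):

```shell
# Sketch of the owner-only rule the issue title describes.
bucket_owner="shira-1001"
for requester in shira-1001 shira-1002; do
  if [ "$requester" = "$bucket_owner" ]; then
    echo "$requester: put/delete-bucket-policy allowed"
  else
    echo "$requester: AccessDenied"
  fi
done
```

Under this rule, account2 (shira-1002) would get AccessDenied on both the put and the delete in the steps above.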
More information - Screenshots / Logs / Other output