
upgrade: Warn and sleep if we find a deprecated v0 format container #4084

Merged: 4 commits into coreos:main on Oct 12, 2022

Conversation

cgwalters (Member)

This is prep for ostreedev/ostree-rs-ext#332
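For context, the new behavior is a guard on the upgrade path: if the pulled container image still uses the deprecated v0 encapsulation format, print a warning and pause so it isn't missed. Below is a minimal Rust sketch of that shape; the `ContainerImageMeta` type, `format_version` field, and sleep duration are illustrative stand-ins, not the actual rpm-ostree/ostree-rs-ext API.

```rust
use std::thread::sleep;
use std::time::Duration;

/// Illustrative pause length; the actual delay in rpm-ostree may differ.
const V0_DEPRECATION_SLEEP: Duration = Duration::from_secs(10);

/// Stand-in for the image metadata the real code obtains via ostree-rs-ext.
struct ContainerImageMeta {
    format_version: u32,
}

/// Hypothetical check: the old v0 encapsulation reports format version 0.
fn is_v0_format(image: &ContainerImageMeta) -> bool {
    image.format_version == 0
}

/// Warn loudly and sleep so the deprecation notice isn't scrolled away
/// during an otherwise-noisy upgrade.
fn warn_on_deprecated_format(image: &ContainerImageMeta) {
    if is_v0_format(image) {
        eprintln!(
            "warning: this container image uses the deprecated v0 \
             format; support will be removed in a future release"
        );
        sleep(V0_DEPRECATION_SLEEP);
    }
}

fn main() {
    let image = ContainerImageMeta { format_version: 0 };
    warn_on_deprecated_format(&image);
}
```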

jlebon previously approved these changes Oct 11, 2022
jlebon enabled auto-merge October 11, 2022 14:34
cgwalters (Member, Author)

OK, not entirely sure what's going on with the Jenkins CI. My best guess is that we're hitting OOM issues, but I'm not seeing that offhand in the pods or events.

jlebon previously approved these changes Oct 11, 2022
jlebon (Member) commented Oct 11, 2022

Yeah, it's possibly an effect of coreos/coreos-ci-lib#116. I've added another commit on top.

cgwalters (Member, Author)

OK, that got us farther. The next thing I think we're hitting, though, is the growth in Fedora repodata cutting against our default kola 1G memory limits...

cgwalters (Member, Author)

OK, that time the Rust build got killed by OOM. We're hardcoding the jobs to 5, which seems like it should be enough... needs debugging.

jlebon (Member) commented Oct 12, 2022

I think vmcheck timed out because 30 mins is no longer enough for the lower parallelism. Personally, I think it's fine to keep nhosts = 5 and just do the s/1024/1536/ bit.

jlebon and others added 3 commits October 12, 2022 09:30
We've switched to also setting a memory limit in coreos-ci-lib:
coreos/coreos-ci-lib#116

It looks like we're not requesting enough memory for the RPM build.
Let's bump it to 4Gi and lower parallelism by 1.
cgwalters (Member, Author)

/override ci/prow/fcos-e2e
We have previous passes here, and I think our Prow jobs running on build02 may be slow because we broke autoscaling trying the latest build there

jlebon merged commit 5448ebd into coreos:main Oct 12, 2022
openshift-ci bot commented Oct 12, 2022

@cgwalters: Overrode contexts on behalf of cgwalters: ci/prow/fcos-e2e

In response to this:

/override ci/prow/fcos-e2e
We have previous passes here, and I think our Prow jobs running on build02 may be slow because we broke autoscaling trying the latest build there

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
