Simulate file locks in blockio-sim #415

Open
jorisdral wants to merge 5 commits into main
Conversation

@jorisdral (Collaborator) commented Oct 3, 2024

What's neat is that we can now find out in our StateMachine tests whether our implementation is leaking file descriptors!

TODO: the test I added shows that my implementation of file locks is not right just yet... Two ExclusiveLocks work fine, but SharedLocks don't work. I was assuming that mocked file handles don't allow concurrent read and write locks, but it seems they do, so we can't just piggyback on the OpenModes

@jorisdral jorisdral self-assigned this Oct 3, 2024
The simulation piggybacks on existing `HasFS` functionality for opening files in
different `OpenMode`s. The simulation does not simulate inter-process locking.
With the nice addition that we now assert that there are no open file handles at
the end of each iteration of the tests! (A sketch of such an assertion appears below.)
It's better but not perfect. It passes the test, but would fail if there
were genuine concurrency. Doing it properly would need an implementation
in fs-api and fs-sim. Fortunately we do not need a concurrency safe
implementation (for now).
Follow a more normal convention, and allow running individual tests more
easily in GHCi via `quickCheck $ theProperty`.
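
Not part of the PR, but to make the "no open file handles" assertion from the first commit message concrete: a minimal sketch of such a post-condition, assuming fs-sim's `MockFS` exposes an accessor like `numOpenHandles` (the exact name is an assumption here).

```haskell
import qualified System.FS.Sim.MockFS as Mock
import           Test.QuickCheck (Property, counterexample)

-- Post-condition for a test iteration: the mock file system has no handles
-- left open. 'numOpenHandles' is assumed; the real fs-sim accessor may differ.
propNoLeakedHandles :: Mock.MockFS -> Property
propNoLeakedHandles fs =
  counterexample ("leaked handles: " ++ show n) (n == 0)
  where
    n = Mock.numOpenHandles fs
```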
@jorisdral (Collaborator, Author) left a comment:

@dcoutts looks great

--
-- We implement this using the content of the lock file. The content is a
-- counter, positive for readers and negative (specifically -1) for writers.
-- There can be any number of readers, but only one writer.
@jorisdral (Collaborator, Author):

Suggested change
-- There can be any number of readers, but only one writer.
-- There can be any number of readers, but only one writer. Writers can not coexist with readers.

--
-- Warning: This implementation is not robust under concurrent use (because
-- operations on files are not atomic) but should be ok for casual use. A
-- proper implementation would need to be part of the underlying 'HasFs'
@jorisdral (Collaborator, Author):

Suggested change
-- proper implementation would need to be part of the underlying 'HasFs'
-- proper implementation would need to be part of the underlying 'HasFS'

Comment on lines +44 to +47
-- Warning: This implementation is not robust under concurrent use (because
-- operations on files are not atomic) but should be ok for casual use. A
-- proper implementation would need to be part of the underlying 'HasFs'
-- implementations.
@jorisdral (Collaborator, Author):

👍


readCount :: Handle h -> m Int
readCount h = do
  content <- API.hGetSome hfs h 10
@jorisdral (Collaborator, Author):

We should use `hGetAllAt` or something similar, because `hGetSome` can do partial reads.
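
For illustration only (not the PR's code): a loop over `hGetSome` that keeps reading until an empty chunk is returned, so a partial read cannot truncate the counter. The helper name `hGetFull` is made up.

```haskell
import qualified Data.ByteString.Char8 as BS
import           System.FS.API (HasFS, Handle)
import qualified System.FS.API as API

-- Read the rest of the file by looping, since a single hGetSome call may
-- return fewer bytes than requested. Assumes the handle is positioned at the
-- start of the file (seek first otherwise); an empty chunk signals EOF.
hGetFull :: Monad m => HasFS m h -> Handle h -> m BS.ByteString
hGetFull hfs h = go []
  where
    go acc = do
      bs <- API.hGetSome hfs h 1024
      if BS.null bs
        then pure (BS.concat (reverse acc))
        else go (bs : acc)
```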

writeCount :: Handle h -> Int -> m ()
writeCount h n = do
  API.hSeek hfs h AbsoluteSeek 0
  _ <- API.hPutSome hfs h (BS.pack (show n))
@jorisdral (Collaborator, Author):

We should use `hPutAllStrict` or something similar, because `hPutSome` can do partial writes.
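
Similarly, for illustration only (not the PR's code): a loop over `hPutSome` that retries the unwritten suffix until everything is written. The helper name `hPutFull` is made up.

```haskell
import qualified Data.ByteString.Char8 as BS
import           System.FS.API (HasFS, Handle)
import qualified System.FS.API as API

-- Write the whole ByteString by looping, since a single hPutSome call may
-- write fewer bytes than given; the remaining suffix is retried.
hPutFull :: Monad m => HasFS m h -> Handle h -> BS.ByteString -> m ()
hPutFull hfs h = go
  where
    go bs
      | BS.null bs = pure ()
      | otherwise  = do
          n <- API.hPutSome hfs h bs
          go (BS.drop (fromIntegral n) bs)
```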

Comment on lines +38 to +48
-- | Lock files are reader\/writer locks.
--
-- We implement this using the content of the lock file. The content is a
-- counter, positive for readers and negative (specifically -1) for writers.
-- There can be any number of readers, but only one writer.
--
-- Warning: This implementation is not robust under concurrent use (because
-- operations on files are not atomic) but should be ok for casual use. A
-- proper implementation would need to be part of the underlying 'HasFs'
-- implementations.
--
@jorisdral (Collaborator, Author):

We should also maybe warn that the simulation only defines how `tryLockFile`s on the same path interact. A regular `hOpen` on the same path will still work.
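
To make the counter convention concrete, here is a minimal sketch (not the PR's implementation) of how acquisition could be decided from the counter. The counter reader/writer are passed in as helpers in the spirit of `readCount`/`writeCount` from the diff above; `LockMode` is defined locally for illustration.

```haskell
-- Counter convention: 0 = unlocked, n > 0 = n readers, -1 = one writer.
data LockMode = SharedLock | ExclusiveLock

tryAcquire :: Monad m
           => m Int          -- ^ read the current counter (cf. readCount)
           -> (Int -> m ())  -- ^ overwrite the counter (cf. writeCount)
           -> LockMode
           -> m Bool         -- ^ True if the lock was acquired
tryAcquire readC writeC mode = do
  n <- readC
  case mode of
    SharedLock
      | n >= 0    -> writeC (n + 1) >> pure True  -- one more reader
      | otherwise -> pure False                   -- a writer holds the lock
    ExclusiveLock
      | n == 0    -> writeC (-1) >> pure True     -- become the sole writer
      | otherwise -> pure False                   -- readers or a writer exist
```

This also illustrates the warning above: the read-then-write is not atomic, so under genuine concurrency two processes could both read 0 and both "acquire" the lock.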

@jorisdral jorisdral marked this pull request as ready for review October 4, 2024 15:09