
set createTime per node and purge old nodes if maxNodes is reached #57

Closed · wants to merge 5 commits
Conversation

@btkador commented Aug 6, 2018

fixes #56

@nictuku (Owner) left a comment:

This is great! Just a couple of minor comments.

@@ -512,6 +512,10 @@ func (d *DHT) needMoreNodes() bool {
 	return n < minNodes || n*2 < d.config.MaxNodes
 }
 
+func (d *DHT) GetNumNodes() int {
+	return d.routingTable.numNodes()
@nictuku (Owner):

Do we really need this? I don't think we do and I think it's better to only expose methods when we really need to. Besides, I think this isn't safe to be used concurrently while the DHT is running?
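For context, a minimal sketch of what a concurrency-safe counter could look like, assuming the routing table gained its own mutex (the mutex and the *remoteNode element type are illustrative assumptions; the current code offers no such guarantee):

import "sync"

type routingTable struct {
	mu        sync.Mutex
	addresses map[string]*remoteNode // guarded by mu
	// ... remaining fields unchanged
}

// numNodes is safe to call from any goroutine, provided every other
// access to addresses also takes mu.
func (r *routingTable) numNodes() int {
	r.mu.Lock()
	defer r.mu.Unlock()
	return len(r.addresses)
}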

@btkador (Author):

Agree. It was just useful for my debugging.

@@ -206,6 +206,12 @@ func (r *routingTable) cleanup(cleanupPeriod time.Duration, p *peerStore) (needP
 			r.kill(n, p)
 			continue
 		}
+		// kill old and currently unused nodes if nodeCount is > maxNodes
+		if len(r.addresses) > p.maxNodes && time.Since(n.createTime) > cleanupPeriod && len(n.pendingQueries) == 0 {
@nictuku (Owner):

I'd say we should kill the node even if there are pending queries. If it's so old, better to refresh the routing table with newer nodes?

@btkador (Author):

We may have just sent out a query to that specific node a second ago, because it has the nearest distance to the searched hash, so we don't want to lose that result?
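For illustration, a sketch of one possible middle ground between the two positions above (hypothetical; not code from this PR): purge old idle nodes as the patch does, but also purge very old nodes even if a query is in flight, so a stale node cannot pin itself in the table forever. The 4x multiplier is an arbitrary placeholder.

if len(r.addresses) > p.maxNodes {
	age := time.Since(n.createTime)
	idle := len(n.pendingQueries) == 0
	// Purge when old and idle, or when so old that any in-flight
	// reply is unlikely to arrive anyway.
	if (age > cleanupPeriod && idle) || age > 4*cleanupPeriod {
		r.kill(n, p)
		continue
	}
}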

@nictuku (Owner) left a comment:

Does it stop forever after searching that many times? Should that be a maximum rate instead? Like, X queries per minute or so? My concern is that the failure mode here is a DHT that gets stuck forever and can't recover. Unless I'm missing something.
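For illustration, a sketch of the "maximum rate" alternative using golang.org/x/time/rate (a hypothetical design, not code from this PR): each search gets a limiter, and over-limit get_peers queries are merely deferred instead of being stopped forever by a hard MaxSearchQueries cutoff.

package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Roughly 10 queries per minute with a small burst; the numbers
	// are placeholders. A real design would keep one limiter per infohash.
	limiter := rate.NewLimiter(rate.Every(6*time.Second), 3)

	for i := 0; i < 5; i++ {
		if limiter.Allow() {
			fmt.Println("send get_peers query", i)
		} else {
			fmt.Println("over rate limit; retry on a later pass", i)
		}
	}
}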

@nictuku (Owner) left a comment:

I don't think I agree with this patch - or I don't understand it. There is already NumTargetPeers to control this. Why don't you set a lower value for it if you don't want a super aggressive node? That's the point of that attribute after all :-).
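A usage sketch of that suggestion, using NewConfig from the diff below and the NumTargetPeers field named above (dht.New and the log import are assumptions here, for illustration only):

cfg := dht.NewConfig()
cfg.NumTargetPeers = 5 // ask for fewer peers per search: a less aggressive node
d, err := dht.New(cfg)
if err != nil {
	log.Fatal(err)
}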

@@ -105,13 +111,16 @@ func NewConfig() *Config {
 		MaxNodes:         500,
 		CleanupPeriod:    15 * time.Minute,
 		SaveRoutingTable: true,
+		PassivMode:       false,
@nictuku (Owner):

This is redundant with the RateLimit, right?

(In English we would spell it Passive, I think.)
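Similarly, a usage sketch of throttling via the existing RateLimit field instead of a separate passive mode (the field exists per the comment above; treating its unit as packets per second is an assumption):

cfg := dht.NewConfig()
cfg.RateLimit = 100 // assumed unit: cap processing at ~100 packets/sec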

+	// max get_peers requests per hash to prevent an infinite loop
+	MaxSearchQueries int
+	// number of concurrent listeners on the same port
+	ConnPoolSize int
@nictuku (Owner):

I don't think this will work? The code is not safe for concurrent use, right? If you want to use multiple goroutines, you need different DHT instances.
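A sketch of the "different DHT instances" approach: one instance per goroutine, each on its own port (Port as a Config field, dht.New, and Run are assumptions here, for illustration only):

for i := 0; i < 4; i++ {
	cfg := dht.NewConfig()
	cfg.Port = 8445 + i // one UDP port per instance
	d, err := dht.New(cfg)
	if err != nil {
		log.Fatal(err)
	}
	go d.Run() // each DHT runs independently in its own goroutine
}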

@btkador (Author):

As mentioned by email, I broke my fork by committing testing code and have no clue how to revert.

@btkador closed this Aug 9, 2018

@btkador (Author) commented Aug 9, 2018

Discarded because it's broken. Will create a new one.

Linked issue: high memory usage under heavy load (#56)