
Using Examples with a dataset like ML-10M #111

Open
afcarvalho1991 opened this issue Oct 28, 2015 · 11 comments

@afcarvalho1991
Contributor

Hello,

I'm using a large-scale dataset such as ML-10M or Netflix, and I find that the DataModel<Long,Long> object takes up too much space; in fact, I run out of memory even before everything is loaded into the DataModel<Long,Long> structure. I removed the timestamp variable from all samples, but it didn't do the trick.

Is it me, or is this expected to happen? I have 16GB of RAM, which should be more than sufficient to load a sparse matrix into memory, even for the Netflix "problem", whose training set is ~3GB.
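For perspective on why the nested-map representation is so heavy, here is a rough, hypothetical back-of-the-envelope estimate (the exact numbers depend on the JVM; these assume a 64-bit HotSpot JVM with compressed oops):

```java
public class FootprintEstimate {
    // Rough per-rating cost of Map<Long, Map<Long, Double>> on a 64-bit JVM
    // (compressed oops): HashMap.Entry ~32 B + table slot ~8 B,
    // plus boxed Long key (~24 B) and boxed Double value (~24 B).
    static long boxedBytesPerRating() {
        return 32 + 8 + 24 + 24; // ~88 bytes
    }

    // Packed primitive arrays: one int item id + one float rating,
    // with per-user offsets amortised across many ratings.
    static long packedBytesPerRating() {
        return 4 + 4; // ~8 bytes
    }

    public static void main(String[] args) {
        long ratings = 10_000_000L; // ML-10M
        System.out.printf("boxed:  ~%d MB%n", (ratings * boxedBytesPerRating()) >> 20);
        System.out.printf("packed: ~%d MB%n", (ratings * packedBytesPerRating()) >> 20);
    }
}
```

That is roughly 840 MB for the boxed ratings alone, before the user/item index maps and the temporary garbage generated while loading, so exhausting a heap well before 16GB is plausible.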

Thanks,
André

@abellogin
Member

Hi André,
That situation can happen. So far, we have used the framework with datasets of various kinds, but most of them were not too large.

We are aware of this fact and that the current data models are not very generic (check issues #83 and #103). Probably the best solution at the moment would be to implement your own DataModel or to rely on other frameworks already optimised for large datasets (such as RankSys [ http://ranksys.org/ ]).

Thank you for the interest, and let us know whatever works for you, so we can work on that in the future.
Alex

@saulvargas

Hi all,

Thank you @abellogin for mentioning our framework. Indeed, by generalising your DataModel, it would be easy for you to add more efficient implementations (or even to plug in RankSys' ones).

Just to give you some perspective, I have measured the memory footprint of various implementations of the RankSys PreferenceData interface (equivalent to RiVal's DataModel) using the ML-10M dataset. First, I created an implementation equivalent to the current DataModel (RiValPreferenceData). Then, I used the two publicly available implementations in RankSys 0.3 (SimplePreferenceData and SimpleFastPreferenceData). Finally, I am including the results of our recent RecSys '15 poster, which applies state-of-the-art compression techniques and whose implementations will be published (hopefully soon) in RankSys 0.4.

The results are the following:

Data model                  Memory
RiValPreferenceData         1,969.2 MB
SimplePreferenceData        1,261.3 MB
SimpleFastPreferenceData      810.7 MB
compression - none            165.5 MB
compression - FOR              42.3 MB

As you can see, there is ample room for improvement over the two-nested-maps representation. I am planning to publish these and other observations in a blog post once RankSys 0.4 has been released.
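The "compression - FOR" row refers to frame-of-reference encoding of the gaps between sorted ids. As a hypothetical single-frame illustration (not RankSys' actual implementation), the gaps of a sorted item-id list can be bit-packed at the minimum width needed for the largest gap:

```java
import java.util.Arrays;

public class ForEncoding {
    // Encode a sorted (ascending) array of non-negative item ids as gaps,
    // packed at the minimum bit width needed for the largest gap.
    static long[] encode(int[] sortedIds) {
        int n = sortedIds.length;
        int[] gaps = new int[n];
        int prev = 0, maxGap = 0;
        for (int i = 0; i < n; i++) {
            gaps[i] = sortedIds[i] - prev;
            prev = sortedIds[i];
            maxGap = Math.max(maxGap, gaps[i]);
        }
        int width = Math.max(1, 32 - Integer.numberOfLeadingZeros(maxGap));
        long[] packed = new long[(n * width + 63) / 64 + 1];
        packed[0] = ((long) n << 8) | width; // header word: count and bit width
        for (int i = 0; i < n; i++) {
            long bit = (long) i * width;
            int word = 1 + (int) (bit >>> 6), off = (int) (bit & 63);
            packed[word] |= (long) gaps[i] << off;
            if (off + width > 64) // gap straddles a word boundary
                packed[word + 1] |= (long) gaps[i] >>> (64 - off);
        }
        return packed;
    }

    static int[] decode(long[] packed) {
        int width = (int) (packed[0] & 0xFF);
        int n = (int) (packed[0] >>> 8);
        long mask = (1L << width) - 1;
        int[] ids = new int[n];
        int prev = 0;
        for (int i = 0; i < n; i++) {
            long bit = (long) i * width;
            int word = 1 + (int) (bit >>> 6), off = (int) (bit & 63);
            long gap = packed[word] >>> off;
            if (off + width > 64) // recover the high bits from the next word
                gap |= packed[word + 1] << (64 - off);
            prev += (int) (gap & mask);
            ids[i] = prev;
        }
        return ids;
    }

    public static void main(String[] args) {
        int[] items = {3, 7, 8, 42, 100, 5000};
        System.out.println(Arrays.equals(items, decode(encode(items))));
    }
}
```

With ML-10M's mostly small gaps, a handful of bits per rating instead of 32 is what makes the order-of-magnitude reduction in the table plausible.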

Just one more thing: RankSys 0.4 will be released under a much more relaxed license. That should allow its usage in other projects without requiring them to be licensed under the GPL (as is currently the case).

@afcarvalho1991
Contributor Author

Thank you both @abellogin and @saulvargas !

I'll have a look at RankSys; I think it fits my needs!

I will also have a look at your work "Analyzing Compression Techniques for In-Memory Collaborative Filtering" and hopefully will use it for mine.

@afcarvalho1991
Contributor Author

Hi @abellogin,

If you want, I can send you an adaptation of the CrossValidationSplitter<U, I> that I developed. I named it CrossValidationSplitterIterative<U, I>; instead of caching 5 folds in memory, I compute one fold at a time and write it promptly to file (test or train, respectively).

Let me know if you want it.
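A minimal sketch of that idea (hypothetical names, not the actual RiVal class): for each fold, stream the ratings file once, assign each line to a fold with a fixed random seed, and write it straight to that fold's train or test file, so no fold is ever held in memory.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Random;

public class IterativeCrossValidationSplitter {
    // Streams the ratings file once per fold; reseeding the RNG with the
    // same seed on every pass keeps the per-line fold assignment consistent,
    // so the nFolds test files partition the input.
    public static void split(File ratings, File outDir, int nFolds, long seed)
            throws IOException {
        for (int fold = 0; fold < nFolds; fold++) {
            Random rnd = new Random(seed);
            try (BufferedReader in = new BufferedReader(new FileReader(ratings));
                 PrintWriter train = new PrintWriter(
                         new FileWriter(new File(outDir, "train_" + fold + ".tsv")));
                 PrintWriter test = new PrintWriter(
                         new FileWriter(new File(outDir, "test_" + fold + ".tsv")))) {
                String line;
                while ((line = in.readLine()) != null) {
                    int assigned = rnd.nextInt(nFolds); // fold this rating belongs to
                    (assigned == fold ? test : train).println(line);
                }
            }
        }
    }
}
```

The trade-off is nFolds sequential passes over the input in exchange for O(1) memory, which is exactly what makes large datasets like ML-10M feasible on a 16GB machine.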

@abellogin
Member

Sure @afcarvalho1991, you can do a pull request or upload it somewhere and paste the URL here. I think it would be useful to have an intermediate class that does not handle everything in memory for the n-fold case.

Thank you!
Alex

@abellogin
Member

Related to #60, since strategies take a lot of memory when loading from file.
One solution would be to change the recommendation step as it works right now, where all the recommendations are generated and dumped into a file. Instead, only the recommendations needed for a strategy would be generated.
Also related to #83.

@alansaid
Member

@afcarvalho1991 can you please do a pull request with your code and we'll see if we can merge it?

@afcarvalho1991
Contributor Author

afcarvalho1991 commented Nov 28, 2016

@alansaid Within the next two days, no problem.
I also noticed that there are now slight modifications to the DataModel structure in the master branch.

Edit:
I'm now able to run the ML-10M dataset on a computer with 16GB of RAM with no problem. I also discarded the timestamps, but that is optional.

@abellogin
Member

That's great news! I assume it is because of the latest changes, which make use of the RankSys data representation. /cc @saulvargas

@afcarvalho1991
Contributor Author

afcarvalho1991 commented Nov 29, 2016

Hello, I pulled the latest version from the master branch. I now have a local branch with my modifications; how can I perform a pull request? Can you help me?

My contribution is the implementation of an iterative CrossValidationSplitter and a working test, modified from CrossValidatedMahoutKNNRecommenderEvaluator to create CrossValidatedIterativeMahoutKNNRecommenderEvaluator.

Also, I would like to let you know that I was unable to execute CrossValidatedMahoutKNNRecommenderEvaluator; perhaps you need to review this class, as there seems to be some sort of problem with the timestamps table.

Thank you,
André

@abellogin
Member

Hi André,

I guess it depends on whether you forked the repository and made your changes there (the preferred case), or whether you simply cloned the repository and your changes are on the base branch.
You can find more info about this here.

Alex

PS: I will check CrossValidatedMahoutKNNRecommenderEvaluator ASAP, thanks for noticing.
