This repository has been archived by the owner on Aug 14, 2024. It is now read-only.

Adding Bar Graph Visualization for games/applications #24

Open · wants to merge 10 commits into main

Conversation

danielleodigie (Collaborator) commented:

This pull request adds bar graph visualization for the applications the QxQ interns are working on. I added instructions to the README; to run it after generating your results JSON:

python visualization/view.py <JSON FILE> <DATA TYPE>

For example:

python visualization/view.py results/0001_bench.json fidelity

Currently, fidelity and meantime are the only available data types, but I'm open to adding more; I just wasn't sure which ones would be useful. If you have any suggestions, that would help a ton! :)

Also, I noticed it doesn't work as well with the mapping benchmarks, so any help with that would be awesome.

@danielleodigie (Collaborator, Author) commented:

This is what I get when I run the results from run_bv.py with fidelity:

[screenshot: fidelity bar graph from the run_bv.py results]

@Yelissal (Contributor) left a comment:

I ran your code on my computer and everything looks great! I checked your fidelities and they are in a range similar to the paper's. Good job!

@Lementknight (Contributor) left a comment:

The benchmark-name display for the fidelity graph breaks down when you have a JSON file with data from multiple benchmarks. Could you rearrange the layout to deal with this?
[screenshot: Figure_1, fidelity graph with overlapping benchmark names]

@Lementknight (Contributor) commented:

This is the data from the JSON I used:
data_table.txt

@Lementknight (Contributor) commented Aug 3, 2022:

I used the visualization code and it no longer works. I noticed that you said you added dual file comparison, but that wasn't the issue I was experiencing. The issue I had was with visualizing multiple fidelities in one JSON file. When you run multiple benchmarks together in one call:
pytest red_queen/games/applications/grovers.py red_queen/games/applications/run_bv.py red_queen/games/applications/run_ft.py --store
The JSON file has the fidelities of each individual benchmark, so the code you had prior to the dual-file-comparison commit struggled to visualize multiple benchmark names.

Your code was perfectly fine; it just needed to be slightly modified to deal with multiple concurrent benchmark names in the fidelity data. I hope this comment offers clarity on my issue.
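(For readers following along: a combined run like the one above stores everything in a single results JSON whose "benchmarks" list mixes entries from all the files. A minimal sketch of grouping fidelities by benchmark name, assuming the stats / quality / fidelity layout quoted later in this review; the example path comes from the PR description:)

```python
import json
from collections import defaultdict

# Load one results file produced by a combined pytest run.
with open("results/0001_bench.json") as f:  # example path from the PR description
    contents = json.load(f)

# Several benchmark names share one file, so group fidelities by name
# before averaging rather than assuming one name per file.
fids = defaultdict(list)
for bench in contents["benchmarks"]:
    fids[bench["name"]].append(bench["stats"]["quality"]["fidelity"])

for name, values in fids.items():
    print(name, sum(values) / len(values))
```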

@danielleodigie (Collaborator, Author) commented:

> I used the visualization code and it no longer works. I noticed that you said you added dual file comparison, but that wasn't the issue I was experiencing. The issue I had was with visualizing multiple fidelities in one JSON file. When you run multiple benchmarks together in one call:
>
> pytest red_queen/games/applications/grovers.py red_queen/games/applications/run_bv.py red_queen/games/applications/run_ft.py --store
>
> The JSON file has the fidelities of each individual benchmark, so the code you had prior to the dual-file-comparison commit struggled to visualize multiple benchmark names.
>
> Your code was perfectly fine; it just needed to be slightly modified to deal with multiple concurrent benchmark names in the fidelity data. I hope this comment offers clarity on my issue.

Ahh, okay, I didn't know you could run multiple benchmarks at once; I didn't write this with that in mind. So your problem is that the fidelity graph breaks down when you have multiple different benchmarks in one JSON file?

@Lementknight (Contributor) commented:

It's more so that the names of the benchmarks aren't displayed in a legible manner.
[screenshot: Figure_1, fidelity graph with illegible benchmark names]
I believe that this could be fixed by redesigning the graph structure, since the data displayed in the visualization is great.

@danielleodigie (Collaborator, Author) commented:

> It's more so that the names of the benchmarks aren't displayed in a legible manner.
>
> [screenshot: Figure_1, fidelity graph with illegible benchmark names]
>
> I believe that this could be fixed by redesigning the graph structure, since the data displayed in the visualization is great.

Oh, it's just how it looks. I rotated the names as shown; to see the bottom you may have to go into the subplot config dialog, but that's alright.
[screenshot: fidelity graph with rotated benchmark names]
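(Side note: matplotlib's tight_layout usually makes rotated labels fit without opening the subplot config dialog. A minimal sketch with placeholder values; the names are just the benchmarks from the pytest call above:)

```python
import matplotlib.pyplot as plt

# Placeholder bars; real values would come from the results JSON.
plt.bar(["grovers", "run_bv", "run_ft"], [0.9, 0.85, 0.8])
plt.xticks(rotation=45, ha="right")  # angle the long benchmark names
plt.tight_layout()  # recompute margins so rotated labels are not clipped
plt.show()
```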

@mtreinish (Member) left a comment:

I've just put a few comments inline. You'll need to git mv the file into the red_queen directory. The reason lint is passing right now is that it isn't being run on the file, because the file is outside the red queen package.
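(For example; the exact destination path is a guess, the point is moving the file inside the package so lint covers it:)

git mv visualization/view.py red_queen/visualization/view.py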

@@ -0,0 +1,153 @@
```python
import sys
import json
from turtle import color
```
@mtreinish (Member):

Is turtle in the requirements list? I think this will fail unless people manually install turtle, so we should add it to the requirements list (https://github.com/Qiskit/red-queen/blob/main/requirements.txt) so that it gets installed with red queen. That being said, I don't think anything is actually using this import.

Comment on lines +147 to +153
print("Invalid Data Type! Allowed parameters are: 'fidelity' and 'meantime'")
if __name__ == "__main__":
main(sys.argv)




@mtreinish (Member):

Suggested change (remove the trailing blank lines at the end of the file):

```python
    print("Invalid Data Type! Allowed parameters are: 'fidelity' and 'meantime'")
if __name__ == "__main__":
    main(sys.argv)
```

Comment on lines +10 to +25
```python
try:
    bench_name = {}
    bench_fid = {}
    for i, bench in enumerate(contents["benchmarks"]):
        bench_name[bench["name"]] = i + 1
        if bench_fid.get(bench["name"]):
            bench_fid[bench["name"]] = bench_fid.get(bench["name"]) + bench["stats"]["quality"]["fidelity"]
        else:
            bench_fid[bench["name"]] = bench["stats"]["quality"]["fidelity"]
    fidavg = []
    for i in bench_fid:
        fidavg.append(bench_fid[i] / bench_name[i])
    return bench_name.keys(), fidavg
except Exception as e:
    print("Uh, Oh! Something went wrong")
    print(e)
```
@mtreinish (Member):

Is there a reason not to just let the exception raise here?
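(A minimal sketch of the suggestion; the function name is hypothetical, and this version also counts runs per name instead of reusing the enumerate index, which over-divides when several benchmark names share one file:)

```python
def average_fidelities(contents):
    # Hypothetical helper mirroring the snippet above, minus the broad
    # try/except: a malformed results file now raises with a full
    # traceback instead of printing "Uh, Oh!" and returning None.
    bench_count = {}
    bench_total = {}
    for bench in contents["benchmarks"]:
        name = bench["name"]
        bench_count[name] = bench_count.get(name, 0) + 1
        bench_total[name] = bench_total.get(name, 0.0) + bench["stats"]["quality"]["fidelity"]
    names = list(bench_total)
    return names, [bench_total[n] / bench_count[n] for n in names]
```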

Comment on lines +94 to +102
```python
if len(args) == 4:
    viewtype = args[1]
    file = args[2]
    datatype = args[3]
if len(args) == 5:
    viewtype = args[1]
    file = args[2]
    file2 = args[3]
    datatype = args[4]
```
@mtreinish (Member):

Can you leverage argparse for this so we have a --help to explain the options to users?
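(A minimal argparse sketch; argument names and order here are illustrative, not the PR's actual interface. The optional second file is placed last so the positionals parse unambiguously:)

```python
import argparse

def parse_args():
    parser = argparse.ArgumentParser(
        description="Bar graph visualization for red queen benchmark results."
    )
    parser.add_argument("viewtype", help="type of view to render")
    parser.add_argument("file", help="results JSON file")
    parser.add_argument("datatype", choices=["fidelity", "meantime"],
                        help="statistic to plot")
    parser.add_argument("file2", nargs="?", default=None,
                        help="optional second results file to compare against")
    return parser.parse_args()
```

With this, python visualization/view.py --help prints the options automatically.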

A Contributor commented:

If need be, I could help you with this @danielleodigie.
