
ICDAR 2013 Test result is not reproducible! #94

Open
bado-lee opened this issue Mar 14, 2018 · 8 comments
@bado-lee

Hi,
I'm trying to reproduce the ICDAR 2013 result claimed in the paper (see the attached screenshot of the paper's results table).

However, what I get is:

Calculated!{"recall": 0.7300456621004567, "precision": 0.9147368421052633, "hmean": 0.8120218470647027}

which is about 7% short in F-measure.

I'm using the ICDAR Challenge 2 (i.e. ICDAR 2013) Task 1 test set downloaded from the ICDAR website, with your provided weights.

I've used the code from this repo as-is; the only things I've changed are to rescale the text_lines coordinates back to the original image's scale using the resize factor f, and a bit of parsing to match the ICDAR evaluation submission format.
(So I'm basically using the default settings wherever possible.)

Please let me know if there are additional parameters to be tuned or I'm missing something.
Thank you in advance.

@tianzhi0549
Owner

@bado-lee Maybe the metric you used counts it as an error when the method merges several words into one line (the ICDAR 2013 ground truth is annotated at the word level, while this method outputs whole text lines). You could upload your results here (http://rrc.cvc.uab.es/?ch=2&com=evaluation&task=1&f=1&e=2) and check the performance. Thank you.

@bado-lee
Author

bado-lee commented Mar 14, 2018

@tianzhi0549 Thank you very much for your reply :)

I was testing with the official test script, the ICDAR evaluation, which gave me:

Evaluation - ICDAR
Calculated!{"recall": 0.7300456621004567, "precision": 0.9147368421052633, "hmean": 0.8120218470647027}

Taking your advice, I also tested with the DetEval evaluation, which is official code as well, and got:

Evaluation - DetEval
Calculated!{"recall": 0.816986301369863, "precision": 0.9175438596491228, "hmean": 0.8643502212714309}

It's still 2% short.
Please let me know if this is the right evaluation (the one you used for the paper), and whether there is anything further to be done.

Again, many thanks in advance!!

@tianzhi0549
Owner

@bado-lee Could you please try uploading your results to the official ICDAR website (http://rrc.cvc.uab.es)? Thank you very much.
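
In case it helps, here is a minimal sketch of packaging the per-image result files for that upload. It assumes the res_<image>.txt naming used later in this thread and a flat zip layout; the exact naming and format the evaluator expects should be double-checked against the task's submission instructions.

import os
import zipfile

result_dir = "results"  # wherever the res_*.txt files were written (assumption)

# Pack every per-image result file at the root of a single zip archive,
# which is then uploaded to the online evaluator.
with zipfile.ZipFile("submit.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for fname in sorted(os.listdir(result_dir)):
        if fname.startswith("res_") and fname.endswith(".txt"):
            zf.write(os.path.join(result_dir, fname), arcname=fname)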

@lxx1884896

@tianzhi0549 Hello, I used the model you provide to test on the ICDAR 2013-2015 Focused Scene Text (text localization) task. I modified your code to print the coordinates shown in the pictures, and then submitted my txt files, containing the coordinates of every text-line proposal, to the competition server. But the result is bad, only about 3-4%, which is unusual. I want to know whether the competition evaluation supports text-line detection; if not, how did you get your test result? The following picture shows my modification in demo.py. Please kindly give me some advice, thanks a lot.

[screenshot of the modified demo.py]

@lxx1884896

@bado-lee Hi, I am doing the same thing as you, but my result is bad. I want to know what you mean by rescaling the text_lines coordinates by "f", and what "f" is; I think maybe I have missed something. Could you kindly reply? Thanks in advance.

@bado-lee
Author

bado-lee commented May 9, 2018

@lxx1884896 Hi, in the original code the output coordinates are in the resized image's scale.
The "f" I mentioned is the resize factor returned here:

im, f = resize_im(im, cfg.SCALE, cfg.MAX_SCALE)

To get the coordinates back in the original image's scale, you need to rescale them, like this:

import os.path as osp
import numpy as np

text_lines = text_detector.detect(im)  # boxes are in the resized image's scale
with open(osp.join(result_dir, "res_" + im_name.split(".")[0] + ".txt"), "w") as restxt:
    for each_line in text_lines:
        # each_line is (x_min, y_min, x_max, y_max, score); drop the score and
        # divide by the resize factor f to map back to the original image
        intified = np.round(each_line[:-1] / f).astype(int).astype(str).tolist()
        restxt.write(",".join(intified) + "\n")
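
(One caveat: I'm assuming the evaluator wants exactly these comma-separated x_min,y_min,x_max,y_max values, one detection per line and one res_*.txt per test image, zipped together for upload; double-check that against the task's submission instructions.)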

@lxx1884896

@bado-lee Hello my friend, your solution works for me. Thank you again! ^_^

@Matios-zenebe

Hello there,
I am just teaching myself and want to evaluate text detection code. Could anyone share their evaluation code with me, please?
