
Hard-coded lookup for very short strings? #50

Open
bittlingmayer opened this issue Mar 8, 2016 · 6 comments

Comments

@bittlingmayer

It's understandable that performance for very short strings is poor. Could we create a mapping with hand-assigned weights for those?

I believe strings like 'yeah', 'no', 'si', 'haha', 'hehe' and so on should always be classified reasonably. I am happy to donate my mapping for this.
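A minimal sketch of such a lookup, layered in front of the statistical classifier. The entries, weights, and fallback below are all hypothetical placeholders, not the actual donated mapping or langid.py's internals:

```python
# Hand-assigned overrides for very short strings.
# These entries and weights are illustrative only.
SHORT_STRING_OVERRIDES = {
    "yeah": ("en", 0.90),
    "si":   ("es", 0.50),   # also plausible in it, fr, ...
    "no":   ("es", 0.40),   # also plausible in en, it, ...
    "haha": ("und", 1.00),  # 'und' = undetermined
    "hehe": ("und", 1.00),
}

def statistical_classify(text):
    # Stand-in for the model-based classifier (e.g. langid.classify).
    return ("en", 0.5)

def classify(text):
    # Consult the hardcoded table first; fall back to the model otherwise.
    key = text.strip().lower()
    if key in SHORT_STRING_OVERRIDES:
        return SHORT_STRING_OVERRIDES[key]
    return statistical_classify(text)
```

The table only fires on an exact (normalised) match, so longer inputs are unaffected.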

@saffsd
Owner

saffsd commented Mar 8, 2016

Hi! Thanks for the suggestion. How do you see such a mapping being used? Is there a hardcoded relationship (e.g. "yeah" -> "en"), or is it somehow used to modify the weights?

@bittlingmayer
Author

It is fully hardcoded, but it still includes probabilities where a string is plausible in more than one language.

I think it could gradually be extended with some fuzzy matching to get more coverage. As a first step, I have made the matching somewhat fuzzy, so 'siiii' and 'nooo' are still covered; I canonicalise them to 'si' and 'no'.
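One way to sketch that canonicalisation is a simple repeated-letter collapse (a generic illustration, not the commenter's actual code; real handling would need care with languages where doubled letters are meaningful):

```python
import re

def canonicalise(text):
    # Collapse runs of repeated characters: 'siiii' -> 'si', 'nooo' -> 'no'.
    # Note this also collapses legitimate doubles ('good' -> 'god'),
    # so it should only feed the lookup table, not replace the input.
    return re.sub(r"(.)\1+", r"\1", text.strip().lower())
```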

In some cases the language returned is 'und'.

Overall I see it as all upside: there is no benefit in leaving these things to chance, and no risk in a hardcoded mapping, provided it is done thoughtfully.

@tripleee

tripleee commented Mar 9, 2016

Just out of curiosity, what do you map these to? "No" is a valid and common word in at least Spanish, Italian, French, and English. "Si" could be either Spanish or Italian (though improperly accented) or marginally French, and of course both words also exist as less common words in many other languages. I can't even imagine what you map "haha" and "hehe" to, though I guess they are more common as text strings in some regions (French- and German-speaking regions?)

@bittlingmayer
Author

bittlingmayer commented Mar 9, 2016

I split it roughly evenly between the languages in which it is plausible as a complete standalone sentence (after removing diacritics). So although ' si ' may be more likely in Romanian or Albanian text, the standalone sentence ' Si. ' is not likely to be Romanian or Albanian.

Out of curiosity, does the model treat beginning of string and end of string as a character of sorts? That would partly remedy the above.
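For reference, boundary-aware character n-grams look something like this (a generic sketch of the general technique, not langid.py's actual feature extraction):

```python
def char_ngrams(text, n=3, boundary="\x00"):
    # Explicit start/end symbols let a model distinguish 'si' as a
    # whole utterance from 'si' occurring inside a longer word.
    padded = boundary + text.lower() + boundary
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]
```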

For 'haha', 'hehe' and ':-)' I believe it's most useful to clients to return 'und'.

But again, I don't really wish to get dragged into the details of the mapping for strings like 'no'. (As it stands, 'yeah' returns 'id' and '¡No!' returns 'zh'. We can't do worse than that.) The argument that there's no perfect answer is understood. Fundamentally we should be able and happy to incorporate a mapping of the top 1M queries/sentences with some "golden" probabilities. We can start with 10, then add 100, and so on. Queries follow a roughly Zipfian distribution, so a little effort yields a lot of coverage.
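The Zipfian point can be checked with a quick back-of-the-envelope calculation. Under an idealised Zipf distribution (exponent 1, a hypothetical vocabulary of one million distinct queries), a handful of top entries already covers a large share of traffic:

```python
def zipf_coverage(top_n, vocab=1_000_000, s=1.0):
    # Fraction of total query mass covered by the top_n most frequent
    # items, assuming frequency proportional to 1 / rank**s.
    weights = [1.0 / r ** s for r in range(1, vocab + 1)]
    return sum(weights[:top_n]) / sum(weights)

# Roughly 20% coverage from the top 10 entries alone, ~36% from the top 100.
```

The exact numbers depend on the exponent and vocabulary size, but the shape of the curve is why a small hardcoded table pays off disproportionately.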

@EralpB

EralpB commented Aug 3, 2018

I think that's a genius idea; "und" is a great example. I'm almost certain any text which uses "und" is German, especially if it's a short text (say, on the order of a tweet).

@bittlingmayer
Author

That's the "stop words" or "function words" approach, and it is also very effective.
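A toy version of that approach (the word lists here are hypothetical and far too small for real use; they only illustrate the idea):

```python
# Illustrative function-word lists; a real system would need curated,
# much larger lists per language.
FUNCTION_WORDS = {
    "de": {"und", "der", "nicht", "ich"},
    "en": {"and", "the", "not", "of"},
    "fr": {"et", "le", "pas", "ne"},
}

def guess_by_function_words(text):
    # Score each language by how many of its function words appear;
    # return 'und' (undetermined) when nothing matches.
    tokens = set(text.lower().split())
    scores = {lang: len(tokens & words) for lang, words in FUNCTION_WORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "und"
```

Ties are broken arbitrarily here, which is another reason real implementations weight the words rather than just counting them.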

To be clear, though: when I wrote 'und' above, I meant not the natural-language string but the code returned for 'undetermined language'.
