Bad automated translations

A user has been making strange translations.
Examples:

After communicating with the user, they said they’re using Google Translate.
They have made numerous translations in Israel and worldwide and this is just a tiny sample.

How do we proceed?

Skimming the other edits, most of them look like acceptable translations, but I haven't thoroughly reviewed all of them.

As you noted, machine translation is inappropriate for this task. (It doesn’t matter which machine translation software they used.) Automated transliteration may yield better results but might still be imperfect.

A geographic feature in this part of the world would rarely have an English name like “Eye of the Lap” unless it happens to be a biblical name. Even then, few English biblical names appear on modern general-purpose maps. The linked Wikidata item is apparently sourced from the Geographic Names Server (GNS), which gives the name as either ‘En Heq or En Hek in Latin script and עין חיק in Hebrew script, confirming your expectations.

If they came up with all their name:en=* tags using the same method, then all their name:en=* tags in the region should probably be reverted. To avoid reducing coverage of English names in the area, you could use either GNS or the ICU library’s built-in Hebrew-to-Latin transform (available in this demo that’s currently down).
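To make the transliteration idea concrete, here is a minimal Python sketch. It tries the real ICU "Hebrew-Latin" transform via the PyICU bindings when they are installed, and otherwise falls back to a tiny illustrative letter map of my own (an assumption for this example, not the GNS romanization scheme; unpointed Hebrew has no written vowels, so a naive map yields consonant skeletons only):

```python
# Sketch of Hebrew-to-Latin transliteration for name:en cleanup.
# The FALLBACK_MAP below is illustrative only, not an official scheme.

FALLBACK_MAP = {
    "א": "'", "ב": "b", "ג": "g", "ד": "d", "ה": "h", "ו": "w",
    "ז": "z", "ח": "ḥ", "ט": "ṭ", "י": "y", "כ": "k", "ך": "k",
    "ל": "l", "מ": "m", "ם": "m", "נ": "n", "ן": "n", "ס": "s",
    "ע": "‘", "פ": "p", "ף": "p", "צ": "ṣ", "ץ": "ṣ", "ק": "q",
    "ר": "r", "ש": "sh", "ת": "t",
}

def fallback_transliterate(text: str) -> str:
    """Naive letter-by-letter map; consonant skeleton only."""
    return "".join(FALLBACK_MAP.get(ch, ch) for ch in text)

def hebrew_to_latin(text: str) -> str:
    """Prefer ICU's built-in Hebrew-Latin transform when available."""
    try:
        import icu  # PyICU bindings, optional
        tr = icu.Transliterator.createInstance("Hebrew-Latin")
        return tr.transliterate(text)
    except ImportError:
        return fallback_transliterate(text)

print(hebrew_to_latin("עין חיק"))
```

Even the real ICU transform will differ from the GNS spellings ('En Heq / En Hek), so any automated output would still need human review before going into name:en=*.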


Quoting the user:

I have a list a points I need to update with latin alphabet. If I find a name published in a website with an english version, I would use the translation provided, if not I have to use Google Translate.

This will not be easy, but the translated names must definitely be removed. Not only are they wrong, they also hide the missing name:en=* tags from validators, reducing the chance that correct names will be added in the future. I have asked the user to do this cleanup, but we should certainly follow up.
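As a sketch of what that cleanup looks like in data terms (the element dicts and IDs here are invented for illustration, not real OSM objects), dropping the suspect name:en tags is what lets validators flag the names as missing again:

```python
# Illustrative cleanup sketch: given OSM elements as tag dictionaries,
# remove the suspect machine-translated name:en tags so QA tools will
# once more report the English name as missing. Example data is made up.

def strip_suspect_name_en(elements, suspect_ids):
    """Remove name:en from elements whose id is in suspect_ids."""
    for el in elements:
        if el["id"] in suspect_ids:
            el["tags"].pop("name:en", None)
    return elements

elements = [
    {"id": 1, "tags": {"name": "עין חיק", "name:en": "Eye of the Lap"}},
    {"id": 2, "tags": {"name": "חיפה", "name:en": "Haifa"}},
]
cleaned = strip_suspect_name_en(elements, suspect_ids={1})
```

In practice the suspect IDs would come from reviewing the user's changesets, and the edit itself would go through a normal editor or revert tool rather than a script like this.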