I’m a huge fan of quality assurance. Tools like Osmose and PTNA (just as examples) help us improve the map and comply with our agreed standards.
But what should we do with those among us who misunderstand the meaning of a warning, in particular when it’s about incomplete information? For example …
- A highway and a waterway crossing each other without ford/culvert/bridge specified. If the original mapper couldn’t see it in the imagery, they left it unspecified, and the QA tool would highlight it for a local mapper. As remote mappers, I think we should simply leave it standing as a warning, however annoying all these warnings in our map may be.
- An office=government without government=*. I’ve seen instances of people deleting the office=government tag just because they didn’t like the Osmose flag in their area.
- Making up values just to fill in a missing tag.
- Changes in agreed tagging that haven’t yet made it into the QA tools.
- Programming mistakes in the QA tools.
Reducing the number of coloured flags in any QA tool is a good thing. Or isn’t it?
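To make the “incomplete information” cases above concrete, here is a minimal sketch of the kind of tag-level check a QA tool like Osmose performs. The function name and warning strings are made up for illustration, and real crossing checks work on geometry (two ways sharing a node), not on a single element’s tags as simplified here:

```python
def qa_warnings(tags: dict) -> list:
    """Return QA warnings for one OSM element's tags (illustrative only).

    These warnings flag *incomplete* information: the element is not wrong,
    it just lacks a refining sub-tag that often only a survey can supply.
    """
    warnings = []
    # office=government should normally carry a refining government=* tag.
    if tags.get("office") == "government" and "government" not in tags:
        warnings.append("office=government without government=*")
    # Grossly simplified stand-in for a highway/waterway crossing check:
    # a real tool inspects the geometry of the two crossing ways.
    if "highway" in tags and "waterway" in tags and not any(
        k in tags for k in ("ford", "bridge", "tunnel")
    ):
        warnings.append("highway/waterway crossing without ford/bridge/culvert")
    return warnings

# The right response to such a warning is to add the missing detail
# (after a survey), not to delete the valid tag that triggered it.
print(qa_warnings({"office": "government"}))
# → ['office=government without government=*']
```

Deleting office=government makes the warning disappear, but only because valid data was destroyed; adding government=* (or leaving the warning standing for a local mapper) is the correct fix.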
I totally understand your “issue”. It’s not only about “cleaning my area”, but also about being able to work efficiently on things that can be solved remotely, and about deriving a “todo” list for future surveys.
If each QA tool had some functionality to classify findings (manually) as mistakes that can’t be fixed remotely, that would be a big step. A little bit like the “Too hard/Can’t see” selection in MapRoulette.
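The classification idea above could be as simple as attaching a manual status to each finding and filtering on it. A hypothetical sketch (the status names and record layout are invented, loosely modelled on MapRoulette’s “Too hard/Can’t see” options):

```python
from enum import Enum

class Resolution(Enum):
    OPEN = "open"                       # nobody has classified it yet
    FIXED = "fixed"                     # resolved in the data
    NEEDS_SURVEY = "needs_survey"       # can't be fixed remotely
    FALSE_POSITIVE = "false_positive"   # the QA tool is wrong here

# Hypothetical classified findings from a QA tool.
findings = [
    {"id": 1, "status": Resolution.FIXED},
    {"id": 2, "status": Resolution.NEEDS_SURVEY},
    {"id": 3, "status": Resolution.OPEN},
]

# The "todo" list for future surveys falls out of the classification.
survey_todo = [f["id"] for f in findings if f["status"] is Resolution.NEEDS_SURVEY]
print(survey_todo)  # → [2]
```

With such a status, remote mappers could work through what is remotely fixable, surveys could be planned from the NEEDS_SURVEY bucket, and nobody would feel pressure to “click away” warnings they can’t actually resolve.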
I believe an important consideration here is what JOSM’s Validator does (as a final step before upload, it runs all kinds of data checks on what is about to be uploaded): it presents both Errors (big, bad, red stop sign… FIX THESE!) and Warnings (if you understand the sometimes-subtle issues behind some of these flags, feel free to fix them; if not, it’s OK to ignore them — they WILL pop up again, and someone else who does have the experience to understand and fix the subtle issues can do so).
That, I have found, is an excellent strategy for a pretty awesome QA tool (a step in my pipeline for uploading high-quality OSM data): wave a big red flag at obvious errors (really, everybody should know enough basics to fix these), wave a minor yellow flag at the rest, and if you can’t fix the yellow flags, simply pass on them. Someone else will eventually get to them later. A winning strategy right there!
And all such QA-oriented tools (Validator, Osmose, OSM Inspector, MapRoulette (sort of), Relation Analyzer, OSMCha… there are a LOT of these) have “not quite perfect” versions of this: you might get false positives, or you might get “I don’t quite know how to fix this, even though it is identified as a problem…”. While the latter is OK to leave alone and the former is not ideal, no improvement to our map data is too small.
Unfortunately, “in case of repeated problems”, the DWG is quite powerless against behaviour like “oh, another block, let’s just click it away”. My previous practice of alerting the DWG time and again has resulted in a warning against me instead.