So, from the discussions, and from what seems to be common when looking at very old IDs and relations that still exist, the relations/ways/nodes on OpenStreetMap are, by definition, already close to persistent/external identifiers: their full history is kept, plus there is care to avoid any kind of deletion (the same care that gets complained about as not allowing fast imports). What is actually less desirable (compared to most authority control) is when one concept requires more than one unique identifier. This can happen, for example, when a way is split because a smaller part needs different metadata, so the initial information is copied onto all of its new parts.
Potential counter-argument: what about newly created features that duplicate something and then get deleted? What about old things (like entire ways without any metadata, likely the result of bad imports) that someone eventually deletes? My reply: even persistent identifiers such as DOIs can be aliased to new ones (see Changing or deleting DOIs - Crossref), and when truly in error (in OSM terms, either an early duplicate never used outside, or something very old but without any metadata at all to be used by anyone) they could point to a truly "410 Gone" forever page without even an explanation (like https://www.crossref.org/_defunct-doi/).
So, I'm genuinely open to criticism or comments here. But my argument is that, inside OpenStreetMap, the data is far more often than not already consistent (even before trying any schema/validation on top of it), likely even allowing the decent level of full retroactive research anyone would expect from most well-crafted library heading systems.
Potential counter-argument: but if we compare Wikidata (SPARQL) to OpenStreetMap (Overpass QL), the Wikidata approach seems more organized! My reply: the way OpenStreetMap is organized (an explicit geodatabase, also far less scary to collaborate on than Wikidata directly), even with its free-tagging approach, means that by using the most popular conventions (such as the ones used for rendering), OpenStreetMap data in RDF form would be much better interlinked than Wikidata is for places. But it would need inference turned on and pre-cached (which could easily expand the data to the point of making BlazeGraph collapse). Contrary to naive opinions, Wikidata is far less complete for places than OpenStreetMap is, and it is no surprise that OSM data is often used to augment other datasets.
So, does saying that node/way/relation IDs are okay mean the discussion of "persistent IDs" is unnecessary? It is still necessary. But the actual downside is not quality; it is duplicates that are not easy to track from outside. The expectation for a library catalog (or, in DOI terms, a Registration Agency) would be that if users ask for an old ID that evolved into something else, we resolve it to the new one.
Hypothesis: have a dedicated metadata search for IDs that "evolved"
The early idea from @SimonPoole and the other comments from @SomeoneElse and @Jez_Nicholson (that nodes/ways may not be trivial to use as external references, but are unlikely to be removed) are already reasonable.
In addition to the idea of eventually having explicitly persistent identifiers (because even relations have limitations), my hypothesis is some strategy (even if not in the main API, but something looking at the full history) to turn the old ID of something into an alias for something new. This might be easier for things that started as certain kinds of nodes or relations (for ways, I'm really not sure it is as easy).
This alone could help a lot to reduce the need to create high-level identifiers for something that is still in its infancy, such as a point representing a hamlet (Tag:place=hamlet - OpenStreetMap Wiki) that starts as a node, becomes an area, and maybe over time even becomes a village or town. Also, if we explicitly document this approach, then we could make these IDs stable for the outside world (again, considering that OpenStreetMap always points to what is known to exist in its current state, so for historical meaning outside users would need the date).
Since some people here already discussed RDF/Wikidata etc., maybe we could eventually have some sort of script or proxy designed to "upgrade" things that changed. This would require both some online link and (since it would very likely be used a lot, especially for data conflation) a way for others to run it locally without speed limits. I could try to write the code for this, but I'm still interested in the strategies/algorithms we could use!
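To make the "upgrade" idea concrete, here is a minimal sketch of one possible strategy, assuming we have full-history data in memory. Everything here is illustrative (the `resolve_alias` function, the dict shapes, and the tag-matching heuristic are my assumptions, not any existing OSM API); a real implementation would read a full-history planet file and use a smarter successor-matching heuristic:

```python
# Hypothetical sketch: resolve an "evolved" OSM element to its successor
# using full-history data. Data structures are illustrative only.

def resolve_alias(element_id, history, candidates):
    """If the element's last version is deleted, look for a successor
    that inherited identifying tags; otherwise return the element itself."""
    versions = history.get(element_id)
    if not versions:
        return None  # never existed, or truly gone: the "410 defunct" case
    last = versions[-1]
    if last.get("visible", True):
        return element_id  # still alive, no alias needed
    # Heuristic: a successor shares strong identifying tags
    # (e.g. name + place + wikidata) with the deleted element's last tags.
    old_tags = versions[-2]["tags"] if len(versions) > 1 else {}
    key_tags = {k: v for k, v in old_tags.items()
                if k in ("name", "place", "wikidata")}
    for cand_id, cand_tags in candidates.items():
        if key_tags and all(cand_tags.get(k) == v for k, v in key_tags.items()):
            return cand_id
    return None
```

For example, a `place=hamlet` node that was deleted after someone redrew it as an area could resolve to the new way, because the name and place tags were carried over.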
PS: the suggestion that we explicitly make some sort of metadata search for the evolution of some items does not replace the need for persistent IDs, nor (as I agree with @Minh_Nguyen's comments) the need to improve data conflation and/or terminology cross-walks without relying too much on outsiders. It's easier for data already in OpenStreetMap format, but to compare external data, latitude/longitude plus maybe some extra metadata would tend to be usable, especially to match data to help humans.
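As a sketch of what "latitude/longitude plus some extra metadata" could mean in practice, something like the following could rank candidates for a human to confirm. The function names, the 200 m threshold, and the exact-name bonus are my assumptions, not an established conflation algorithm:

```python
import math

# Illustrative sketch: match an external record against OSM features using
# coordinates plus a little metadata, to assist (not replace) human review.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def match_candidates(record, osm_features, max_m=200):
    """Return nearby OSM feature IDs, exact name matches first, then by distance."""
    scored = []
    for feat in osm_features:
        d = haversine_m(record["lat"], record["lon"], feat["lat"], feat["lon"])
        if d <= max_m:
            name_differs = feat.get("tags", {}).get("name") != record.get("name")
            scored.append((name_differs, d, feat["id"]))
    return [fid for _, _, fid in sorted(scored)]
```

The point is only that a cheap distance filter plus one or two tags already narrows the choices enough for a human to decide the rest.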