This is, by definition, impossible. And the hint is in the name: the established term for whoever maintains such IDs for others to use is authority control.
Even features that seem 100% procedural, say latitude and longitude in a popular reference system like WGS84, still need to be maintained: the algorithm needs to be published, and those who create hardware and software need to understand how to translate it to other forms. The “maintaining” might be as little as a book in a library that is still readable, but it is maintenance nonetheless. This is less obvious with things we take for granted, like what “0 1 2 3 4 5 6 7 8 9” means, but even datetime formats are conventions.
By the same logic, IDs created by direct procedural translation of another thing also need to be maintained (even if that only means publishing the algorithm and incentivizing people to use it over alternatives).
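To make the point concrete, here is a minimal sketch of such a procedural identifier, derived purely from WGS84 coordinates. The scheme name and precision are invented for illustration, not a proposal:

```python
def procedural_place_id(lat: float, lon: float, precision: int = 5) -> str:
    """Derive an ID by truncating WGS84 coordinates to a fixed precision.

    The result is reproducible by anyone, but only for as long as the
    algorithm (rounding rule, separator, reference system) stays
    published and understood: that is the "maintenance" meant above.
    """
    return f"wgs84:{lat:.{precision}f}:{lon:.{precision}f}"

print(procedural_place_id(-23.55052, -46.63331))  # wgs84:-23.55052:-46.63331
```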
Then we have opaque identifiers, which are considered the best for the very long term, because new people have less incentive to change them unless there is an actual functional issue. The ARK documentation discusses why they are against even having the organization as part of the prefix (as happens with some old DOIs): organizations eventually change names, and then they may try to change the entire prefix or simply give up keeping records of the old ones.
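As an illustration of why opaque identifiers age well, here is a sketch in the spirit of ARK/NOID minting; the alphabet mimics NOID’s vowel-free “extended digits”, and everything else is invented for the example:

```python
import secrets

# NOID-style alphabet: no vowels, so random IDs cannot spell words.
ALPHABET = "0123456789bcdfghjkmnpqrstvwxz"

def mint_opaque_id(length: int = 8) -> str:
    """Mint a random identifier that embeds no semantics at all."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Contrast with an org-prefixed ID like "acme-000123": the day ACME
# changes its name, someone will be tempted to re-mint everything.
print(mint_opaque_id())
```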
Note that even centuries ago, when very few people (often elites sponsored by kings) were able to write books, very few of those books survived, because people could decide from the cover alone that a book didn’t seem worthy. Far more content is produced today, and the role of authority control is very, very important. Most people still complain about OpenStreetMap’s internal representation without being aware that it is closer to a library catalog, with full history even for very specific nodes, than to disposable geometries that would be mere layers. Note how upset not just the Data Working Group but long-time contributors become when they see people deleting content: their reaction is similar to librarians watching someone burn books.
However, unlike DOIs (where there is more effort required to deserve one, plus a strict set of minimal metadata), new kinds of IDs representing concepts, such as the one proposed for OpenStreetMap, may from time to time be created by mistake, and in massive quantities. When duplicated, the duplicates need to become aliases with a pointer to the recommended reference. And (as Wikidata does) things that are poorly defined might be worth deleting, even without a user request. But even this procedure needs to be documented, so people can act with less fear.
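A sketch of what that documented procedure could look like: a duplicate is never silently deleted, it becomes an alias pointing at the recommended reference, much like Wikidata redirects merged Q items (the identifiers here are hypothetical):

```python
# duplicate -> recommended reference
aliases = {"place:9913": "place:1024"}

def resolve(concept_id: str, table: dict[str, str]) -> str:
    """Follow alias pointers until the canonical identifier is reached."""
    seen = set()
    while concept_id in table:
        if concept_id in seen:  # guard against accidental alias cycles
            raise ValueError(f"alias cycle at {concept_id}")
        seen.add(concept_id)
        concept_id = table[concept_id]
    return concept_id

print(resolve("place:9913", aliases))  # place:1024
```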
My argument here is the following: the idea of identifiers for places necessarily means authority control. Think centuries into the future. However, the very geospatial nature of OpenStreetMap could allow its equivalent of concept identifiers to be updated by bots and by companies far, far more than happens on Wikidata after the initial setup by humans (or by the first organization that starts the definition, using its own identifier as one of the properties). It is simply easier to make inferences based on location, and they resolve better than on Wikidata, because most of the time OpenStreetMap already only accepts things anchored in space and time.
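One example of the kind of inference that is cheap for geospatial concepts and much harder for generic Wikidata items: two records whose coordinates fall within a small radius are strong merge candidates. A sketch (the 50 m tolerance is an arbitrary assumption):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6_371_000  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def likely_same_place(a, b, tolerance_m=50.0):
    """The sort of candidate a bot could flag for review."""
    return haversine_m(a["lat"], a["lon"], b["lat"], b["lon"]) <= tolerance_m

print(likely_same_place({"lat": 48.8584, "lon": 2.2945},
                        {"lat": 48.8585, "lon": 2.2946}))  # True
```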
So, replying to @o_andras: while the idea of UUIDs, as used by Overture, is in fact perfect for a distributed concept, if we assume OpenStreetMap as authority control (which is how it is already treated, being very well documented and keeping the full history of every node), we would still want something such as a serial incremental number, at least for the concepts that have some level of notability. But even for things that would be heavily automated (think of a few big collaborators adding data), my next point makes me think that no single big player could create an identifier alone without others agreeing it is relevant.
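To make the contrast explicit, here are the two identifier styles side by side; the `osmconcept:` prefix is purely hypothetical:

```python
import itertools
import uuid

# Distributed (Overture-style): anyone can mint without coordination.
print(uuid.uuid4())  # e.g. 9f1b2c3a-...; no registry needed

# Authority-controlled: a single counter, which only works if the
# issuer exists, stays reachable, and keeps records. That record
# keeping is exactly what "authority control" means.
serial = itertools.count(start=1)
print(f"osmconcept:{next(serial)}")  # hypothetical scheme
```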
(Hypothesis) A strategy to deal with both notability and long-term survival of place concepts: require baseline standards for how interlinked the definition is
I had one idea about how to deal with the persistence of identifiers that may not have sufficient information and are not clearly likely to be notable (the way administrative boundaries are): we enforce minimal metadata, even more strictly than Wikidata does, maybe even with some delay time (like weeks). The DOI, while it allows those authorized to issue codes to have private uses, does exactly this for what is expected to be public:
https://www.doi.org/doi_handbook/4_Data_Model.html#4.3.1
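A rough sketch of such a gate, loosely inspired by the minimal metadata the DOI handbook requires for public DOIs; the field names and the two-week delay are assumptions for illustration only:

```python
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"name", "feature_type", "location", "source"}
COOLING_OFF = timedelta(weeks=2)

def eligible_for_persistent_id(record: dict) -> bool:
    """True only if minimal metadata is present and the record has aged."""
    has_metadata = REQUIRED_FIELDS <= record.keys()
    old_enough = datetime.now(timezone.utc) - record["created"] >= COOLING_OFF
    return has_metadata and old_enough

record = {"name": "Café Exemplo", "feature_type": "amenity=cafe",
          "location": (-23.55, -46.63), "source": "survey",
          "created": datetime.now(timezone.utc) - timedelta(weeks=3)}
print(eligible_for_persistent_id(record))  # True
```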
So, while not necessarily as big as a UUID v4, generic amenities (whose relevance is not clear) could get some sort of identifier, just not the same kind as the shorter ones; otherwise we would run far more easily into the issues @SimonPoole pointed out. The types of places of interest most likely to attract spam (to the point of people being paid to add metadata to them, as happens on Wikipedia) would then have stricter requirements, but always focused on what makes them well defined enough to be interlinked (avoiding, for example, ephemeral data that most users would add anyway, like what a shop sells).
A “DOI approach” would automatically mean that some human needs to decide whether a place deserves a code (even if somewhat algorithmically, as happens today on Wikipedia, where after some time a page may become a Wikidata Q item), and even then with much, much more metadata. When weighing whether something can have an identifier, this would mean users adding data such as inception, the etymology of the name, whether a shop is part of some brand, etc.: things that are less likely to change. Sadly, we cannot add personal information on OpenStreetMap, but if a shop had a famous founder already present in another authority control (like a Wikidata Q), then that person could be added as the founder.
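One way to operationalize “value stable metadata over ephemeral metadata” is a simple score over properties that rarely change; the weights and threshold below are invented purely for illustration:

```python
# Stable, interlinkable properties and illustrative weights.
STABLE_PROPERTIES = {"inception": 3, "name_etymology": 2,
                     "brand": 2, "founder_wikidata": 3}

def interlink_score(record: dict) -> int:
    """Sum the weights of the stable properties a record actually has."""
    return sum(w for prop, w in STABLE_PROPERTIES.items() if record.get(prop))

shop = {"inception": "1987", "brand": "Q38076"}  # hypothetical record
print(interlink_score(shop))       # 5
print(interlink_score(shop) >= 5)  # passes a hypothetical threshold
```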
Enforcing even stricter metadata for the same kind of shorter identifiers that administrative boundaries could have does not mean these places could not also have other identifiers, more algorithmic ones or ones created by user/company request. We can do both. But the “first-class” persistent identifiers (which persist even if the place ceases to exist) should, in my opinion, only be allowed if at least in theory they could eventually be cared for.
Since several people commented on the challenge of knowing when something changed or not, a good safe approach is to intentionally make it hard to give identifiers to the first wave of amenities, and by that I mean requiring not “Wikidata-level notability” but “Wikipedia-level notability” (i.e. already being famous). This alone would allow sufficient time to think (while already seeing how the encoding works in practice), so we could start to define the minimum metadata that both humans and anyone else (like companies) would need to provide.