I’m just a small contributor here and I don’t want to pretend I know much about the legal side of things or the OSM guidelines regarding translation/transliteration, but I did want to briefly share my thoughts.
OSM is one of the few projects that hasn’t jumped onto the AI hype yet. Of course, the big companies went first. Then smaller ones. Heck, even Mozilla Firefox, the only major FOSS browser still out there, has a built-in chatbot now. Meanwhile, Microsoft is looking to reopen a nuclear power plant and major disaster site to meet the ever-growing energy consumption of these “AI” models.
I, as a mapper, might not be able to add the translation of a clock tower into 25 languages to OSM, but at least I don’t consume half a litre of water every time I make a decision.
With regard to your post title: there is no “responsible” “AI”, there is no “innovation”, and there is no “copyright compliance”. One cannot pretend that computer models consuming excessive amounts of power and water, both ever-dwindling resources, are “responsible”; they’re accelerating global warming.
Jumping on a short-term trend can hardly be called “innovation” either. Before the AI hype, we had the NFT hype; before that, the cryptocurrency hype. After the AI hype dies, the spatial computing hype will likely be next. These hypes have one primary purpose: financially benefiting big tech companies. We should seriously consider whether we, admittedly indirectly, want to support this trend.
Not to mention that AI models are trained on large amounts of stolen data; a quick search on fediverse scrapers or Clearview sums it up. And then there are the upcoming EU regulations regarding AI, which will likely make it necessary to indicate which data was produced using AI for at least some applications, something your example doesn’t do.
Then there are the practical concerns raised by @JeroenHoek’s wonderful post earlier in this thread.
Again, I’m just some random girl from Belgium; I don’t make decisions here, and I wouldn’t be able to deal with the burden of doing so either. But I do believe we should seriously consider the next moves we, as a community (which is what OSM is in the end), make here. Do we really want to utilise models that have a proven, serious bias against marginalised groups, consume massive amounts of power and water, and benefit large tech companies, just so we can easily add translations to OSM objects at scale? I don’t believe so. Ethically speaking, I believe OSM should be a project aimed at the essence of producing map data: keeping what’s good and improving what’s bad, not jumping on the latest trends just because others do. True innovation lies in distinguishing yourself from others, not in copying their behaviour.
Side remark: I can easily imagine mappers in the near future wasting their time cleaning up AI-generated garbage from OSM tags because some editors have implemented AI-assisted mapping.