Semi-Automated Tree Additions

How are you planning to conflate your data with trees that already exist in OSM? What assurances can you make that conflation is accurate, given the concerns already raised regarding tree position vs. photograph capture position?


He is using a Python script, so it is still scripting. One can say: “It’s more in the direction of building a new editor than preparing a huge automated edit.” Nonetheless, it is an automated or semi-automated edit.

From the standpoint of whether this is subject to the Automated Edits code of conduct, it doesn’t matter whether the features were originally individually reviewed. What matters is that they now have a collection of data which they want to bulk-upload to OSM. It is no different than if your local municipality had collected points for all of their fire hydrants by sending someone out with a GPS and camera, and later some OSM mapper wanted to upload that data to OSM.

Now, in terms of evaluating whether an import should go forward, it is good to know how the data was collected, and individual GPS verification is better than some other methods.


JOSM is built with Java. Everything on a computer is, in some sense, scripts. But if the data points that end up in OSM come from people taking pictures of trees and sharing their coordinates, then that isn’t a bot edit or mechanical edit as commonly defined in OSM.

Then logically speaking they should follow the import guidelines, not the Automated Edits CoC.


I’d say they should be following both!

Why? (Post must be at least 10 characters.)


“Automated Edits” is an umbrella term that includes imports. If you look under Scope on its wiki page, it says it includes “data imports, including both fully automated imports and ones where a standard editor is used;”. But, yes, it is good to explicitly point out the import guidelines here as well.

" The Import Guidelines, along with the Automated Edits code of conduct, shall be followed when importing data from external sources…"
Source: Import - OpenStreetMap Wiki

I could throw that straight back at you as Why not? :grinning:

But they’re doing an (at least) semi-automated import, which is also (as I was going to mention earlier but forgot) being done by an Organised Group, so let’s throw those rules in as well, shall we? Organised Editing - OpenStreetMap Wiki


They basically say the same anyway: never copy from copyrighted sources without explicit permission, write documentation for the proposed edits, get community support and make very sure that there’s QA involved (and document that too).

I already answered this question:

Luckily the OEG largely overlap with the other guidelines, and apparently the DWG won’t even enforce this guideline :person_shrugging:

Hello all. I hear you loud and clear. Will start answering questions below:

  1. We need to see the script, and some sample output (i.e. .osm file) before this all starts.
    # Build the XML payload for a single node. (The payload body was cut off in
    # the original post; reconstructed here as the standard OSM API 0.6
    # node-create XML — treat the exact tags as illustrative.)
    edits_payload = f"""<osm>
      <node changeset="{changeset_id}" lat="{lat}" lon="{lon}">
        <tag k="natural" v="tree"/>
      </node>
    </osm>"""

    edits_headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "text/xml",
    }

    edits_url = f"{sandbox_api_url}/api/0.6/node/create"
    response = client.put(edits_url, data=edits_payload, headers=edits_headers)
    if response.status_code == 200:
        print("Tree added successfully!")
        print("Response/Node ID:", response.text)
    else:
        print("Failed to add tree. Status code:", response.status_code)
        print("Response:", response.text)
I have not included the full script here for security reasons. This is just the code to add a node/tree. We are using k-means clustering from sklearn.cluster to group the trees into the 7 changesets.
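Since the full script isn’t shown, here is a minimal sketch of what that clustering step could look like. All names (`group_into_changesets`, the coordinate format) are my own illustration, not the project’s actual code:

```python
# Hedged sketch: group pending tree coordinates into 7 geographically
# local changesets using k-means, as described in the post.
import numpy as np
from sklearn.cluster import KMeans


def group_into_changesets(coords, n_changesets=7):
    """coords: sequence of (lat, lon) pairs; returns {cluster_id: [points]}."""
    km = KMeans(n_clusters=n_changesets, n_init=10, random_state=0)
    labels = km.fit_predict(np.asarray(coords))
    groups = {}
    for label, point in zip(labels, coords):
        groups.setdefault(int(label), []).append(point)
    return groups
```

One caveat with this approach: k-means bounds the *number* of groups, not their geographic extent, so a sparse batch could still produce a changeset spanning a wide area.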

  1. What validation have you done on the accuracy of the tree locations? → For accuracy, we are still relying on photos. Since a full picture of the tree has to be submitted to the app, the GPS reading from the photo will inherently be a little off from the actual tree base. However, a user does have the option to change the GPS coordinates manually before submitting the data, though this relies on trusting that the user can do so accurately. They would do this by opening the built-in map in the app, seeing where the tree was recorded, and dragging that pin to where they are currently standing at the base of the tree. Alternatively, the user can walk to the base of the tree before submitting, and that GPS position will be recorded. Hopefully that resolves the disconnect between the photo location and the actual tree location.

  2. OSM doesn’t contain images per se. → I looked at this to find it: Key:image - OpenStreetMap Wiki. It seems the key was never accepted, so we may not proceed with images.

  3. The app is open to all users, not necessarily only trained or vetted ones

  4. Why 14 trees at a time? Are you going to create 14-tree groups randomly in a forest? → 14 is an arbitrary number we chose because we do not want our changesets to span a large area; we want to keep them local, as per the guidelines. 14 ensures at least some degree of locality per changeset.

  5. A more in-depth explanation of the mapping/validation process seems in order. → First, a user takes a picture of a tree and its GPS position is calculated. Then the Python script runs every 30 minutes: it takes in a picture, checks whether a node with that GPS position already exists in OSM, and if not, creates the 7 changesets and adds the tree nodes to OSM. I want to reassure everyone that we have made all our changes so far on the (“master”) dev sandbox API, as per the OSM guidelines. Nothing on the real map has been changed yet.
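The “does a node already exist at that GPS position” check isn’t shown in the thread. One common way to do it is an Overpass `around` query for existing `natural=tree` nodes near the candidate point; the query-builder below is my own illustration of that idea, not the project’s code:

```python
# Hedged sketch: build an Overpass QL query that returns any existing
# natural=tree nodes within radius_m metres of a candidate coordinate.
# The caller would POST this string to an Overpass API endpoint and treat
# a non-empty result as "node already exists".
def build_tree_query(lat, lon, radius_m=5):
    return (
        "[out:json];"
        f"node(around:{radius_m},{lat},{lon})[natural=tree];"
        "out;"
    )
```

Checking a radius rather than an exact coordinate match matters here, because two photos of the same tree will almost never produce bit-identical coordinates.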

We will look into the import guidelines and try to follow those as well. Hopefully step 2 fixes the GPS concerns. I also have the ability to only allow tree submissions if the horizontal accuracy is within X metres (my choice). In my code, I can even make it so that trees that are close, but not necessarily at the exact same GPS position, will not be added to OSM, to avoid duplicates. Right now no two trees at the exact same GPS position can be added, but we can change that to no two trees within x degrees of latitude and longitude.
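A sketch of that “no two trees within x” idea, using metres rather than raw degrees (a degree of longitude shrinks with latitude, so a metric threshold behaves more consistently). All names here are illustrative:

```python
# Hedged sketch of a proximity-based duplicate check: reject a new tree
# if any existing tree lies within min_separation_m metres of it.
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def is_duplicate(new_tree, existing_trees, min_separation_m=3.0):
    """True if any existing (lat, lon) is closer than min_separation_m."""
    lat, lon = new_tree
    return any(
        haversine_m(lat, lon, t[0], t[1]) < min_separation_m
        for t in existing_trees
    )
```

Note that the threshold cuts both ways: too small and duplicates slip through; too large and genuinely distinct trees (e.g. in an avenue planting) get silently dropped.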

Hope this addresses the concerns.


Hello! I am also a team member of this application. Thanks for helping out with our concerns so far.

In the application we’re making, the approximate GPS error varies from around 20 metres down to around 5 metres, and that error is shown to the user dynamically. So even if we stand exactly at the base of the plant, we might be off by 5 metres, and the user will know it. And since we give them the ability to re-place the pinned GPS coordinate, we allow users to provide the most accurate location possible.


This is the question that should be obvious to everyone. Hide this post if you will, but it’s just nuts (and obviously not the kind from a tree).

That does not really sound like a good enough approach to me. Given the height of most trees, the photo position for almost every tree will be off by 5 to 20+ metres. Because this applies to almost every single tree, the GPS correction should be both (more or less) mandatory and intuitive.
Otherwise I can see many (especially new) users who either don’t know about the manual tree-location correction, or don’t care about it and just want to take photos, or think it’s too complicated.


One idea would be to ask users to take three photos: overall shape, a leaf, and a close-up of the trunk. That will help with species identification and could be a very useful database generally. You then take the co-ordinates of the bark photo as the tree location. “Enforcement”, but actually useful, too.

Proposing to fork this thread here: aside from the technical stuff, and community acceptance, can you say who is behind this project and what the motivation is? What is the useful data that will result?

Asking because I’ve had exactly this idea for a while, with zero time, money or skill to do anything about it. (Fwiw my own motivation is to map the trees in my city to promote their conservation, and track climate change.)


Hi, thanks for the response, I’m not sure I follow this though.

  • How does your app know that the GPS location is wrong by ±5 to 20 metres?
  • What known accurate reference is being used as a comparison to the GPS location?
  • What motivation do the users have to correct the location manually, if this step is not being enforced?

My other question is still to be answered:
What are you doing to conflate your data with existing OSM data?


Hello again,
To answer your questions:

  1. The GPS service provides real-time feedback on how strong its signal is, represented as an approximate error.
  2. We’re using a GPS service that relies on fused location providers, which determine location through available on-device hardware, including (but not limited to) Wi-Fi, GPS, and cellular networks. As far as the inner workings of the GPS go, I am not completely sure of the architecture.
  3. The step is optional, of course. Before submitting the pinned location, we show the user their pins on an imagery map so that they can confirm their points. Since the person sees the imagery map with the pins on it, they are motivated to move the pins to the right locations if they see that the accuracy is off.

To answer your main question:
With this data, we aim to map trees onto OSM (tree mapping is not as popular as mapping buildings, roads, and utilities), so this mapping is helpful for:

  • More accurate 3D modelling of the world (w.r.t. OSM 3D mapping)
  • More relevant mapping information for first responders (a fallen tree in a disaster-struck area could be an unseen obstacle for first responders)
  • A way of tracking ecological changes over time through mapping, etc.

That sounds very hit-and-miss to me.

  1. An “optional” invitation to “confirm” the (inaccurate) locations the app has suggested is an invitation to do exactly that: the default effect in human behaviour is very strong.
  2. Before re-positioning their pins, users will also need to overcome the expectation the app has itself set for them, that this is an “automated” process. So in the same process you’re asking them to trust, and not trust, the app. That’s doable, but difficult.
  3. Setting an accurate location freehand on a phone seems to be very difficult for novices: e.g., many (maybe most) Note pins are off-target, and this can be by 100s, even 1000s of metres. I think this is because (a) accuracy depends on zoom level, and (b) novice users don’t realise how precisely the database records location.
  4. Even allowing for this, how will users determine the “better” location? Aerial photos are often offset from ground truth. Existing map features can be better than aerial, but also worse: the user won’t know which. And many trees will be in open ground where there are few reference features anyway.

Did you see my suggestion that you have users take a close-up picture of the bark of the trunk to complete the workflow, and take reference coordinates from the photo of the trunk? This helps with identifying species, age and health of tree anyway, and would build a “true” GPS location* into the primary workflow.

*or at least, best possible

P.S., my OutdoorActive app shows the GPS error: I can watch it improve in real-time, so when I need a highly accurate location I can wait for it. This is immensely helpful.

Would still be good to know which organisation is behind this.

But some of my questions here have been answered elsewhere on this thread:

I agree with all of this. ^

Honestly, no answers from the project have made me confident that an automated/scripted approach is the best way to add this data. It would be much better to make the geotagged photos available to OSM so that users can manually review and add data. See as an example.