I posted on Reddit, and someone suggested reaching out to this community as well.
I'm in a graduate CS class on HCI, gathering user feedback for a group project. We wanted to design a flow of interactions that lets the user/hiker submit trail data to AllTrails/OSM directly through the app. We'd like your input on what you might value in an in-app interface to OSM, and on what kinds of interactions you might not want.
Aside from that, are you interested in opinions from AllTrails users in particular, or is the survey also suitable for OSM contributors who might not know anything about AllTrails?
Thank you for pointing that out! I reached out to the person in charge of formatting the survey. I will update here when they get back to me. But generally the three tasks are:
prototype A: you are at a trail you are familiar with, start recording trail data, then submit it
prototype B: you want to filter for trails that need more data, select one, and get to the screen that has the option to start recording data – that option won’t be selectable
prototype C: you want to check what data is being recorded, and you try to find out
I can answer that question (because I’ve done it today!). Before today this hiking route was missing some detail from about 30% of its length. I walked that section today, recording a GPS trace, which also has GPS waypoints with extra information in them. The thing that I’m using to record a trace has an OSM map on it, so it’s easy to add things saying “there is a route marker for XYZ route relation here” and “there is no longer a hedge here”. Some things can be recorded immediately with StreetComplete or Vespucci, some have to wait.
For example, where the previous trail alignment was wrong, I waited so that I could compare my new proposed alignment with what was there before and with what imagery and out-of-copyright maps suggested should have been there. A particular example that benefited from multiple sources was this bit.
That comparison of multiple sources can’t be done in the middle of a field on a phone, whereas “that pub has a wheelchair access ramp” can, so mapping is at least a two-stage process these days.
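For anyone on the project wondering what a trace with annotated waypoints actually looks like on disk: GPS units and apps usually write GPX, which is just XML. Here is a minimal sketch (coordinates and note text invented for illustration) that builds a GPX file with a short track and one annotated waypoint, using only the Python standard library:

```python
import xml.etree.ElementTree as ET

GPX_NS = "http://www.topografix.com/GPX/1/1"
ET.register_namespace("", GPX_NS)  # emit GPX tags without a prefix


def build_gpx():
    gpx = ET.Element(f"{{{GPX_NS}}}gpx", version="1.1", creator="example")

    # A waypoint carrying a free-text note, e.g. "route marker for XYZ here"
    wpt = ET.SubElement(gpx, f"{{{GPX_NS}}}wpt", lat="51.5000", lon="-0.1000")
    ET.SubElement(wpt, f"{{{GPX_NS}}}name").text = "Route marker for XYZ route relation"

    # A track made of one segment with two recorded positions
    trk = ET.SubElement(gpx, f"{{{GPX_NS}}}trk")
    seg = ET.SubElement(trk, f"{{{GPX_NS}}}trkseg")
    for lat, lon in [("51.5001", "-0.1002"), ("51.5003", "-0.1004")]:
        ET.SubElement(seg, f"{{{GPX_NS}}}trkpt", lat=lat, lon=lon)

    return ET.tostring(gpx, encoding="unicode")
```

The point for the prototype is that the “extra information” rides along in the same file as the positions, so a reviewer can line the notes up against imagery later, at the second stage.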
For completeness, the three devices used were: a Garmin handheld with one of my maps on it; an Android phone running StreetComplete (with its own vector map background) and Vespucci (with a raster map of mine as the background); and a PC, where most of the editing was done in Potlatch (though other editors that allow extensive “free drawing” are also available). The interaction with each of these is very different.
It has a live prototype and a video that walks through its functions. We'd love to get your feedback, and to hear how it might actually be applied in the field.
Our final prototype design supports the first two, maybe by doing number 4? I was wondering whether, if GPS logging were too data-intensive, the app could instead log vector readings from the phone’s gyroscope…? But that’s also outside the scope of the project.