I’m curious — was the idea of expanding the app’s functionality something that was requested or recommended by the grantmaker, or did you choose to focus on that yourself?
I’m asking because I personally would love to see some investment in cleaning up papercuts and UX shortcomings of the app (e.g. the map zooming out when I tap the + button — or even the presence of this location reconfirmation step at all, as @okainov suggests; or the lack of descriptions and illustrations for tag values; or the jumping of the map area as the data in the bottom panel changes; etc.) more so than expanding the functionality of the app.
IMHO there’s room for making this app efficient and welcoming for newcomers as well, not just for those who have gone through the initial learning curve and have learned its quirks.
Good question, Simon. I too didn’t care in the past.
But it’s not all that predictable, or at least it needs a lot of confirmation. Number 13 may or may not exist, and often there are house names instead. I also like to record when the house number was not visible (even if I can perhaps guess it).
I do get excited surveying the roads that would be trickier to predict, even though they require more work. The UK loves to build housing developments without using straight roads, or has old streets with combined/demolished houses.
This is the typical OpenStreetMap benefit/ethos. I didn’t care for mapping speed limits and lane markings; leave that to the commercial maps. Until I started driving in confusing places where I wasn’t sure where to be or how fast to go, and the navigation app I used could have been using that information.
I was not suggesting that the addresses don’t get recorded, just that for the (overwhelming) majority typing is not necessary and the value can be easily predicted, just as we (Vespucci) have been doing for more than a decade. Yes, that doesn’t work in “some” cases, say the few places here that still use chronological numbering, but the whole point is that that is not the rule.
The problem with all AI/image/voice recognition things is that they have to be reliable enough out in the field to “nearly” always work without retrying and/or corrections.
That “nearly” needs to be better than 9 out of 10. IRL I’ve never been able to reach that, even though behaviour has massively improved; typically it is more like 1 out of 10, aka a party trick.
One gets a grant for an inspiring story. In my case, I explained how this would help humanitarian operations and multiple local community movements, based on discussions I had in the past. Bugfixes get grants only for core infrastructure like Curl and OSM website, which Every Door is absolutely not. Maybe if an NGO or a private company would chime in…
This project is big enough to allow me to work on it full-time. For small changes, it would be either too hard to justify, or the sum would be so small that I would have to multiply the load by working a regular job as well.
But I should also mention that I tend to fix things as I go. And plugins are just the first part of four: the other three are offline work, documentation, and better UI. So we’ll get to this!
I see a good point in your discussion with Gregory. When making the address input form in 2022, I thought of adding “-2” and “+2” buttons, for example, to make entering house numbers quicker. But to do that properly, I would need to look at map data and the building location, and it gets more and more complex with every thought.
But if I make that form extensible, then people can try writing this prediction logic or better input controls themselves.
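For illustration, the simplest core of such prediction logic could just extrapolate from the last numbers entered on one side of the street. Here is a minimal sketch, assuming the common odd/even numbering; the names and types are mine, not Every Door’s actual data model:

```swift
// Hypothetical sketch: suggest the next house number from the ones
// already entered along one side of a street. Real logic would also
// need map data and building positions, as noted above.
func suggestHousenumber(entered: [Int]) -> Int? {
    guard let last = entered.last else { return nil }
    guard entered.count >= 2 else { return last + 2 } // assume odd/even side
    let step = last - entered[entered.count - 2]      // often +2, sometimes +1
    return last + step
}

// suggestHousenumber(entered: [11, 13]) == 15
```

A plugin would still have to deal with house names, missing numbers like a skipped 13, and chronological numbering, which is exactly why exposing this as an extension point makes sense.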
Speed is important for that mode, because I envisioned users riding around in a car (as passengers!) and collecting addresses for entire villages. So yes, this is an important problem to consider.
I wouldn’t call UX improvements bugfixes. I believe EveryDoor quickly became one of the most used mobile OSM editors precisely because the potential was there for an easy-to-use editor that allows actually adding stuff to the map, whereas StreetComplete filled that space for, well, completing the data on existing elements. (They have since been adding more and more ways to add new elements, but EveryDoor remains more powerful in that it allows e.g. arbitrary editing of tags.)
EveryDoor feels really well poised to become an exceptionally accessible editor that would rival the approachability of iD (and therefore also its prevalence and centrality in the OSM ecosystem), which would then enable an even more vibrant ecosystem of contributors enriching OSM with data captured on the ground. Indeed, I (and many others I know) have embraced ED as a surveying tool where before we did a lot more armchair mapping, or less mapping overall. This seems like an inspiring enough vision to me.
Many of the friction points I’ve had with the app since I started using it remain there (I mentioned a couple in my previous message), so I’m not sure I’d call those easy fixes; they would likely require some investment in UX design and exploration of different approaches, maybe some user testing, interviews, beta testing, A/B trials, etc. It’s not just about doing the change itself (that much might be easy/quick) but about figuring out what UI changes would make the app more intuitive for the majority of users, and I can totally see that kind of work justifying full-time employment.
In any case, I’m glad to read that UI improvements are on the roadmap! Keep up the great work!
TBH the text recognition in iOS is very good in my experience - with non-handwritten signs it gets it right 90% of the time (I implemented it in an experimental surveying app for a different project). Android may be different though!
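For the curious, the iOS side is only a few lines with Apple’s Vision framework. A minimal sketch (the function name and threading are my own choices, not from the experimental app mentioned above):

```swift
import Vision
import UIKit

// Minimal sketch: run Apple's on-device text recognizer on a photo
// and return the recognized lines of text.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the top candidate for each detected text region.
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate    // slower, but better on signs
    request.usesLanguageCorrection = false  // house numbers aren't real words
    DispatchQueue.global(qos: .userInitiated).async {
        try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }
}
```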
I should mention that in my experience, typing is usually much faster than turning on the camera, pointing it at something, waiting for some engine to parse the image, and validating the output, possibly fixing typos.
My recent mapping experience (with Every Door! thank you again) gave me this idea: one useful use case for more automatic input, even if it’s slower and unable to capture some things, is when it’s cold out. I can hold my phone up to take pictures and press the occasional on-screen button to confirm using touchscreen gloves, but things like full-on typing or several precise taps in a row don’t really work when it’s -10 out.
Thank you for your awesome work.
I translated it to Hebrew using Weblate, can’t wait to test it
In Israel we have a special time-frame limitation, namely “before Shabbat enters” and “after Shabbat ends”. Currently there’s a workaround using “sunset-sunrise”, but it’s not super accurate (the Hebrew calendar is lunar instead of solar, so instead of observing the sun, Judaism observes the stars), but adding a sunrise-sunset option should suffice.
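For reference, the opening_hours syntax already supports variable times with offsets, so once the editor exposes them, values like these made-up examples (not real hours, just an illustration) should cover it:

```
Su-Th 09:00-sunset
Fr 09:00-(sunset-00:30); Sa (sunset+01:00)-23:00
```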
Wow, we’re getting to something here! OSMTracker-like quick amenity creation, winter mapping with big buttons and minimal interaction… Feels like a new mode tbf, or a new app even (reminiscent of my OpenSurveyor project from 2013), but maybe there’s a chance for Every Door flow here. I envisioned ED as a one-step mapping app, and this would need reviewing, but also it could be done on the device itself, so… idk, I’ll think on it, thanks!
Yaron, good idea, I’ve created a ticket. And thanks for translating!
One thing with the biggest slowdown (while editing) in EveryDoor for me has been when I am adding multiple objects of the exact same kind, for example trees.
It would be great to have a mode where you can input the tags once and then just place one point after the other, only confirming the position of each point.
Isn’t it already there? Try placing e.g. a bench with given attributes: material, backrest, seats. Then add another bench and notice how all the attributes are pre-filled. I used that to map parks full of benches, lamps, and trashcans.
Yes, it does keep the tags, but it still takes quite a few steps to create a new point:
1. click on +
2. confirm location
3. choose preset
4. confirm tags
I think it would speed up the process quite a lot if you could just do the first one like normal, but then:
1. move location
2. confirm

(repeat steps 1–2 for every further point)
I really would love the option to customize the templates, minimally by reordering the entries, as well as choosing which and how many entries appear in the top part of the properties and which among “More Fields”. For example, bicycle parking outdoors is always free in my area (so that could be pre-filled), yet the question is on top, whereas “lit” is more relevant but only appears among “More Fields”.
While slow, GPS point averaging would be interesting for situations where few satellites are visible but more accuracy is needed. Too bad Garmin doesn’t provide an API.
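A rough sketch of what such averaging could look like, weighting each fix by its reported accuracy so poor fixes contribute less (CoreLocation types are used just for concreteness; this is an assumption, not how any existing app does it):

```swift
import CoreLocation

// Rough sketch: average a series of GPS fixes collected while standing
// still, weighting each by the inverse square of its reported
// horizontal accuracy so that poor fixes contribute less.
func averagedPosition(of fixes: [CLLocation]) -> CLLocationCoordinate2D? {
    let usable = fixes.filter { $0.horizontalAccuracy > 0 } // negative means invalid
    guard !usable.isEmpty else { return nil }
    var latSum = 0.0, lonSum = 0.0, weightSum = 0.0
    for fix in usable {
        let w = 1.0 / (fix.horizontalAccuracy * fix.horizontalAccuracy)
        latSum += fix.coordinate.latitude * w
        lonSum += fix.coordinate.longitude * w
        weightSum += w
    }
    return CLLocationCoordinate2D(latitude: latSum / weightSum,
                                  longitude: lonSum / weightSum)
}
```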
Along with this new period of development, is there any chance of revisiting some of the older requested features and improvements that were previously closed as wontfix?
For example, this issue where imagery layers disappear when zooming in is actually the single highest-priority thing I’d most want fixed to improve the EveryDoor experience.
This could be attached as part of the “Big UI” plugin work. Just yesterday I wanted to quickly add some nodes while walking, and the expected workflow would be to switch to the Bing imagery layer and zoom all the way in (a relatively “coarse” gesture input), but then the imagery disappears and I have to just barely zoom out (a relatively “fine” gesture, which is quite difficult with frozen fingers when it’s cold out!), so I often just give up and make a mental note, which I will never remember, to add the feature when I get home.
Here is some recent discussion in SCEE about that very idea, with real-life picture examples and various methods of averaging. Unfortunately, it turns out that it doesn’t work all that well in practice (the same building gets values all over the color spectrum depending on the phone, the zoom/camera used, time of day, weather conditions / cloudiness, etc.).
Agreed, I’d also love a quick and easy way to add to Panoramax! (There is already an issue about that at Take picture · Issue #184 · Zverik/every_door · GitHub.) I currently use mapcomplete.org, but that is a web app and thus quite slow; and I already switch between editing apps while on the go way too much for my taste: SCEE, EveryDoor, Commons app, Panoramax, OsmAnd, and the occasional Vespucci.
I love Wikimedia Commons! Note however that they have quite a big emphasis on metadata and not only the picture itself (i.e. depicts, description, categories, caption, etc.; try the official Commons app to see what would need to be re-implemented).
Would it really help that much, though (without the rest of the AI picking up the phone number, opening hours, etc.)? I find the current solution of typing “sta” and having it NSI-autocomplete “Starbucks” for me to probably be quicker than pointing the camera, taking a picture, and then having the phone analyze it…
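(For comparison, the autocomplete path is computationally trivial; here is a toy sketch of an NSI-style prefix match over a tiny invented subset of brand names, just to show why it feels instant:)

```swift
// Toy sketch of NSI-style autocomplete: case-insensitive prefix match
// over a list of brand names (a tiny invented subset of the real index).
let brands = ["Starbucks", "Statoil", "Subway", "Shell"]

func suggest(_ typed: String) -> [String] {
    brands.filter { $0.lowercased().hasPrefix(typed.lowercased()) }
}

// suggest("sta") == ["Starbucks", "Statoil"]
```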