Hi, with our On Wheels app we have collected a large dataset of around 18 000 wheelchair-accessible parking spaces in Belgium. We are struggling to find the correct way of importing this data into OSM and making it visible in the app. Our dataset is node-based, while OSM uses both nodes and ways for parking spaces. For wheelchair-accessible spaces, drawing ways is hard to implement in an app. I see the advantages of using ways to draw the parking spaces, but when new parking spaces are added through our app, they will be added as nodes. The same applies to the information we collect about buildings: the building itself is drawn as a way, but the information about its function (restaurant, shop, …) is in a separate node. We are wondering whether this concept can also be used for wheelchair parking spaces?
So this would mean parking spaces are drawn as a way (amenity=parking_space), but a node (parking_space=disabled) is also added on each parking space carrying all the accessibility information:
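As an illustration only (the exact keys would of course need to be agreed with the community, and the values below are made up), such a node could carry tags like:

```
parking_space=disabled
wheelchair=yes
width=3.5
length=5
surface=asphalt
```

The surrounding parking space would then keep amenity=parking_space on the way, while an app only has to read and write the node.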
Hi, thanks for your input. I understand the reason for drawing it as a way (which I still support and don't want to change), but if we want people and apps to actually use this data, nodes work much better. Since accessible parking spaces don't always have the same surface as the rest of the parking lot, it seems better to give this information on a node. The width and length are also crucial information that wheelchair users need in order to know whether they can park there or not. We already do this with buildings, so it makes sense to do the same, at least for disabled parking spaces. Drawing a parking space as a way on your PC is easy, but doing it in an app is not. Also, the icon for accessible parking is only rendered on nodes, not on ways.
This is also a first step in asking the community, and we will post this question on several communities/platforms. Only once we have a consensus will we import our data, following the guidelines.