Simple 3D Buildings "Outline": Necessity & Representation

I’ve just started mapping a few “Simple 3D Buildings” (S3DB) and ran into a couple of more-or-less severe problems concerning the involvement of multipolygons and the outline role, which I would like to bring to discussion:

If we consider S3DB from the purely 3D-buildings perspective of renderers which fully know how to parse them (i.e. F4Map and Streets.Gl), the set of building:part polygons, grouped by a building=* relation, seems fully sufficient to represent a building.

Only if we add renderers which don’t know how to interpret that construct into the equation, or if we want to expressly indicate that a different 2D outline than the natural “union of all parts” should be used, do we need a simple (multi-)polygon. Incidentally, at least F4Map also requires the outline to be the union of all parts; it does not deal well with parts which protrude beyond the outline, even though S3DB allows that.

Here comes the problem with adding an outline:

Consider an S3DB building made from a 10x10 square on level 1; centered on top of it sits a 5x5 square on level 2; and on level 3 there is a 20x5 rectangle which protrudes beyond the 10x10 square at both ends:

                                    +-----------+
#########L3#########                |           |
       ##L2##               +--------------------------+
     ####L1####             +--------------------------+
                                    |           |
                                    +-----------+

z |__                     y |__
     x                         x

Note how L3 and L1 share no vertices: the 4 intersection points seen in the (x,y)-plane are not vertices of either polygon. How can I obtain an outline?

  1. If I want to create an outline for consumers which are not able to automatically infer it as “the union of the footprints in the (x,y)-plane”, I would ideally – mathematically speaking – create the outline as the explicit union of the building:part polygons. But OSM doesn’t have a Union relation, only a Multipolygon relation, and that won’t accept overlapping areas!
  2. I could break the parts down by splitting them such that, afterwards, I can choose a set of split building:parts which do not overlap, and put that into a multipolygon. But splitting the parts – which are already exactly what they should be – would create its own set of difficulties.
  3. I could manually create a polygon for the outline, but if I place it where it is supposed to be, it will have 4 additional vertices in the convex corners where originally no vertices exist! On the one hand, the separate outline polygon should exactly match the outline – which suggests the polygons should share vertices – on the other hand, I mustn’t introduce vertices into the building:parts which aren’t there. Plus, if I don’t share these vertices, they will sit right on the corners, and an editor like iD will try to snap them to the edges of L3 and L1 first thing.

The only proper solution seems to be composing the outline from the parts through a Boolean union; but for that we currently have no tool.
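
To illustrate what that union would produce, here is a minimal sketch in PostGIS – just one possible tool; ST_Union is its standard dissolve operation, and the coordinates are the example’s local units, not real lon/lat:

    -- Union of the three building:part footprints from the example above.
    -- Coordinates are illustrative local units, not real lon/lat.
    SELECT ST_AsText(ST_Union(ARRAY[
        'POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))'::geometry,                  -- L1: 10x10
        'POLYGON((2.5 2.5, 7.5 2.5, 7.5 7.5, 2.5 7.5, 2.5 2.5))'::geometry,  -- L2: 5x5
        'POLYGON((-5 2.5, 15 2.5, 15 7.5, -5 7.5, -5 2.5))'::geometry        -- L3: 20x5
    ]));
    -- The outline it returns contains corner vertices such as (0 2.5),
    -- (0 7.5), (10 2.5) and (10 7.5), which exist in none of the parts.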

But is there anything that speaks against multipolygons supporting overlapping areas? The current validity rules say that members must not overlap, but is that really necessary?

It is true that an outline polygon would not be required most of the time. But only if all data consumers, even those with no interest in 3D, were updated to be able to handle such mapping.

Because this seemed too much to ask, we intentionally designed Simple 3D Buildings to be compatible with 2D applications. Those should be able to ignore the building:part polygons and consider only the building outline.

Additionally, we didn’t want to require too many relations (which many mappers consider challenging to edit). This is where the idea of 3D renderers using the building outline to collect building parts in the absence of a relation comes from.

As far as I can see, this solution works for both 2D and 3D applications.

You reject it because it violates an “I mustn’t introduce vertices” rule, but isn’t that just a self-imposed constraint? From a practical point of view, I can’t see why creating those vertices or sharing them between polygons should result in incorrect outcomes.


Yes, it’s true that it would render the correct results. But I can only consider it a good solution if it represents the data in a correct way, too!

L3 and L1’s geometries in the example are independent and, as such, should not be represented through common vertices. That seems like a very basic assumption of how we should model geometries.

If we add new vertices on the edges and make them affect each other, that is a misrepresentation, and we should try to find a better solution, if one exists.

And I think generalizing multipolygons to arbitrary Boolean operators (union, intersection, difference) – or introducing a new relation of that type – can be such a solution. The current rules for multipolygons – which ones are valid and which aren’t – are, although not arbitrary, unnecessarily limiting from a semantic point of view. These rules cause this problem and solve no problem for the user – they only seem to make things easier for implementations (and often, I think, not even that is the case).

They (partially) solve the problem of avoiding extreme complexity.

I would prefer complete elimination of 3D building mapping over making multipolygons significantly more complex.

Or making it impossible to ignore building:part data.

Why do you think that this is forbidden?

You can do this.

If they represent a single building, then they are not independent.


There are situations in the OSM data model in which the presence or absence of shared nodes is considered to make a semantic difference, but this is not one of them.

I fully agree that a correct rendering result isn’t sufficient if the data is incorrect. But there would not be incorrect data in this example because the OSM data model does not share your basic assumption.

I’m afraid this is how I would characterize your “mustn’t introduce vertices” rule – without it, this problem vanishes.


The thing is, there isn’t really any such thing as the OSM data model. There is only what floats around the wiki and the community, and what people choose to do or not to do.

So let’s take a step back and look at this from an axiomatic point of view – rather than taking what is as a reference for what should be!

What we all want is quality and quantity, which means correctness and participation in OSM. Correctness and participation require simplicity, because not everyone can become a highly motivated expert. Indeed:

We make it simple for the user to input data correctly, by using natural and minimal semantics, which they can easily understand and internalize.

“Natural” means that the way a situation would be described without giving it much thought is also the way we choose to describe the data. “Minimal” means that we do so in the easiest way possible.

For example, it is simple for the user to describe something which is characterized by three vertices as: a triangle with three vertices. If, for any reason, we demand that the user describe that triangle through more vertices or with a different kind of representation (“a quadrilateral with one of its vertices located on a straight edge”), then we make it complex and often complicated.

Let’s take another, more sophisticated example: a soccer pitch, a square forest, a square lawn in a larger grassland, an access road, and a parking lot together form a recreational park. The soccer pitch is located somewhere on the edge of the forest. The road cuts through the forest and across the grassland, where the parking lot is also located.

What’s the natural, minimal way to describe the data?

Well, we just did – in the paragraph above! Now, the objects which we are about to lay down on the map should be a verbatim representation of the above, meaning:

  • 1 polygon each for forest, grassland, lawn, soccer pitch, and parking lot
  • 1 line for the road
  • 1 relation for the recreational park

More precisely, we said that “the soccer pitch is located on the edge of the forest” and “the road cuts through the forest”!

We did not say “the soccer pitch is where the edge of the forest would be without the soccer pitch, and the forest extends only to the edge of the soccer field” (naturally, there are no trees growing in the middle of the soccer pitch!).

We did not say “the road cuts through the forest and the grassland where the latter would be, but now they extend only to the sides of the road”.

And again, the same for the square lawn in the larger grassland.

Also, note how the natural way of describing the recreational park is saying that the park’s grounds are where any of forest, soccer pitch, lawn, parking lot, or the road are located.

The simple (and elegant) choice of objects is therefore:

  • a square polygon for the forest
  • a rectangular polygon, with its center on the edge of the forest, for the soccer pitch
  • a large polygon for the grassland
  • a square polygon for the lawn
  • a polygon for the parking lot
  • a line for the road
  • a relation expressing a logical OR – a geometrical union – connecting forest, soccer pitch, lawn, parking lot, and road.

We just made it simple for the user to naturally map the data correctly!

If you still think there is complexity in this example, then indeed: We didn’t make it easy for the renderer (consumer)!

The lazy renderer would prefer not to have to consider the semantics of “a soccer pitch, partially overlapping with a forest and grassland”, “a lawn entirely contained within grassland”, “a road overlapping with forest and grassland”, or “the Boolean union of several overlapping areas”. The lazy renderer would have preferred that we decompose and split our polygons such that there is no overlap. The lazy renderer would have expected us to implicitly do the complex work which it could do, but doesn’t want to!
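
To put that “complex work” into perspective: if the hypothetical union relation from the list above existed, the consumer side would essentially be a single dissolve step. A PostGIS sketch – the table and column names are invented for illustration, and the road line is buffered to an area first, since a bare line has no area:

    -- Hypothetical resolution of a type=union relation (invented schema):
    -- dissolve all member geometries into the park's grounds.
    SELECT ST_Union(
        CASE WHEN ST_Dimension(geom) = 1
             THEN ST_Buffer(geom, 4)   -- widen the road line to an area
             ELSE geom
        END
    ) AS park_grounds
    FROM relation_members
    WHERE relation_id = 12345;         -- the recreational-park relation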

Luckily, consumers like Mapnik/Carto et al. do a good job of modelling the semantics of things like “a road overlapping with grassland” and correctly render the road on top of the grassland, etc. – all despite OSM’s lack of formal semantics.

But that does not free us of our responsibility to remain vigilant and rigorous w.r.t. modelling the correct semantics:

  • Don’t mistake complexity in the tools/consumers for complexity in the data model. Indeed, a simple, elegant data model will often enable better, more efficient tools, despite our first impression that the tool would have to become more complex.
  • Don’t take any tool/consumer or its capabilities as a reference for how things should be. Practical considerations are fair and necessary, but we should openly express the optimum and say it aloud if a tool/consumer needs to be improved for a larger goal.

Specifically – and coming back to the thread – I think generic Boolean operations are a basic component of simple, sustainable semantics and an improvement that is needed. S3DB is only one example where this has an immediate application, but the possibilities reach far beyond that (as in the example above), even if you disagree with 3D mapping! :wink:

A single area covering the park’s grounds is much simpler, especially for future edits, and strongly preferred.

(You can also phrase it as “an area with embedded geographic information expressing its relation with objects mapped as within it”, if you prefer; expressing this relation manually is a poor idea.)

And you casually propose introducing highly complex objects instead.


While I agree that there are cases where the OSM model could benefit from more “natural” ways to express things (I’d especially love for nested multipolygons to express things like “this tent pitch site and this caravan parking together form a camping site”, not too different from your example), I think introducing operations like union, difference, intersection, etc. would be going too far, or possibly rather going in the wrong direction. I personally have no issues thinking in those terms, thanks to years of working in GIS and software engineering, but for someone not familiar with them those terms are as unnatural as the terms road and parking lot are to a fish. (I’m actually serious here; I’ve taught a few SQL courses, in which set operations are an important part of understanding the more advanced uses, and set operations are often among the hardest parts for students to grasp.)

Also, as mentioned, relations are something many existing mappers find hard to work with and try to avoid, and something that presents a significant hurdle for new mappers – so much so that there are movements to reduce the number of multipolygons to the bare minimum. Better tooling could/would likely improve this situation, but developing good tools is no walk in the park.

Consuming data from OSM is already very hard (compared to other data models), and while “making it easier for producers/mappers” is a valid argument, it must always be weighed against “making it easier for consumers/renderers”. Otherwise we’d end up with a data model that makes it supremely easy for mappers, but impossible to the point of extinction for renderers. A balance is what is needed.

Regarding having to add vertices in the locations you want to avoid, maybe it would feel better/more natural to consider them as actually existing in the form of the locations where the parts meet each other?


[image: yellow circles highlighting the added vertices]

Regarding those yellow vertices: for buildings there are QA warnings if the corners of touching buildings are not connected, leaving either a gap or an overlap.

These are all good, practical points.

As for whether the concept of a geometrical union is harder for some to grasp than re-expressing the union manually as a separate polygon: I think that merely providing the option to do it elegantly (using a union relation) doesn’t stop anyone from using the less elegant/less natural method, if they so want. Of course, I admit that the correlation between “being inexperienced” and “not grasping set operations” weakens my point about this being actually necessary/beneficial.

Concerning the balance between ease for the producer vs. that of the consumer: I understand your concern, but I think in reality the additional requirements for the renderer are rather easy to meet. Being able to consistently combine areas in a mathematically complete and consistent fashion should integrate well with any remotely reasonable architecture. In fact, I think that

  • current limitations, by which tools refuse to process multipolygons which violate “the rules”, are often artificial and unnecessary specializations.
  • forcing the tools to handle polygons in a mathematically consistent, generalized manner would eventually benefit them and create nicer, more powerful source code.

You inadvertently modified the premise, which I think makes a difference: the blue block (5x5) must not be as wide as the red one (10x10). Then there are no naturally occurring vertices at the projected intersections.

In theory: Sure, it’s all just code using algorithms present in any competent GIS library.
In practice: Nope, not at all.

I’m currently working on a project where I’m downloading OSM data and pushing it into a database, and because I have some rather special requirements I decided against using an existing tool such as osm2pgsql. The way I construct multipolygons is using the ST_BuildArea function, which conveniently works in such a way that I can just give it all the members of a multipolygon relation and it will give me a multipolygon (it does ignore the roles of the relation members, which luckily should not be an issue provided the multipolygon relation is valid). Introducing generic geometric set operations would require recursive SQL, at least 4-5x the amount of code, and likely 10-100x higher runtime. Sure, I could move all that code out of SQL and possibly even go back to using a tool such as osm2pgsql and let it handle all that for me, but I’ll just leave it at this: there are really good reasons for why I’m doing it the way I am, without going into the details.
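
For context, the construction described above boils down to roughly the following single call – a sketch with invented table and column names, not my actual code:

    -- Rough shape of the described ST_BuildArea approach (invented names):
    -- collect a relation's member linework and let PostGIS form the rings.
    SELECT ST_BuildArea(ST_Collect(way_geom)) AS mp_geom
    FROM relation_member_ways
    WHERE relation_id = 12345;
    -- Roles are ignored; ring nesting (outer vs. hole) is inferred from
    -- containment, which presumes a valid, non-overlapping multipolygon.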

Sorry, misread your initial description. But I still think that for practical reasons it’ll be easier if you add those vertices.


These kinds of discussions are a huge part of the fields of information and data modelling. And you’ll never ever be able to model reality in every little detail, unless you want a completely useless model. Sometimes you have to add some artificial/synthetic information (such as your vertices), and sometimes you have to drop some real information.

Forgive me for being blunt, but I think you can also relate if I respond that none of these are good reasons. Practical, powerful reasons, yes – and real counter-examples against my hypothesis that common tools could readily be improved. But not good ones.

You’re working with what you have, and what you have is designed in a way which apparently goes against good principles. Every intersecting set of polygons can be programmatically transformed into non-intersecting polygons – by a preprocessor, orthogonal to any later code, if you so must. That you’re using libraries which can’t seem to accommodate such a simple concept does not invalidate the concept itself [1].
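
To sketch that preprocessor concretely – assuming PostGIS and an invented table name – dissolving an arbitrary set of intersecting polygons into non-overlapping geometry is essentially a one-liner:

    -- Node and dissolve possibly overlapping polygons into one
    -- non-overlapping (multi)polygon. Table name is invented.
    SELECT ST_UnaryUnion(ST_Collect(geom)) FROM overlapping_parts;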

I’m not implying anything about PostGIS either – they may have had their reasons for these design choices. But I want to be clear and explicit about what’s good and who’s the culprit, if we assess the situation with an optimum in mind.

[1] That said, I’m not convinced we couldn’t find a PostGIS-compliant way to deal with this, but without the code I can’t tell.