The site is also much slower to load on mobile than previously.
This isn't caching, as I am seeing this per link followed.
Loading this page made about 60 requests and transferred a total of 1.3 MB for me. With or without cache doesn't seem to matter. The biggest transferred chunk is (oh wonder) the JavaScript application itself.
It seems to me that this is something that should be cached / be cacheable? Even the downloaded images (OSM logo etc.) are not cached.
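One quick way to see whether a given asset is even allowed to be cached is to look at its caching headers. A minimal sketch, assuming you run it in the browser console and substitute a real asset URL from the Network tab (the URL below is just a placeholder):

```ts
// Sketch: print the caching-related response headers for one asset.
// The URL is a placeholder -- use any asset URL you see in the Network tab.
const assetUrl = "https://community.openstreetmap.org/some-asset.png";

fetch(assetUrl).then((res) => {
  // A missing cache-control header, or "no-store" / "no-cache",
  // means the browser cannot (or will not) reuse the response from its cache.
  console.log("status:       ", res.status);
  console.log("cache-control:", res.headers.get("cache-control"));
  console.log("expires:      ", res.headers.get("expires"));
  console.log("etag:         ", res.headers.get("etag"));
});
```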
Paying attention to bandwidth needs is a critical thing to do, now and going forward. It very likely can be better managed, perhaps starting with an urgent feature request ticket. Smarter caching can be a "quick tune-up" amount of effort (not much), and then it's 99% or 100% done (for now).
As far as @SK53 's refreshing notifications, I don't doubt it! Not just flushing of old caches, but downloading of both the code to run things and data which don't appear to be well cached. That really should be tuned up.
I, too, have noticed significant differences (sidebar included) with Safari (browser) on macOS, but I'll reserve further comment for now. I'm on a fast, all-I-can-drink network, so I don't see / pay attention to download speed or caching / buffering issues (directly), though I can do some "traffic analysis across the wires" if need be.
I'm glad to see people talking about things like "performance and network bandwidth on mobile devices", as that is truly critical for Discourse 3 (and beyond) to work (in a global project, on a billions-of-mobile-devices planet).
I've heard it said before (about software "tuning"): "this is manageable." We talk about it a bit (like this), the right people are tapped on the shoulder with a small list of possible tune-ups, and "things improve."
It should be said at least once (again?) that "an initial download" is simply going to be a bandwidth hit. So quite a few people (including those who are only just now reading this using 3.0) might have already experienced the worst of it; whether it gets better or stays sluggish ahead is still yet to be determined.
So, eyes open, everyone. If you see something, say something. We can turn off that spigot as needed.
Steady ahead as she goes, Captain. (Captains, maybe).
Caching is done automatically by the browser unless it is prevented for one reason or another. However, I don't understand why it doesn't seem to be cached. Maybe a web developer can chime in here?
Maybe you accidentally activated "Disable cache" in DevTools? For me caching works; only about 700 kB are transferred.
Okay, never mind. I found the documentation for the Transferred column: in a nutshell, "service worker" means that the caching is handled by a service worker and that no bytes were transferred. The developer tools in Edge are clearer in that they also display the bytes transferred.
In conclusion, it doesn't seem to be a caching issue at all. What is always transferred without cache is the HTML page itself, but this page, for example, is just 40 kB.
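For context, here is roughly what a cache-first service worker fetch handler looks like. This is a generic sketch, not Discourse's actual worker code; anything answered this way shows up as "(service worker)" in the Network tab with no bytes going over the wire:

```ts
/// <reference lib="webworker" />
// Generic cache-first fetch handler (a sketch, not Discourse's real worker).
// Requests answered from the cache appear as "(service worker)" in the
// browser's Network panel, with zero bytes transferred over the network.
const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener("fetch", (event) => {
  event.respondWith(
    // Serve the cached copy when there is one; otherwise fall back to the network.
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```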
No, I have not activated that option. Curious:
What page were you testing on? I just used this topic page. Maybe there were differences in how much new content was there at reload time?
But I wouldn't worry too much about it; 500 kB to 1.5 MB transfer size is really not much compared to most other websites.
I measured the amount of data actually received by my computer. Using Firefox.
First load: 1.77 MB
Second load: 32 kB
So the caching works fine.
Moving the performance conversation to a new topic for better tracking.
@Firefishy bringing this to your attention
Depending on where you are, that might be the case. In some places that might be enough to make using this site financially impossible.
Separately from that, I find that even on a fast mobile broadband connection (31 Mbps down) it takes about 5 seconds to load the first page, and 4-5 seconds to load subsequent ones**, following a link from an email. A snappy conversation it is not - if any of the systems I used to work on back in the 80s were that slow on every page load, they'd have been thrown out.
** Firefox, Nokia 6.1, Android 10. On a newer phone (Nokia X10, Android 13) it's about 3 and 2 seconds respectively. Better, but still not great.
While I sympathise with anyone having issues with the caching / download size of the website, we are running a fairly standard install of Discourse. Any issue will likely need to be fixed upstream.
Modern browsers use the brotli-compressed assets; older browsers use gzip-compressed assets. Make sure to look at network transferred bytes, rather than uncompressed asset size, when using web developer tools.
I have briefly tested using the latest Chrome and Firefox: after an initial large set of JS/CSS etc. downloads, subsequent pages load using less than 100 KB of new downloads.
I can't reproduce that, though. It takes about 1 second on Android 11 with Firefox, with 1 of 4 bars of 4G connectivity. Which is plausible, given that on reload something like 40 kB or so are actually transmitted.
I will agree here. In fact, opening a random thread on the OSM instance even seems to load noticeably faster (2.5 s largest contentful paint) than Discourse's own instance (4.2 s largest contentful paint).
Thus I would suggest that people who are having performance issues on community.openstreetmap.org try meta.discourse.org too. If that instance is noticeably faster than the OSM one, then there are things that can be addressed here. If it is about the same slowness, then any request for performance improvements should be taken up with the Discourse developers.
This webpagetest.org waterfall graph shows about half the time of that 8 seconds is just JavaScript pegging the CPU without any data even attempting to load. You may also wish to test and compare your hardware + browser rendering speed in Speedometer 2.0 (e.g. my laptop gets 112 runs/min in Firefox and 178 in Chromium; the P30 Pro only 35 in IceCat and 40 in Firefox Klar; and the Galaxy S II only manages 7 runs per minute).
Discourse is unfortunately a JavaScript-based beast, which means that it offloads much of the work to the client, which in turn means that slower devices (read: mobile phones, especially ones that are not the newest top-of-the-line models) are likely going to suffer.
For example:
- On my Huawei P30 Pro, clicking on a random not-yet-visited thread on the OSM instance over a 50 Mbps Internet link takes about 2.2 s to fill the screen with data (i.e. largest contentful paint; see the sketch after this list for one way to read that number yourself).
- On my old Samsung Galaxy S II, using that same Internet link, that same page shows in about 3.8 s (and it complains about an unsupported browser, likely meaning not all functionality has been loaded. It displays the article just fine, though).
- On my 11th Gen i5-1135G7 @ 2.40GHz laptop, again on the same Internet link, it shows in about 1.2 s.
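For anyone who wants to reproduce such numbers without a full Lighthouse run, here is a small sketch using the standard PerformanceObserver API; paste it into the browser console (or a DevTools snippet) before the page finishes loading:

```ts
// Sketch: log largest-contentful-paint entries as the page renders.
// entry.startTime is milliseconds since navigation start; the last entry
// reported before the user interacts with the page is the LCP.
const lcpObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`LCP candidate: ${(entry.startTime / 1000).toFixed(2)} s`, entry);
  }
});

// "buffered: true" also reports entries that occurred before observe() was called.
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
```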
Grim statistics offered by Chrome Lighthouse (available via an F12 keypress) while it evaluates performance say that almost 70% of mobile pages take almost 7 s to load (and it then proceeds to say how many conversions you are likely to lose because of that).
It's the beast part I have trouble with here. It's loading and displaying text and a few sporadic images, a thing browsers already do and have done for the last 25 years. Most of the fancy functions don't have to do anything until you click on them. Why must everything get bloated until it feels like dialup?
I wonder how hard it would be to crawl the site or part of it periodically to provide static HTML for a few key areas? A wget of this thread definitely looks like an improvement - relative links to e.g. https://map.atownsend.org.uk/categories obviously don't work on that page, but I find the presentation there vastly superior to the presentation here (all the text is there, the only pictures are the ones that matter, no silly boxes asking "has your question been answered", "2 Likes" instead of silly symbols, etc.).
Well, me too. Unfortunately, that discussion either:
Yeah… The problem is developers usually have beasts of computers, and make programs which work OK-ish on them (which means they work quite badly for most everybody else). My solution would be to require by law that any software developer use at least 10-year-old technology (or older) and be denied access to newer tech. That way that specific problem with bloat would solve itself automagically (although it might induce other tiny problems, like nobody wanting to be a developer)
I wonder how hard it would be to crawl the site or part of it periodically to provide static HTML for a few key areas?
Not too hard, I guess, but looking at that example of yours, it seems to completely break Discourse navigation (which, in any thread with more than a dozen messages, is absolutely vital for making sense of a Discourse discussion at all, much less participating in it). While who replied to what could be simulated with regular HTML anchors, it would need something a little more advanced than wget.
Perhaps an additional extra-light Discourse theme (akin to mbasic.facebook.com) would be better suited (has anybody looked into whether something like that exists?)
(For those wanting that pure wget behaviour, they can have it already by simply disabling JavaScript for this site, for example with uBlock Origin, NoScript or similar browser add-ons.)
How's performance now that we have updated to 3.1?
For me it's still struggling on a combination of poor wifi plus poor cellular coverage, but I suspect that's a fundamental issue of the "everything in JavaScript" design.
No, seems to be "about the same". Tapping a link on a phone with good 4G coverage still takes about 5-7 seconds for the page to appear. Same phone as above.