
Image lightbox perceived slow due to always creating new urls
Closed, Duplicate (Public)


As example:

  1. View
  2. Click the first image to open in the lightbox.
  3. Navigate to the next images (second and third).
  4. Navigate back to the previous images.

The initial opening is slow because the image request follows two redirects.

Each subsequent navigation is slow for the same reason. However, the worst part is that when navigating backwards or re-opening the image a moment later, it goes through the same redirects again and ends up on a different (unique) url. Thus every view incurs several client-server roundtrips *and* re-downloads the same file, because it is served from a different url each time.
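The cache-defeating effect of minting a fresh url per request can be sketched roughly as follows. This is an illustrative model, not Phabricator's actual code; `BrowserCache`, `make_one_time_uri`, and `make_stable_uri` are hypothetical names.

```python
# Sketch of why unique per-request URIs defeat the browser cache.
# All names here are illustrative, not Phabricator's implementation.
import uuid

class BrowserCache:
    """Minimal stand-in for an HTTP cache keyed by URI."""
    def __init__(self):
        self.store = {}
        self.downloads = 0

    def fetch(self, uri, origin):
        if uri in self.store:      # cache hit: no network roundtrip
            return self.store[uri]
        self.downloads += 1        # cache miss: full re-download
        data = origin(uri)
        self.store[uri] = data
        return data

def make_one_time_uri(file_id):
    # Every request mints a fresh, unique URI, so the cache never hits.
    return f"/file/data/{uuid.uuid4().hex}/{file_id}"

def make_stable_uri(file_id):
    # A stable URI lets second and later fetches come from cache.
    return f"/file/data/{file_id}"

origin = lambda uri: b"image-bytes"

cache = BrowserCache()
for _ in range(3):
    cache.fetch(make_one_time_uri("F1234"), origin)
print(cache.downloads)  # 3: every navigation re-fetches the file

cache = BrowserCache()
for _ in range(3):
    cache.fetch(make_stable_uri("F1234"), origin)
print(cache.downloads)  # 1: subsequent views are cache hits
```

The redirects add latency on top of this; the unique final url is what forces the repeated full downloads.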

The end result is that it takes about 1-2 seconds between images, which is enough cognitive time to lose the short-term mental imprint of the visual and be unable to tell the exact difference between them. That makes going through a few dozen design iterations very irritating.

For now I work around this by opening the images in tabs instead, and switching between those.

Event Timeline

Krinkle updated the task description.
Krinkle added projects: Maniphest, Wikimedia.
Krinkle added a subscriber: Krinkle.

See also T8359, which might just be the same issue.

This is largely because of specific requirements from WMF, see T5685. We can reduce the number of redirects, but per requirements in that task we can not reuse URIs and can not make these resources cacheable.

Have WMF requirements relaxed so that it is acceptable to serve files from undiscoverable URIs and allow them to be cached?

(We can cache actual image objects in the browser, although this may create other problems, and is generally a mess that needs to be individually hard-coded in every case where we show images.)

This also affects task editing and comment preview.

When writing a comment or editing a task description that contains something like {F1234 height=300}, I'm continuously re-downloading and re-rendering that image every few keystrokes in the preview below. Every time it goes through a bunch of redirects, ends up on a new unique url and re-downloads the whole image.

I'm pretty sure the preview case is recent (last couple weeks) and I don't think we changed anything, but I haven't hunted down what actually did change. My theory was that it was a change in Safari behavior, since I don't see this behavior in Chrome and I first observed it after applying an OSX update. Are you using Safari in OSX?

I'm fairly sure that I've observed that behavior in Chrome.

Oh, sorry:

  • With height=300, yeah, it's going to be a flickery mess.
  • Without height=300, it's a flickery mess in Safari only, in the last few weeks, as far as I can tell.

Ah OK... I can't remember if I had set a height when I observed this behavior.

@Krinkle: I can't come up with a reasonable way to fix the height=300 comment preview case unless we render the element entirely on the client or relax the WMF constraints. I think the WMF constraints are excessively paranoid, but it's moot since you're the only person complaining about this and you're on the WMF install, so adding a "[X] Use less paranoid caching that doesn't flicker all the time." option wouldn't fix the problem for you.

To start with, height=300 is ultimately equivalent to the raw image, because even if we generated a thumbnail at that height and made it cacheable, you could keep requesting height=400, height=500, and so on until the thumbnail was the same size as the original. To prevent this, we'd have to either remove height or limit its maximum value to no larger than the default thumbnail. This problem would still exist for size=full.
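The clamping mitigation described above amounts to something like this sketch. The default thumbnail height of 250 is an assumed value, and `effective_thumb_height` is a hypothetical helper, not an existing Phabricator function.

```python
# Sketch of capping the "height=" parameter at the default thumbnail
# size so it can't be ratcheted up to reproduce the full-size image.
# DEFAULT_THUMB_HEIGHT is an assumed value for illustration.
DEFAULT_THUMB_HEIGHT = 250

def effective_thumb_height(requested):
    """Clamp a user-supplied height to the cacheable default thumbnail."""
    if requested is None:
        return DEFAULT_THUMB_HEIGHT
    return min(int(requested), DEFAULT_THUMB_HEIGHT)

print(effective_thumb_height(300))  # 250: clamped to the default
print(effective_thumb_height(120))  # 120: smaller sizes pass through
```

As noted, this does nothing for size=full, which by definition is the raw image.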

The WMF constraint is essentially that it must be insufficient to know a secret URI to download the image data.

We can not serve this resource with cache headers if it's going through a CDN, because then the CDN will cache it, and a user knowing the secret URI can retrieve it.

We can not serve this resource from some image-unique or user+image-unique URI because knowing the secret URI allows any user to retrieve it.
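The two constraints above boil down to a header-selection policy along these lines. The Cache-Control values are standard HTTP directives; the `cache_headers` function and the CDN/no-CDN split are a hypothetical sketch, not what Phabricator actually ships.

```python
# Sketch of the cache-header trade-off described above. The directive
# values are standard HTTP; the policy function itself is hypothetical.
def cache_headers(behind_cdn: bool) -> dict:
    if behind_cdn:
        # A shared cache (CDN) would serve the secret URI to anyone
        # who knows it, so shared caching must be forbidden entirely.
        return {"Cache-Control": "no-store"}
    # With no shared cache in the path, the browser's private cache
    # could safely keep the bytes for a while (30 days here).
    return {"Cache-Control": "private, max-age=2592000"}

print(cache_headers(True))   # {'Cache-Control': 'no-store'}
print(cache_headers(False))  # {'Cache-Control': 'private, max-age=2592000'}
```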

We could perhaps designate a userdata domain, separate from the CDN, which serves with cache headers but which we instruct users not to put a CDN in front of. This would probably work, except that I don't think we have any way to version the URI properly. That is, if you're typing a comment preview in browser A, and then open browser B and preview the same comment, it will break if we're giving you a one-time-use URI and trusting that your browser cached it.

So maybe we could version the URI per session, and then if you ever clear your browser cache you just have to log out and log back in and we just hope users figure this out.

Alternatively we can start sending pseudomarkup over the wire and passing it through some pre-filter that mucks with it to replace the <img /> nodes with clones of other <img /> nodes we previously built and are hiding in process memory, but this is complex and I'm not sure it'll even really work.