- Browser makers Apple, Google, Microsoft, and Mozilla have announced Interop 2024, a project to promote web browser interoperability.
- JPEG XL, a potential replacement for JPEG and PNG image formats, was not included in Interop 2024.
- The rejection of JPEG XL has been blamed on Google, with the Google Chrome team deciding not to support the image compression technology.
Archive link: https://archive.ph/nulY6
Am I having a stroke, or is this headline horrendously written?
I read it four times and I still don’t understand what love-in means.
Dumb way of saying orgy.
I still don’t understand. WTF are we talking about? This is tech news, not a celeb scandal. Why can’t we just use simple words!
Why say lot word when few word do trick?
What’s the smart way?
Orgy.
I did decline an event because it said “Dinner Party” in quotes.
When they explained, it turned out the quotes meant it’s not really dinner, just snacks and board games. Shame. Was expecting an orgy.
I believe you just responded to a message with the answer to your question.
Horrible headline.
“Browser maker love-in” → Chromium (used by most browsers)
“snubs” → doesn’t support
“Google-shunned JPEG XL” → JPEG XL (because Google doesn’t like it)
It was both Chrome and Firefox who were against the format, both saying it was too expensive to implement for too small a benefit.
Thank you. I legitimately could not understand the title.
Something about love in subs for Google? And also JPEG?
I think that’s the rule these days: if it piques your interest but you have some trouble understanding the headline, you might just click on the article.
The Register has been doing this shit for years. They’re trying to sound smart…
What is this bullshit titlegore
Right? I read it like three times thinking I was just missing an inflection or something. Jesus
I’m a photographer; AVIF and WebP do not serve my needs, but JPEG-XL does.
I run my own website, down to the hardware in my living room; I will not store five variations of any one picture just so I can serve the best available format to clients, when JPEG works everywhere and JPEG-XL offers me a lossless transition from JPG to JXL.
Chromium is literally the only reason JPEG-XL isn’t being adopted right now, and it’s so obvious that Google is pulling those strings.
JPEG-XL Ride or Die.
Mozilla has not jumped on the JPEG XL bandwagon either: The Firefox maker said it’s neutral with regard to the technology, citing cost and lack of significant differentiation from other image codecs.
Two browser orgs.
Not arguing, just pointing it out.
Mozilla are also dumb, yes, but they aren’t the ones in control of 90% of the browser market share.
There is a difference between indifference and actively working against something.
I saw a web warning today saying “if you can’t see {x}, then consider upgrading to Firefox” and it filled me with joy.
GIF is the future Mr Cameraman
Why do you need to transition from jpeg to anything else? Just keep using jpeg for old files.
Chromium is literally the only reason jpeg-xl isn’t being adopted right now
That’s not a “reason”, it’s a “decision”. Their actual reason is pretty good: they don’t want to support every image format that comes along. That’s a slippery slope; there are several hundred image formats. Should they all be supported? How many of them have security flaws? How much work is it to check for security flaws even when none exist?
The original image formats for the web, jpeg, gif, png, svg, all have major benefits compared to each other. That’s why they were successful. There used to be other widely used image formats but they all fell by the wayside because the goal is to try not to have many formats. Ideally we’d only have one.
And WebP moves a long way in that direction; it does basically everything except vector images. AVIF is still around for efficiency reasons (it’s easy, fast, and low on battery for camera hardware to create an AVIF).
JPEG-XL has advantages, but unlike those two, they are really small and not worth the effort.
Converting from JPEG to JXL brings some serious space savings and can be done losslessly.
The original image formats for the web, jpeg, gif, png, all have major benefits compared to each other. That’s why they were successful.
We change video formats without one having any major benefit over the others. I think it’s totally reasonable to do the same with image formats, especially when the data can be losslessly compressed even further.
I wouldn’t call speed a major factor for image processing anyway. It’s hugely important for movies, where AVIF is coming from, but much less so when there is no hard 30x2160x3840 pixels/s benchmark you need to reach.
Just keep using jpeg for old files.
And bloat up my codebase with support for a new file extension every 2-5 years? I’ll just keep using JPG, then, like the rest of the sane internet, and the format will never die. JXL offered an actual upgrade path; WebP and AVIF don’t.
Someone is adamant.
I still won’t get over it and will keep fighting for JPEG XL. It would fix so many issues and greatly reduce the bandwidth needs of the internet, without weird licensing or royalties and without being a “what if we just took one frame from a video” picture format. Also, it can be losslessly encoded back to JPEG for legacy uses. What more could one want?
Well… Google wants weird licensing or royalties; that’s why they keep stamping it out.
I mean, there are advantages to using AV1 for photos… hardware-accelerated decoding being one.
Decoding a large AVIF image grid should in theory work on a GPU and happen faster with less power than any software-based image format implementation.
AV1 is also just an awesome format that’s entirely free to use out of the gate.
Well yes, but without hardware acceleration JPEG XL is many times faster. Also if you only have a CPU, for example.
It’s also highly parallelizable compared to AVIF, which matters a lot considering core counts are growing with the likes of ARM and hybrid-architecture CPUs.
AVIF also fares badly with high-fidelity and lossless encoding, has a third of the bit depth, and has pretty small dimension limits for something like photography.
I don’t think AVIF is per se a bad format. I just think that if I want to replace a photo-oriented format, I’d like to do that with one that’s focused on “good” photos and not just an afterthought with its own upsides and downsides.
Also if you only have a CPU, for example.
I thought even mobile-tier integrated GPUs can decode AV1 extremely quickly.
Well yes, sure, but remember AV1 decoding only became standard like 1-2 GPU generations ago, and encoding only this generation. iPhones only got support with the 15 Pro, so it will be another generation before it trickles down to the base models. And what about the hundreds of millions of Android phones in Asia and the like with dirt-cheap SoCs? Pretty sure they won’t have dedicated AV1 decoding hardware for a long time.
So that’s a TON of hardware being made slow and inefficient if everything were to be AVIF tomorrow. Not saying AVIF decoding will be a big hurdle in the future, but how long until all the hardware browsing the web has been replaced? That’s why I think something that’s efficient and fast on CPUs, without any specialised hardware, is better suited as a replacement.
Servers often come without a GPU, and they’re usually the ones encoding image formats.
I don’t think we should worry about servers meant for image transcoding not having the proper hardware for image transcoding. The problem with the GPU requirement starts and ends with consumer devices imo
Isn’t AV1 exclusively for video encoding? I haven’t heard of it being used for photos.
Thanks to wasm, you don’t have to bow to Google’s whims and can choose to include JPEG XL support on your website if you want: https://github.com/niutech/jxl.js
Do you know if it uses the native decoder if available (so, in Safari I guess)? Doesn’t say in the readme.
I believe so. This line in the source code means it’ll only attempt the decoding if an `img` element for a `.jxl` image URL fails to load. If you’re on Safari, you can verify it by going to the demo page at https://niutech.github.io/jxl.js/ and inspecting the image element. If the `src` attribute contains a blob, then it’s decoded using the wasm decoder. If the `src` attribute contains a URL to a `.jxl` file, then it’s decoded natively.
Very cool, thanks. Will keep this in mind.
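In case it’s useful, here’s roughly what that fallback pattern looks like in TypeScript. This is only a sketch of the idea, not jxl.js’s actual code; `decodeJxlToBlob` is a made-up stand-in for whatever wasm decoder wrapper the library bundles:

```typescript
// Minimal sketch of the "wasm fallback on <img> error" pattern described above.
// decodeJxlToBlob is a hypothetical wrapper around a wasm JXL decoder; it is
// not a browser API and not jxl.js's real entry point.
declare function decodeJxlToBlob(bytes: ArrayBuffer): Promise<Blob>;

function attachJxlFallback(img: HTMLImageElement): void {
  img.addEventListener("error", async () => {
    if (!img.src.endsWith(".jxl")) return; // only handle .jxl sources
    const bytes = await (await fetch(img.src)).arrayBuffer();
    const blob = await decodeJxlToBlob(bytes);
    // Swap the failed .jxl URL for a blob: URL containing the decoded image.
    img.src = URL.createObjectURL(blob);
  });
}

// Attach the fallback to every <img> that points at a .jxl file.
document
  .querySelectorAll<HTMLImageElement>('img[src$=".jxl"]')
  .forEach(attachJxlFallback);
```

On a browser with native JXL support the error handler never fires, so the `src` stays on the `.jxl` URL; that’s why the blob-vs-URL check mentioned above works as a quick way to tell which path was taken.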
I read “wasm” like “wasp” (white, Anglo-Saxon) and then my brain made it “men”, because “Protestant” didn’t make sense. And I continued reading the sentence until the context didn’t make sense.
But it still kind of does.
(Yes, I know web assembly is a thing. Just making conversation.)
Wasps are also a type of insect.
I expected Mozilla to implement this; I don’t know how they expect to gain market share by just following in Google’s footsteps every step of the way.
Is Firefox its own browser, or just Chrome with a different engine? Even Apple supports JXL, well, the decoding anyway.
Because Mozilla really doesn’t care about what people think anymore. They’re an incredibly bureaucratic group, dealing with a lot of red tape, positioned as a force for good that doesn’t always hit the mark. It’s the main reason Firefox doesn’t have a lot of things (that it honestly should have).
Also, Firefox is a completely original browser, but it doesn’t have a “Chromium” version of the browser the way Google Chrome does. Both the Firefox commercial product and the source code compile to the same thing.
I know; it was a rhetorical question, given that the stance they take on a lot of things always aligns with what Google wants.
Hey friend, for what it’s worth, when I read your question I was very much channeling this Garth Algar.
But to your question about it being its own browser:
Firefox is its own.
Follow the funding
Is Firefox its own browser
Its own browser using the Gecko rendering engine.
It was a tongue-in-cheek, rhetorical question regarding what I said before it.
“Overall, we don’t see JPEG-XL performing enough better than its closest competitors (like AVIF) to justify addition on that basis alone,” said Martin Thomson, distinguished engineer at Mozilla, last year. “Similarly, its feature advancements don’t distinguish it above the collection of formats that are already included in the platform.”
So is this a legit take on the technology? Sounds like an expert in the field is pretty convinced that this file format isn’t really worth its weight. What does JXL give the web that other file formats don’t?
Perhaps true from his… perspective. I’ve found JXL surprisingly awesome and easy to use (size, quality, speed, intuitive encoding options with lossless, supported in XnView & XnConvert for easy batches). AVIF was terrible in real-world use last I tried (and blurs fine details).
I’m still a big Mozilla & Firefox fan, but a few decisions over the past few years seem like they’re being dictated or vetoed by a few lofty individuals (while ignoring popular user requests). Sad.
I’ve read a comparison of several newer file formats (AVIF, HEIC, WebP) with JPEG-XL. The conclusion was that JPEG-XL was on par in terms of compression, sometimes better, and very fast. Also, it can re-compress JPGs directly.
Here’s an article describing it: https://cloudinary.com/blog/the-case-for-jpeg-xl
The big thing, to me, is that it can losslessly encode JPEGs, the dominant format for allllll sorts of archived images. That’s huge for migration of images that don’t necessarily exist in any other format.
Plus, as I understand it, JPEG XL performs better than those video-derived formats for lossless, high-resolution applications relating to physical printing and scanning workflows, or for encoding in new or custom color spaces. It’s designed to work in a broader set of applications than the others, beyond just web images in a browser.
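To make that migration point concrete, here’s a rough sketch using the libjxl reference CLI tools (`cjxl`/`djxl`) from Node. It assumes those tools are installed and that `cjxl`’s default behavior (lossless recompression of JPEG input) applies; the file names are illustrative only:

```typescript
// Rough sketch of a JPEG -> JXL migration via the libjxl reference tools.
// Assumes cjxl/djxl are on PATH; cjxl's default mode losslessly recompresses
// JPEG input, so the original JPEG can later be reconstructed bit-for-bit.
import { execFileSync } from "node:child_process";
import { statSync } from "node:fs";

function migrate(jpegPath: string, jxlPath: string): void {
  // Recompress the JPEG losslessly into a JXL file.
  execFileSync("cjxl", [jpegPath, jxlPath]);

  // For legacy clients, the original JPEG can be restored later:
  // execFileSync("djxl", [jxlPath, "restored.jpg"]);

  const before = statSync(jpegPath).size;
  const after = statSync(jxlPath).size;
  console.log(`${jpegPath}: ${before} bytes -> ${after} bytes`);
}

migrate("photo.jpg", "photo.jxl"); // hypothetical file names
```

The point of the sketch is just that the archive stays recoverable: nothing is re-encoded lossily, so the JXL copy can stand in for the JPEG and still hand the exact original back when needed.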
If Google says Chromium won’t support a feature, it won’t be used. The majority of browsers are Chromium under the hood.
A third-party adaptation of Chromium could add support for other formats; the ones we know about right now just don’t bother.
This is the best summary I could come up with:
The process began last year by gathering proposals for web technologies that group members will try to harmonize using automated tests.
The goal is to ensure browser implementations of these technologies match specifications in order to make the web platform better for developers.
Mozilla has not jumped on the JPEG XL bandwagon either: The Firefox maker said it’s neutral with regard to the technology, citing cost and lack of significant differentiation from other image codecs.
“Overall, we don’t see JPEG-XL performing enough better than its closest competitors (like AVIF) to justify addition on that basis alone,” said Martin Thomson, distinguished engineer at Mozilla, last year.
And it has since resisted entreaties to reconsider – despite Apple’s endorsement last year and recent support from Samsung and apparent interest from Microsoft.
“Chrome is ‘against’ because of ‘insufficient ecosystem interest’ and because they want to promote improvements in existing codecs,” said Sneyers, pointing to JPEG, WebP, and AVIF.
The original article contains 907 words, the summary contains 155 words. Saved 83%. I’m a bot and I’m open source!
From Wiki:
JPEG XL supports lossy compression and lossless compression of ultra-high-resolution images (up to 1 terapixel), up to 32 bits per component, up to 4099 components (including alpha transparency), animated images, and embedded previews.
Why 4099 components? Why so many? And why 4099 in particular? 4096+3 with 3 being RGB?
On a side note, 1 terapixel is just crazy. A square with sides of 1 million pixels has this number of pixels. So about 1000 1080p frames would fit into this square vertically and about 500 horizontally. Who has eyes to see all these pixels perfectly?
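For the curious, the back-of-the-envelope arithmetic behind those numbers (assuming a square image):

```typescript
// Rough arithmetic for a square 1-terapixel image.
const totalPixels = 1e12;            // 1 terapixel
const side = Math.sqrt(totalPixels); // 1,000,000 px on each side
const rows = side / 1080;            // ≈ 926 full-HD frames stacked vertically
const cols = side / 1920;            // ≈ 520 full-HD frames side by side
console.log({ side, rows, cols });
```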
Who has eyes to see all these pixels perfectly?
If you zoom in on it (a pretty common thing to do with pictures) enough, most people.
That would be CSI: Miami-style show zoom, where they can identify a yawning killer by his tooth fillings, whose image was reflected in a window, which in turn was reflected in the eye of a random person far in the background of a shot.
Kuala Lumpur, 846 gigapixels (2014). (It may or may not load because their API server’s SSL certificate expires today. If you can’t get it to load, open beta-api.panaxity.com and whitelist it, then reload https://www.panaxity.com/ .) And yes, you can zoom in and see people hanging about in their rooms in distant apartments.
Would be cool if it could be saved as a single gigantic image instead of tiles of multiple images.