If only this meant the removal of the annoying tiles for games that show up in the app above everything else (often using up the entire screen) even though I’ve never tapped on them once.
I don’t want your games, Netflix. I barely want your shows.
Probably the best idea, I guess, as long as you can set the TV up without internet access.
I’m pretty happy with Chromecast currently for its simplicity. I’ve been meaning to try replacing the TV firmware so it’s more or less a dumb TV that just displays its inputs, without ads and other gimmicks.
The TV I currently have runs Android OS, but the built-in Chromecast functionality is noticeably lower quality. Not sure if it’s an older version or what.
Regardless, IMO the displays themselves outlast their software support, and I prefer to just plug in whatever the latest device is.
I’ll also mention Android OS on my TV takes a full minute to “boot” and that itself makes me want to yeet it out the window.
My TV is probably going to kick the bucket in a year or two at most. Filtering for “non smart TVs” on a site like Best Buy shows only commercial display options at this point.
Are there any well maintained projects out there that are able to replace the firmware on newer smart TVs to get rid of these features? I really just want a dumb display with an input for a Chromecast with CEC support (or similar device if Google decides to enshittify that platform with screensaver ads too).
This is giving me 1998 MS Publisher vibes and I’m here for it.
“Prompt engineering” must be the easiest job to replace with AI. You can simply ask an LLM to generate and refine prompts.
That’s correct: it’s just plain text and it can easily be spoofed. You should never perform an auth check of any kind based on the user agent.
In the above examples, it wouldn’t really matter if someone spoofed the header, as there generally isn’t any benefit to a malicious actor.
Where some sites get into trouble, though, is when they have an implicit auth check based on user agents. An example could be a paywalled recipe site: they want the recipe to be indexed by Google. If I spoof my user agent to be Googlebot, I get to view the recipe content they want indexed, bypassing the paywall.
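The paywall failure described above can be sketched in a few lines. This is a hypothetical illustration (the function and UA strings are made up, not any real site’s code) of why keying access on the User-Agent header is not an auth check:

```python
# Hypothetical sketch: a naive paywall that serves full content to
# anything claiming to be Googlebot. The header is client-controlled,
# so any visitor can set it and bypass the check.

def is_paywalled(headers: dict) -> bool:
    """Naive check: hide content unless the client claims to be Googlebot."""
    ua = headers.get("User-Agent", "")
    return "Googlebot" not in ua  # spoofable: the client chooses this value

# A normal browser hits the paywall...
assert is_paywalled({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"})

# ...but anyone can claim to be Googlebot and read the full content.
assert not is_paywalled(
    {"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"}
)
```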
But an example of a more reasonable use for checking user agent strings for bots might be regional redirects. If a new user comes to my site, maybe I want to redirect to a localized version at a different URL based on their country. However, I probably don’t want to do that if the agent is a bot, since the bot might be indexing a given URL from anywhere. If someone spoofs their user agent and isn’t redirected, no big deal.
User agents are useful for checking if the request was made by a (legitimate self-identifying) bot, such as Googlebot.
It could also be used in some specific scenarios where you control the client and want to easily identify your client traffic in request logs.
Or maybe you offer a download on your site and you want to reorder your list to highlight the most likely correct binary for the platform in the user agent.
There are plenty of reasonable uses for user agent that have nothing to do with feature detection.
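When the bot check actually matters, the UA claim can be verified rather than trusted. Google documents a reverse-then-forward DNS check for Googlebot; here is a hedged sketch of that approach, with the resolver functions made injectable purely so the logic can be shown without live DNS:

```python
# Sketch of verifying a "Googlebot" claim: reverse-DNS the client IP,
# check the hostname's domain, then forward-resolve that hostname and
# confirm it maps back to the same IP. Resolvers are parameters here
# only for illustration/testing; by default they do real lookups.
import socket

def is_verified_googlebot(ip,
                          reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                          forward=lambda host: socket.gethostbyname_ex(host)[2]):
    try:
        host = reverse(ip)
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False  # claimed bot, but not from Google's domains
    try:
        return ip in forward(host)  # forward lookup must match the IP
    except OSError:
        return False
```

The point is that the User-Agent string starts the conversation, but the DNS round trip is what makes the identity trustworthy.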
I’m not sure how true this perception is in more recent years. Many popular sites, with enormous traffic volumes that could drive digital impression ad revenue, are instead pushing subscriptions or other monetization models.
For instance, the New York Times makes — by far — more money on digital subscriptions than digital advertising. Digital advertising revenues are also declining for them.
Another example is Spotify, where ad revenue from their ad-supported tier did not cover their operational costs and now represents around only a tenth of their revenue compared to subscriptions.
The exceptions to this are generally search and social media sites, where the product for sale is the users themselves. They’re just advertising platforms, which of course make their money from digital advertising.
So I’d say one issue with digital advertising is that it often does not pay the bills for the site owner. Its value is tied to its ability to convert visitors to buyers, but it has to be ramped up to such an extreme level that it instead only creates bad experiences.
I go through significant efforts to block digital advertising at multiple levels. Yet, I do not find it difficult to discover new things to buy (from both small and large businesses).
For myself, I suspect most of that is supported through online communities related to my interests and hobbies. Those purchases feel more informed and often more intentional too.
What if we just got rid of digital advertising altogether in the US? How many issues of privacy, health and personal finance would disappear or be greatly reduced?
It’s hard for me to imagine what that would look like or the downsides other than to the digital advertising industry itself.
JSON Problem Details
https://datatracker.ietf.org/doc/html/rfc9457
This specification’s aim is to define common error formats for applications that need one so that they aren’t required to define their own …
So why aren’t you using problem details?
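For anyone who hasn’t seen it, a problem details response is just a small JSON object with members defined by the RFC (`type`, `title`, `status`, `detail`, `instance`), served as `application/problem+json`. Here is the RFC’s own out-of-credit example, built in Python:

```python
# The "out of credit" example from RFC 9457, serialized as a
# problem details body. Field values are illustrative.
import json

problem = {
    "type": "https://example.com/probs/out-of-credit",
    "title": "You do not have enough credit.",
    "status": 403,
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",
}

# This body would be returned with Content-Type: application/problem+json
body = json.dumps(problem)
```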
Its use looks contrived to me on the linked GitHub page. The comparison with @ and # is flawed because those symbols are part of the resource name, whereas here the symbol is superfluous. It’s like adding a 🌐 in front of every web URL.
I remember when I was growing up
You can basically stop right there. You were young and naive, viewing the world through the rose colored glasses of youth.
The context is not the same. A snippet is incomplete and often lacking important details. It’s minimally tailored to your query, unlike a response generated by an LLM. The obvious extension to this is conversational search, where clarification and additional detail still don’t require you to click on any sources; you simply ask follow-up questions.
With Gemini?
Yes. How do you think the Gemini model understands language in the first place?
If it’s just that and links to the source, I think it’s OK.
No one will click on the source, which means the only visitor to your site is Googlebot.
What would be absolutely unacceptable is to use the web in general as training data for text and image generation.
This has already happened and continues to happen.
This isn’t the evolution of C at all. It’s all just one language and you’re simply stuck in a lower dimension with a dimensionally compatible cross-section.
I assume these people are Trumpers.
That’s a pretty bad assumption.
Calculating the digits of pi seems like a poor benchmark for comparing languages in the context of backend web application performance. Even the GitHub README points out that the benchmark is entirely focused on floating-point performance.