Because there are so many small parts to a processor, you need 99.99+% yield at most stages to stand any chance of mass production. In this context 60-70% is seriously impressive: millions of things have to be done right to get that kind of yield.
In this situation a hub is still better. You can pack all the stuff away plugged into the hub for an easier set up. If you’re plugging all that into your laptop, you’ll need to plug it all back in again when you move.
RS, not in the same breath, but the pricing is usually good.
It was very likely a designer’s decision. It forces the use case they wanted: wireless mice should be used wirelessly. I would bet they fought marketing and management to get this onto the final product.
Marketing would want a mouse they can advertise as usable both wired and wireless. Female ports are easier to mount and manufacture when there is depth to set the socket, so a port on the front is much cheaper and easier to manufacture.
The fact that the charging cable doesn’t get used in motion means it will last longer, and you wouldn’t have people using fraying cables on the front of their mouse.
People sitting on unpatched exploits are waiting for that end of support. It increases the value of their unreleased exploit.
A small computer, a large-capacity SSD and two WiFi interfaces (two USB dongles, or one dongle plus the built-in WiFi).
The small computer could be anything: a Raspberry Pi (or a generic ARM board), a NUC-style mini PC or a laptop. If you want to use it without a plug you’ll need to add a battery; USB-C powered devices can be more convenient to run from a battery.
An SSD is better for this use case. Not because it’s faster, but because SSDs are more resilient to being knocked about and dropped. They are also much smaller, especially M.2, and aren’t fussy about how they are mounted.
The two WiFi interfaces would let you create a WiFi bridge: one joins an upstream network for internet access, while the other serves your media server to your devices. It would need some configuration, and you may also need the computer to act as a router if you want to use multiple devices without reconfiguring.
It may be easier to have your device act as a WiFi hotspot and have the media centre connect to it automatically. That would make it difficult for multiple devices to use it simultaneously, and you could accidentally let the media centre do all its updating and downloading over your mobile connection.
This type of thing is going to be expensive and troublesome to configure unless you’re already experienced with that sort of thing.
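As a rough sketch of the routing half (assuming a Linux box where `wlan0` joins the upstream WiFi and `wlan1` runs the hotspot; interface names, addresses and tools will vary with your setup):

```shell
# wlan0 is the client side (joined to the upstream WiFi via
# wpa_supplicant or NetworkManager); wlan1 is the access point
# side (run by hostapd). Both need their own configs as well.

# Let the kernel route packets between the two interfaces
sysctl -w net.ipv4.ip_forward=1

# NAT traffic from the hotspot out through the upstream link
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

# Hand out addresses to devices on the hotspot side
dnsmasq --interface=wlan1 --dhcp-range=192.168.50.10,192.168.50.100
```

All of this needs root and real hardware, which is exactly the troublesome-to-configure part.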
I think a better solution, especially if you already have a media server, is to set your media server up for external access.
To get media when you don’t have internet, buy a large-capacity flash drive (or an external SSD/HDD). When you have access to your media server, download all the content you want onto the drive. I think Jellyfin on iOS can do this without much modification.
Once out of range of your media server, delete the content you’ve watched on your device (iPad) to free up space, connect the external drive through the iPad’s USB port, and copy over the next lot of content you want to watch. Then disconnect and watch.
Jellyfin can download the content, but you may need another app to play it when you don’t have access to the media server.
This approach lets multiple people access a much larger amount of media, effectively simultaneously. It doesn’t require a large amount of often-expensive local device storage - you use cheap external storage instead. It’s much less expensive if it breaks or gets lost, and needs very little configuration - if you already have a media server running Jellyfin.
It was 12 years ago that he said he would put a man on Mars in 10 years.
Probably not much for people on a self-hosting community, but those who want to get away from subscription and steal-your-data-as-a-service cloud providers might need some reassurance that they’ll have a working system.
NixOS is an OS that’s defined by its config, stored in .nix files. Everything is defined there: all the software and configuration. Two people with the same config will have the exact same OS.
Any changes you make that aren’t in the config files won’t be present when you reboot.
You could maintain a very custom Linux distribution (kinda) just by maintaining these config files.
So a user wouldn’t need to install all the required software and dependencies. They could grab NixOS and the self-host config, adjust some settings, and have a working system straight after install.
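A sketch of what such a config fragment might look like (the hostname and package list here are made up for illustration; `services.jellyfin.enable` is a real NixOS option):

```nix
{ config, pkgs, ... }:
{
  networking.hostName = "homeserver";  # hypothetical name

  # Enabling a service here installs and configures it on rebuild
  services.openssh.enable = true;
  services.jellyfin.enable = true;

  environment.systemPackages = with pkgs; [ git htop ];
}
```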
You know it’s a Thunderbolt connection on a MacBook. They stopped using the USB symbol when they started running Thunderbolt over the USB-C connector and dropped the Mini DisplayPort.
This is going to be used for tracking customers’ locations in supermarkets.
A nondeterministic system is dangerous. A deterministic system with flaws can be better: the flaws can be identified, understood and corrected, and they are more likely to show up in testing.
Machine learning is nearly always going to be nondeterministic. If they then use continuous training, the situation only gets worse.
If you use machine learning because you don’t know how to solve the problem, then you’ll never understand how the system works. You’ll never be able to pass a basic inspection test.
When you automate these processes you lose the experience. I wouldn’t be surprised if you couldn’t parse information as well as you can now, had you always had access to ChatGPT.
It’s hard to get better at solving your problems if something else does it for you.
Also, the reliability of these systems is poor, and they’re specifically trained to produce output that appears correct, not output that actually is correct.
You need software support to use them. Support like this is already common, but it does take time to develop, test and deploy.
The software will exist in kernels, drivers and libraries. Intel already supports things like this.
You may need to wait or use a bleeding edge version of your os to support these extra features.
Yeah. I think they will struggle to match Apple, and by the time they do, Apple will have progressed further.
Another big issue is that these features need deep, well-implemented software support. This is really easy for Apple: they control all the hardware and software, write all the drivers, and can modify their kernel to their heart’s content. A better processor is still unlikely to match Apple’s overall performance. Intel has to support more operating systems and interface with far more hardware, over which they have little control. It won’t be until years after release that these processors realistically reach their potential, by which time Intel and Apple will both have released newer chips with more features that Intel users won’t be able to use for a while.
This strategy has Intel on the back foot, and they will remain there indefinitely. They really need a bolder strategy if they want to reclaim the best desktop processors. It’s pretty embarrassing that an Apple laptop with an integrated GPU completely wipes the floor with Intel desktop CPUs paired with dedicated GPUs in certain workflows; it can often be the cheaper option to buy the Apple device if you’re in a creative profession.
Qualcomm will have similar issues, but they won’t be limited by the inferior x86 architecture. x86 only serves backwards compatibility and Intel/AMD. ARM is used in phones because, with the same fab and power restrictions, it makes better processors. This has been known for a long time, but consumers wouldn’t accept it until Apple proved it.
I wouldn’t be surprised if these Intel chips flop initially, Intel cuts its losses and stops developing new ones. Then we’ll see lots of articles saying Intel should never have stopped developing them, that they were really competitive relative to their contemporaries, without realising the software took that much time to use them effectively.
Extra components mean more specific hardware for each task. This specialised hardware can often process the same data faster and with less power consumption. The drawbacks are cost, complexity, and that these components are only good for that one task.
CPUs are great because they are multipurpose and can do anything, given enough time and storage. That flexibility means they aren’t as optimised.
Most people are not writing custom code to solve their own problems. They are running very common applications, using very common libraries for similar functions. So for the general user, specific hardware for encryption, video codecs, networking etc. will reduce power consumption and increase processing speed in a practical way.
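As one concrete example, AES encryption has dedicated CPU instructions on most modern x86 chips. A small sketch to check whether your CPU advertises them on Linux (the `aes` flag is x86-specific; the check falls through gracefully elsewhere):

```shell
# Look for the AES-NI flag in the kernel's CPU description (x86 Linux).
# On systems without /proc/cpuinfo this falls to the else branch.
if grep -qm1 '\baes\b' /proc/cpuinfo 2>/dev/null; then
  msg="AES acceleration available"
else
  msg="no AES flag found"
fi
echo "$msg"
```

Tools like `openssl speed -evp aes-256-gcm` show the practical throughput difference this kind of hardware makes.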
I don’t think he’s suggesting it isn’t open source, just that we need more open source engines.
Long term, Servo is unlikely to be another web engine; it’ll just replace Firefox’s. Firefox’s old engine won’t get as much development. Then we’ll be left with Safari, Firefox (Servo) and Google’s web manipulation vehicle Chrome, which is basically Safari with more tracking and higher battery/RAM consumption.
It will be driven by minimum viable product and running to a release at the end of every x sprints.
They don’t have the time or structure to build long term plans and well considered features.
A glass may have a spec that allows it to be filled to the brim. That doesn’t mean it’s a good idea, especially when you want to run up stairs with it.
You’re going to spill water everywhere.