This happens to me when using VGA and the connector isn’t well seated. Are you using an analog connector like VGA? Can you double check that the connector is well seated on both ends?
This would make sense, as the ente server doesn’t do much given that all the photos are encrypted. All the intelligence is in the client apps.
Thanks for sharing your experience. Was XCP-ng considered as a migration target? Would you have some feedback to share on what made it unsuitable for you? Thank you!
They have a special migration tool from VMware: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/#-from-vmware
Thank you, I will dig into this to see if there’s something I’m missing. I did use the same resources the poster did, but the thread may provide more information.
Thanks for the reply! Can you tell me more about what you mean with “check the efi grub install”?
Edit: to be clear, I have a vanilla initramfs booting properly, which is the one automatically built. I’m just trying to replicate it myself.
This is the way.
I explored whether this was a permissions issue, but the permissions are the same on the default initramfs and on mine:
mytestalpine:~# ls -l /boot/initramfs-*
-rwxr-xr-x 1 root root 10734241 Nov 27 22:56 /boot/initramfs-6.6.58-0-lts.img
-rwxr-xr-x 1 root root 17941160 Nov 3 17:39 /boot/initramfs-lts
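For context, a minimal sketch of this kind of rebuild, assuming Alpine’s mkinitfs (the kernel version is taken from the listing above; the output name and the grub step are just examples, not necessarily what the installer does):

# Rebuild an initramfs for the installed lts kernel
# (output name is an example)
mkinitfs -o /boot/initramfs-custom 6.6.58-0-lts
# Regenerate the GRUB config so the new image is picked up
grub-mkconfig -o /boot/grub/grub.cfg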
Thorn, the company backed by Ashton Kutcher and which lobbied to monitor all messages in the EU via Chat Control. No thanks.
I hear you, but how much time was Synology given? If it was no time at all (which seems to be what happened here?), that doesn’t even give Synology a chance, and that’s what I’m concerned with. If they got a month (give or take), then sure, disclose it, and too bad for them if they don’t have a fix; they should have taken it more seriously. But I’m wondering how much time they were even given in this case.
It’s about online games and anti-cheat. Many companies will not allow their anti-cheat to work on Linux because they “require” kernel-level anti-cheat, a big security and privacy concern.
You can read more about anti-cheat games and their compatibility with Linux here: https://areweanticheatyet.com/
Was the talk a last-minute change (replacing another scheduled talk), so that the responsible disclosure was made in a rush without giving Synology more time to provide the patch before the talk was presented?
If so, who decided it was a good idea to present something regarding a vulnerability without the fix being available yet?
I’m not sure. I read that ZFS can help in the case of ransomware, so I assumed that would extend to accidental formatting, but maybe there’s a key difference.
I think this kind of situation is where ZFS snapshots shine: you’re back in a matter of seconds with no data loss (assuming you have a recent snapshot from before the mistake).
Edit: yeah no, if you operate at the disk level directly, no local ZFS snapshot could save you…
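For file-level mistakes though, the workflow is roughly this (the pool/dataset names are just examples):

# Take a snapshot before risky maintenance
zfs snapshot tank/data@before-maintenance
# ...mistake happens (file-level, not whole-disk)...
# Roll the dataset back to the snapshot
zfs rollback tank/data@before-maintenance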
This. I will repeat my recommendation of Bitwarden.
I didn’t say it can’t. But I’m not sure how well it is optimized for it. From my initial testing, it queues queries and submits them one after another to the model; I have not seen it batch-compute the queries, but maybe it’s a setup issue on my side. vLLM, on the other hand, is designed specifically for the multi-concurrent-user use case and has multiple optimizations for it.
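As a rough sketch of what serving with vLLM looks like (the model name and flags are examples; check the vLLM docs for your version):

# Start an OpenAI-compatible server with continuous batching
# --max-num-seqs caps how many requests are batched together
python -m vllm.entrypoints.openai.api_server \
  --model mistralai/Mistral-Nemo-Instruct-2407 \
  --max-num-seqs 16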
I run Mistral-Nemo (12B) and Mistral-Small (22B) on my GPU and they are pretty good. As others have said, GPU memory is one of the most limiting factors. 8B models are decent, 15-25B models are good and 70B+ models are excellent (based solely on my own experience). Go for q4_K models, as they will run many times faster than higher-precision quantizations with little quality degradation. They typically come in S (Small), M (Medium) and L (Large); take the largest which fits in your GPU memory. If you go below q4, you may see more severe and noticeable quality degradation.
If you need to serve only one user at a time, Ollama + Open WebUI works great. If you need multiple users at the same time, check out vLLM.
Edit: I’m simplifying it very much, but hopefully it is simple and actionable as a starting point. I’ve also seen great stuff from Gemma2-27B.
Edit2: added links
Edit3: a decent GPU in terms of bang for buck IMO is the RTX 3060 with 12GB. It may be available on the used market for a decent price and offers a good amount of VRAM and GPU performance for the cost. I would like to recommend AMD GPUs, as they offer much more GPU memory for their price, but they are not all as well supported by ROCm and I’m not sure about compatibility with these tools, so perhaps others can chime in.
Edit4: you can also use Open WebUI with VS Code via the continue.dev extension, so you can have a Copilot-style LLM in your editor.
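Edit5: to make the quantization advice concrete, a sketch of pulling a q4_K_M model with ollama (the tag is illustrative; check the ollama library for the exact tags available for each model):

# Pull and run a 12B model at q4_K_M quantization
ollama pull mistral-nemo:12b-instruct-2407-q4_K_M
ollama run mistral-nemo:12b-instruct-2407-q4_K_M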
I wouldn’t assume this is done with malice in mind, but maybe this is someone unaware of the importance of a formal license.
I’m wondering: could the latest CAMM modules achieve the same performance as the integrated RAM Intel used for Lunar Lake? The only integrated approach that really pays off is HBM; anything else seems like a bad trade-off.
So either you go HBM, with real bandwidth and latency gains, or CAMM, with decent performance and upgradeable RAM modules. On-package RAM like Intel’s provides neither the HBM performance nor the CAMM modularity.
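Rough back-of-envelope numbers from memory (treat these as ballpark, not exact):

LPDDR5X-8533 on a 128-bit bus: 8533 MT/s x 16 B = ~137 GB/s
LPCAMM2-7500 on a 128-bit bus: 7500 MT/s x 16 B = ~120 GB/s
HBM3, single 1024-bit stack:   6.4 GT/s x 128 B = ~819 GB/s

So the on-package LPDDR5X buys maybe 10-15% bandwidth over CAMM, while HBM is in a different league entirely.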
Oh sorry, it was not obvious to me that this was a crosspost, so I didn’t see the lengthy explanation provided! Indeed, my comment makes little sense; apologies.