• 0 Posts
  • 266 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • So it’s possible to have two different GPUs rendering content that is then output to a single monitor at the same time? Could I have a game rendered by a discrete GPU, running in a window inside an Xorg session that is itself rendered by an integrated GPU? Do I understand this correctly? Would it matter whether the video output is physically connected to the discrete GPU or to the motherboard, or is that configurable?
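    A minimal sketch of how this is commonly done on Linux with Mesa’s PRIME render offload (glxinfo and the DRI_PRIME variable are standard tools, not something from the original thread): the desktop session stays on the integrated GPU, and a single program is asked to render on the discrete one.

    ```python
    import os
    import subprocess

    # Sketch, assuming a Mesa/PRIME setup: the Xorg session keeps rendering on the
    # integrated GPU, while this one process is offloaded to the discrete GPU.
    env = dict(os.environ, DRI_PRIME="1")        # Mesa: render on the secondary GPU
    subprocess.run(["glxinfo", "-B"], env=env)   # "-B" prints which renderer is in use
    ```

    In a typical offload setup the monitor can stay plugged into the motherboard: the discrete GPU’s finished frames are copied over to the GPU that owns the display, at a small cost in latency and bandwidth.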






  • NaibofTabr@infosec.pub to Selfhosted@lemmy.world · Am I being held back by using casaos? · 19 days ago

    I second this - virtualization is the easiest way to branch out and try new things. You can keep the working system you already have, and also experiment with other systems.

    A further advantage is that you can run services in separate VMs, which helps if you need isolated contexts for security, privacy, or stability reasons. And if you break something while you’re learning, you can just delete that VM and start over without affecting your other working services.
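    As a concrete illustration of the “delete it and start over” point, here is a rough sketch using the libvirt Python bindings (the VM name “test-vm” and the qemu:///system host are placeholders, not from the original comment):

    ```python
    import libvirt  # requires the libvirt-python bindings

    # Sketch: tear down a broken experimental VM without touching the other guests.
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("test-vm")   # placeholder name for the experiment VM
    if dom.isActive():
        dom.destroy()                    # force power-off of the broken guest
    dom.undefine()                       # drop its definition from the host
    conn.close()                         # (disk images still need separate cleanup)
    ```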









  • AI learning isn’t the issue; it’s not something we will be able to put a lid on either way.

    So… there is no Artificial Intelligence. The AI cannot hurt you. It is just a (buggy) statistical language parsing system. It does not think, it does not plan, it does not have goals, it does not understand, and it doesn’t even really “learn” in a meaningful sense.

    Either it destroys or saves the world.

    If we’re talking about machine learning systems based on multi-dimensional statistical analyses, then it will do neither. Both extremes are sensationalism, and any argument that either outcome will come from the current boom of ML technology is utter nonsense designed to drive engagement.

    It doesn’t need to learn much to do so besides evolving actual self-agency and sovereign thought.

    Oh, is that all?

    No one on the planet has any idea how to replicate the functionality of consciousness. Sam Altman would very much like you to believe that his company is close to achieving this so that VCs will see the public interest and throw more money at him. Sam Altman is a snake oil salesman.

    What is a huge issue is the secretive, non-consensual mining of people’s identities and expressions.

    And then acting all normal about it.

    This is absolutely true, and the collection and aggregation of data on human behavior should be scaring the shit out of everyone. The potential for authoritarian abuse of such data collection and tracking is disturbing.







  • NaibofTabr@infosec.pub to Technology@lemmy.world · *Permanently Deleted* · 1 month ago

    I see, so your argument is that because the training data is not stored in the model in its original form, it doesn’t count as a copy, and therefore it doesn’t constitute intellectual property theft. I had never really understood what the justification for this point of view was, so thanks for that, it’s a bit clearer now. It’s still wrong, but at least it makes some kind of sense.

    If the model “has no memory of training data images”, then what effect do the images have on the model? Why is the training data necessary? What is its function?
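    To make the question concrete, here is a minimal sketch (mine, not from the thread, using stand-in random data): each gradient step moves the model’s weights as a function of the training images, so the data’s influence persists in the parameters even though no image is stored verbatim.

    ```python
    import numpy as np

    # Sketch: a tiny linear "model" trained by gradient descent on stand-in data.
    rng = np.random.default_rng(0)
    images = rng.random((100, 64))      # stand-in for flattened training images
    targets = rng.random(100)           # stand-in for training labels
    weights = np.zeros(64)              # the "model" before training

    for _ in range(500):
        preds = images @ weights
        grad = images.T @ (preds - targets) / len(images)  # gradient of MSE loss
        weights -= 0.01 * grad          # every update is a function of the images

    print(weights[:5])                  # what remains of the data: parameters shaped by it
    ```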