• 3 Posts
  • 77 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • In this scenario, I think the pronouns should be changed regardless of the gender of the devs. Here’s a screenshot of the suggested changes, which are quite minimal. The reason I think this should be changed is that it’s in the build instructions for a project that needs many devs, so they should at least be open to discussion rather than shutting it down completely. And honestly, this is a small change. Their reaction to this made it more political than the commit itself, and honestly the commit was not political in my mind. Their reaction also demonstrated how they respond to contributions, and an ambitious project like this will need a lot of contributors. If their leadership keeps this up, it will be very off-putting for people to collaborate with them.


  • inspxtr@lemmy.world to Selfhosted@lemmy.world · 2024 Self-Host User Survey Results · 2 months ago

    Wonder how the survey was sent out and whether that affected sampling.

    Regardless, with ~3-4k responses, that’s disappointing, if not concerning.

    I only have a personal sense of this for Lemmy. Do you have a source on Lemmy’s gender diversity?

    Anyway, what do you think are the underlying issues? And what would be some suggestions to the community to address them?


  • Yeah, I guess the formatting and the verbosity seem a bit annoying? Wonder what alternative solutions there could be to better engage people from Mastodon, which is what this bot is trying to address.

    edit: just to be clear, I’m not affiliated with the bot or its creator. This is just my observation from multiple posts I’ve seen this bot comment on.



  • Thanks for the suggestions! I’m actually also looking into llamaindex for more conceptual comparison, though I haven’t gotten around to building an app yet.

    Any general suggestions for a locally hosted LLM with llamaindex, by the way? I’m also running into some issues with hallucination. I’m using Ollama with llama2-13b and the bge-large-en-v1.5 embedding model.
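
    For context, my current setup looks roughly like the sketch below. It assumes a recent llamaindex where the Ollama and HuggingFace integrations ship as separate packages (llama-index-llms-ollama, llama-index-embeddings-huggingface), so the exact imports may differ on other versions; the “data” folder, the low temperature, and the small top-k are just illustrative, not a hallucination fix:

    ```python
    from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
    from llama_index.llms.ollama import Ollama
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding

    # Local LLM served by Ollama; temperature 0 keeps answers closer to the retrieved text.
    Settings.llm = Ollama(model="llama2:13b", temperature=0.0, request_timeout=120.0)
    # Local embedding model (the same bge-large-en-v1.5 mentioned above), pulled from HuggingFace.
    Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-large-en-v1.5")

    # Build a simple vector index over a local folder of documents ("data" is a placeholder path).
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents)

    # Keep retrieval tight so the model has less room to wander off the sources.
    query_engine = index.as_query_engine(similarity_top_k=3, response_mode="compact")
    print(query_engine.query("What do these documents say about X?"))
    ```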

    Anyway, aside from conceptual comparison, I’m also looking for more literal comparison. AFAIK, the choice of embedding model affects how similarity is defined. Most current LLM embedding models are fairly abstract, so the similarity ends up being conceptual: “I have 3 large dogs” and “There are three canines that I own” will probably score as very similar. Do you know which embedding model I should choose to get a more literal comparison?
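
    To make concrete what I mean by “literal”: rather than a special embedding model, one option could be to pair the embedding score with a plain lexical measure and weight the two. A tiny standard-library sketch (the helper names are made up for illustration):

    ```python
    import difflib

    def literal_similarity(a: str, b: str) -> float:
        """Surface-level similarity from character overlap (0..1)."""
        return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def token_jaccard(a: str, b: str) -> float:
        """Word-overlap similarity, ignoring order (0..1)."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

    a = "I have 3 large dogs"
    b = "There are three canines that I own"
    # An embedding model will score these as near-duplicates in meaning;
    # these lexical scores stay low because the actual wording differs.
    print(literal_similarity(a, b), token_jaccard(a, b))
    ```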

    That aside, like you indicated, there are some issues. One of them involves length. I hope to find something that can build up iteratively from similar sentences to find similar paragraphs. I can take a stab at coding it up, but I was wondering if there are similar frameworks out there already that I could model it after.
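
    Roughly what I have in mind, as a sketch: split each paragraph into sentences, embed the sentences, and aggregate the best sentence-level matches into a paragraph score. This assumes sentence-transformers with the same bge model as above, and the sentence splitter is deliberately naive:

    ```python
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("BAAI/bge-large-en-v1.5")

    def split_sentences(paragraph: str) -> list[str]:
        # Naive splitter for illustration; swap in nltk/spacy for real text.
        return [s.strip() for s in paragraph.replace("?", ".").split(".") if s.strip()]

    def paragraph_similarity(p1: str, p2: str) -> float:
        s1, s2 = split_sentences(p1), split_sentences(p2)
        e1 = model.encode(s1, normalize_embeddings=True)
        e2 = model.encode(s2, normalize_embeddings=True)
        sim = e1 @ e2.T  # cosine similarities, since embeddings are normalized
        # For each sentence, take its best match in the other paragraph,
        # then average both directions so the score is symmetric.
        return float((sim.max(axis=1).mean() + sim.max(axis=0).mean()) / 2)
    ```

    The per-sentence scores could also be swapped for a more literal measure like the one above, so the same loop could serve both kinds of comparison.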