• 4 Posts
  • 50 Comments
Joined 1 year ago
Cake day: June 30th, 2023



  • UTF-8 is an encoding for Unicode; that is, it’s a way of representing a Unicode string as actual bytes on a computer.

    It is variable length and works by using the first bits of each byte to indicate how many bytes are needed to represent the current character.

    Python also uses an encoding, as you describe in the article, but it’s different from UTF-8. Unlike UTF-8, all characters in Python’s representation of a Unicode string use the same number of bytes: the maximum that any individual Unicode character in the string needs.

    I’d probably mess up a more detailed explanation of UTF-8 or Python’s representation, so I’ll let you look into how they work in more detail if you’re interested.
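
    Just to make the variable-length part concrete, though, here’s a quick Python sketch (the byte values shown are standard UTF-8):

    ```python
    # Each character's UTF-8 encoding is 1-4 bytes, depending on its code point.
    for ch in ["a", "é", "€", "🐍"]:
        encoded = ch.encode("utf-8")
        print(ch, len(encoded), encoded)
    # a 1 b'a'
    # é 2 b'\xc3\xa9'
    # € 3 b'\xe2\x82\xac'
    # 🐍 4 b'\xf0\x9f\x90\x8d'
    ```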






  • It probably depends on the project, though I’d try to start with the tests that are easiest/nicest to write and those which will be most useful. Look for complex logic that is also quite self-contained.

    That will probably help to convince others of the value of tests if they aren’t on board already.
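
    As a hypothetical illustration of that kind of target (the function and names here are made up, not from any real project), a small self-contained function plus a pytest test:

    ```python
    def normalise_whitespace(text: str) -> str:
        """Collapse runs of whitespace into single spaces and strip the ends."""
        return " ".join(text.split())


    def test_normalise_whitespace():
        # Self-contained logic like this needs no mocks or fixtures.
        assert normalise_whitespace("  a \t b\n c ") == "a b c"
    ```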



  • I think calling it just a database of likely responses is too much of a simplification and downplays what it is capable of.

    I also don’t really see why the way it works is relevant to whether it’s “smart” or not. It depends on how you define “smart”, but I don’t see any proof of the assumptions people seem to make about the limitations of what an LLM could be capable of (with a larger model, a better dataset, better training, etc).

    I’m definitely not saying I can tell what LLMs could be capable of, but I think saying “people think ChatGPT is smart but it actually isn’t because <simplification of what an LLM is>” is missing a vital step to make it a valid logical argument.

    The argument is relying on incorrect intuition people have. I reckon that before seeing ChatGPT, if you’d told people how an LLM worked, they wouldn’t have expected it to be able to do the things it can do (for example, if you ask it to write a rhyming poem about a niche subject, it won’t have a comparable poem in its dataset to draw on).

    A better argument would be to pick something that LLMs can’t currently do but should be able to do if they’re “smart”, and explain the inherent limitation of an LLM which prevents it from doing that. This isn’t something I’ve really seen, I guess because it’s not easy to do. The closest I’ve seen is an explanation of why LLMs are bad at e.g. maths (like adding large numbers), but I’ve still not seen anything to convince me that this is an inherent limitation of LLMs.




  • Yeah, my experience with Docker on Windows has been pretty bad: it uses high CPU and RAM at the best of times, and at worst it completely hangs my computer at 100% CPU usage, forcing a restart as the only fix.

    I really don’t understand why people are overcomplicating this. You can install multiple Python versions at once on Windows and it just works fine (you can use the py command to select the one you want).

    Virtual environments are designed exactly for this use case. They’ve got integrations for pretty much everything, they’re easy to delete/recreate, they’re really simple to use, they’re fast, and they just work.

    If virtual environments alone aren’t quite enough, you can use something like poetry or pipenv or one of the many other package management options, but in many cases even that is overkill.
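
    For example, a typical workflow might look like this (a sketch, assuming the py launcher is installed and 3.12 is one of the available versions):

    ```
    py -0                       # list the Python versions the launcher knows about
    py -3.12 -m venv .venv      # create a virtual environment with a specific version
    .venv\Scripts\activate      # activate it (Windows)
    python -m pip install requests   # installs into the venv while it's active
    ```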




  • Thanks for the info on crossposting! I thought I’d seen someone mention a crossposting feature but couldn’t see any button for it. I’m using the Jerboa app on Android, which I guess doesn’t have that button, but I see it on the website now as you say.

    It’s also good to know that linking to the original URL is generally better and the rest can be handled by the UI - that does seem nicer.



  • My general opinion for libraries is that it’s fair to stop supporting Python versions as soon as they’re EOL. It’s unfair to ask maintainers to juggle supporting 6 or more Python versions at once, mostly for the benefit of a few companies who haven’t updated yet.

    I think it’s also fair here: you’ll still be able to use older versions of the library, you just won’t get the newest features, which clearly aren’t your number 1 priority if you’re still using Python 3.7.




  • Same, I think it’s more common to use it only when necessary.

    The main case I can think of for using it more is performance, to save an import at runtime, but I don’t think that’s really valid, especially since the fact that you’re using the type annotation suggests the module would have been imported elsewhere anyway, so the import would be cached.

    The argument against using it everywhere is that it could be misleading, as your editor may indicate that the import is defined even when it wouldn’t be at runtime. Not sure if things like Pylance have special handling to avoid this, would have to check…
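
    For reference, a minimal sketch of the typing.TYPE_CHECKING pattern being discussed (pandas here is just a stand-in for any heavy import):

    ```python
    from __future__ import annotations

    from typing import TYPE_CHECKING

    if TYPE_CHECKING:
        # Seen by type checkers, never executed at runtime.
        import pandas as pd


    def summarise(df: pd.DataFrame) -> str:
        # Fine at runtime: with lazy annotations, pd.DataFrame is never evaluated.
        return f"{len(df)} rows"
    ```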