• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 20th, 2023


  • I probably would have gone into tech earlier if I’d had a female role model in tech. When my (male) friends started programming in high school, I was very interested and wanted to learn it too. But it literally didn’t occur to me that I could, until ten years later, when I was already far along in a humanities degree. I ended up in data/software development in the end, but it took me ten years longer because I didn’t realise sooner that it was a field I could get into if I wanted.
    So long story short, it’s not just a matter of interest; societal factors play a role too.










  • Yeah, if you already have it then it’s not really an extra cost. But the smaller models perform less well and less reliably.

    To write a book that’s convincing enough to fool at least some buyers, I wouldn’t expect a Llama2 7B to do the trick, based on what I see in my work (I’m an ML engineer). But even at work, I run Llama2 70B quantized at most, not the full-size model. Full size, unquantized, needs around 320 GB of GPU VRAM, and that’s just quite expensive (even more so when you have to rent it from cloud providers).

    Although if you already have a GPU that size at home, then of course you can run any LLM you like :)
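    For what it’s worth, here’s a rough sketch of what “running it quantized” can look like with Hugging Face transformers + bitsandbytes. The model id is just an example (the official checkpoints are gated, so you’d need access), and actual memory use depends on your hardware and context length:

```python
# Rough sketch: load Llama 2 70B with 4-bit quantization so the weights fit in
# a fraction of the full fp16 footprint. Assumes transformers + bitsandbytes
# are installed and that you have access to the gated model id below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-chat-hf"  # example id; requires granted access

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit instead of fp16
    bnb_4bit_compute_dtype=torch.float16,  # do the matmuls in half precision
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across whatever GPUs are available
)

prompt = "Write the opening paragraph of a mystery novel."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```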