• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: July 14th, 2023

  • Please, enlighten me - how do you propose we use the term “AI” in a way that’s more useful than a definition that includes machine learning, large language models, and computer vision?

    I doubt I’ll agree with your definition, but I’m curious to see how you would exclude machine learning, computer vision, LLMs, etc., from your definition. My assumption is that your definition is going to be either a derivative of “AI is anything computers can’t do yet” or based on pop culture / sci fi, but maybe you’ll surprise me.

    To be clear, I’m a software engineer; I’m not speaking in sales speak. I’ve derived my understanding of the term from a combination of its historical context and how it’s used in both professional and academic contexts, not from marketing propaganda or from sci fi and pop culture. I’m certainly aware of the hype machine that’s ongoing, but there are also tons of fascinating advancements happening on a regular basis, and the term “AI” is at minimum a useful term to refer to technologies that leverage similar techniques.


  • it’s not ‘ai’, it’s just a poorly trained voice recognition system that’s trying to decipher any random person’s voice.

    I’m baffled that you can say “It’s not ‘AI,’ it’s a machine learning powered speech to text system” with a straight face.

    Even if we were to agree that ML-powered speech-to-text isn’t AI (and I don’t agree to that premise, for the record), there’s still the matter of processing the transcription to transform it into something the point-of-sale system can understand - aka natural language processing. And while that NLP could be implemented without an LLM, given LLMs’ current level of hype and the ease with which they can be shoved into any given product, I wouldn’t bet on Taco Bell execs approving such an approach, much less asking for it.



  • I’m a professional software engineer and I’ve been in the industry since before Kubernetes was first released, and I still found it overwhelming when I had to use it professionally.

    I also can’t think of an instance when someone self-hosting would need it. Why did you end up looking into it?

    I use Docker Compose for dozens of applications that range in complexity from “just run this service, expose it via my reverse proxy, and add my authentication middleware” to “in this stack, run this service with my custom configuration, a custom service I wrote myself or forked, and another service that I wrote a Dockerfile for; make this service accessible to this other service, but not to the reverse proxy; expose these endpoints to the auth middleware and for these endpoints, allow bypassing of the auth middleware if an API key is supplied.” And I could do much more complicated things with Docker if I needed to, so even for self-hosters with more complex use cases than mine, I question whether Kubernetes is the right fit.
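
    To illustrate the “accessible to this other service, but not to the reverse proxy” piece, here’s a stripped-down sketch - the service, image, and network names are all made up:

    services:
      app:
        image: example/app:latest   # placeholder image
        networks:
          - proxy     # shared with the reverse proxy
          - backend

      internal-api:
        image: example/internal-api:latest   # placeholder image
        networks:
          - backend   # not on the proxy network, so the reverse proxy can't reach it

    networks:
      proxy:
        external: true   # defined by the reverse proxy's own stack
      backend:
        internal: true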


  • Summary of my comment: the study showed that the AI tool in question was an effective tool for the task, nothing more.

    I didn’t read this particular article, but I recently read a different one about the same study. I also clicked into the study itself and read the abstract and everything else that was freely available. The study was paywalled, but as far as I could tell:

    • Performance immediately displayed a sustained increase of 24% relative to baseline while using the AI tool in question
    • Immediately after the tool was taken away (after using it for three months), performance was 20% lower than the baseline
    • The study did not check what level performance returned to after three months without the tool, nor whether or when it returned to baseline
    • The study also did not compare performance drops after returning from a three month vacation
    • The study did not compare performance drops when losing access to other tools

    This outcome is expected when someone is given a tool that simplifies a process and then loses access to it. If I were writing code in Notepad and using _v2, _v3, etc. for versioning, was then given an IDE and git for three months, and then had to go back to my old ways with Notepad, I’d expect to be less effective than I had been. I’d have been relying on syntax highlighting, so I’d be paying less attention to the raw monochrome text than I used to. I’d have fallen out of practice with the version-naming techniques I used to rely on. All of the things I used to do to make up for having worse tooling, I’d be out of practice with.

    But that doesn’t mean that I should use worse tools.


  • This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile

    x-logging:
      &default-logging
      options:
        max-size: '10m'
        max-file: '5'
      driver: json-file
    
    services:
      mongo:
        image: mongo:4.4
        volumes:
          - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
        logging: *default-logging
    
      nightscout:
        image: nightscout/cgm-remote-monitor:latest
        container_name: nightscout
        restart: always
        depends_on:
          - mongo
        logging: *default-logging
        ports:
          - 1337:1337
        environment:
          ### Variables for the container
          NODE_ENV: production
          TZ: [removed]
    
          ### Overridden variables for Docker Compose setup
          # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
          # and manage TLS certificates
          INSECURE_USE_HTTP: 'true'
    
          # For all other settings, please refer to the Environment section of the README
          ### Required variables
          # MONGO_CONNECTION - The connection string for your Mongo database.
          # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
          # The default connects to the `mongo` included in this docker-compose file.
          # If you change it, you probably also want to comment out the entire `mongo` service block
          # and `depends_on` block above.
          MONGO_CONNECTION: mongodb://mongo:27017/nightscout
    
          # API_SECRET - A secret passphrase that must be at least 12 characters long.
          API_SECRET: [removed]
    
          ### Features
          # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
          # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
          ENABLE: careportal rawbg iob
    
          # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
          # When readable, anyone can view Nightscout without a token. Setting it to denied will require
          # a token from every visit, using status-only will enable api-secret based login.
          AUTH_DEFAULT_ROLES: denied
    
          # For all other settings, please refer to the Environment section of the README
          # https://github.com/nightscout/cgm-remote-monitor#environment
    
    

  • To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,

    services:
      nightscout:
        ports:
          - 3000:3000
    

    You can remove the labels (those are only used by Traefik), as well as the Traefik service itself.

    Then just point Nginx to that port (e.g., 3000) on your local machine.

    ---

    Traefik has to know the port, too, but it will auto-detect the port that a local Docker service is running on. It looks like your config is relying on that feature, as I don’t see the label that explicitly specifies the port.
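
    If auto-detection ever picks the wrong port (say, the container exposes more than one), you can pin it explicitly with a label along these lines - this is just a sketch using Nightscout’s port from its Dockerfile, so swap in whatever your container actually listens on:

    services:
      nightscout:
        labels:
          # Tell Traefik which container port to forward traffic to
          - traefik.http.services.nightscout.loadbalancer.server.port=1337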





  • The products currently on the market have architectures that are far more sophisticated than just an LLM. Even something as simple as “Deep Research,” which both OpenAI and Anthropic have available, uses multiple interconnected systems to provide a single response.

    Consider agentic AI, like Claude Code, where they’re using tools, analyzing the results of those tools, iterating, possibly calling out to MCP servers to do other things, etc. The tools allow them to do things like read or modify files in the working directory, execute programs (e.g., your linter, installing dependencies, running your app), query your app itself, and so on.

    And of course, note that the single “Claude” box in that diagram has an architecture that’s more sophisticated than just an LLM. At minimum, consumer-facing LLMs generally have a supervisor that censors problematic inputs and outputs; this doesn’t make the system more competent, but the same concept can be applied to any other sort of transparent wrapper.

    It seems to me that we already have consumer systems that are doing what you described, and we’re already working on enhancing their architectures further.




  • You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.


  • I think the better question than “Does the experience system sound like it has potential,” then, is “Does the overall concept / system have potential?”

    My gut says probably, but it depends a lot more on what you’re willing to put into it and what you want out of it. What’s your metric for success? If it’s something you want to run yourself and share online so a few groups can use it, that’s a lot more achievable than landing a publishing deal, for example. In between those, publishing on DriveThruRPG or something similar at a nominal cost (like $2-$5) would take more effort than the former and less than the latter; and the higher the price and the more players you want, the more effort you’ll need to put in (and a lot of that isn’t just system building, but art, community building, marketing, etc.).

    From what you’ve shared, it sounds like an interesting system. I could especially see it working in an academy setting where grinding skills to be able to pass practical exams is one of the players’ goals. I could also see it working well in a loosely GMed play-by-post game, with the players self-enforcing (or possibly leveraging some tools built into the site to track resource pools, experience, rolling, etc.), though I haven’t played in a forum game myself, so I might be way off-base.

    Did your system have classes or was it completely free-form in terms of gaining access to those skill trees?


  • I run a Monster of the Week game and my players get experience throughout sessions, as well as at the end. The mechanics are basically:

    • It takes 5 experience points to level up.
    • If you fail a roll, you get an experience point.
    • If you level up, you get the benefit immediately.
    • At the end of the session, everyone gets 0-2 experience points.

    I think other PbtA (Powered by the Apocalypse - systems inspired by Apocalypse World) systems do something similar.

    I grew increasingly frustrated with the system of only distributing advancement/experience points at the end of a session.

    Isn’t the simple fix to this to just distribute experience points as soon as they’re earned?

    At some point, I started to devise a play system that relied on a split experience attribution system, with players being able to automatically rack up experience points from directly using their skills/abilities, while the DM would keep a tally of points from goals/missions achieved, distributable at session end.

    Your system sounds like the way that skill-based video game RPGs (Elder Scrolls games and Arcanum come to mind) handle experience.

    In a lot of games I’ve played, I’d rather get experience for in-game accomplishments immediately and to be able to train skills like this during downtime - generally between games.

    To those with more experience in TTRPGs: would this be feasible? Or enticing? Interesting?

    I could see people being interested in it. You get instant gratification and a bit of extra crunchiness. A lot of players enjoy that.

    With the right skill system I could see this being useful. My main concern is that if you put this on top of a system with relatively few skills, it could encourage people to game it by grinding. There are ways to mitigate that, though.

    In a system with fewer skills, instead of just being experience points, the “currency” you earned this way could be used for temporary power ups related to the skill in question.

    You could also limit it so you only rewarded players for story-related tasks.


  • Wow, there isn’t a single solution in here with the obvious answer?

    You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.

    Then, you can either set up Let’s Encrypt on the device itself and have it generate certs in a location Jellyfin knows about (I’m not sure exactly what this entails, as I don’t use this approach) or you can do what I do:

    1. Set up a reverse proxy - I use Traefik but there are a few other solid options - and configure it to use Let’s Encrypt and your domain name.
    2. Your reverse proxy should have ports 443 and 80 exposed, but should upgrade http requests to https.
    3. Add Jellyfin as a service and route in your reverse proxy’s config.

    On your router, forward port 443 to the secure port on your Pi (which, for simplicity’s sake, should also be port 443). You likely also need to forward port 80 so Let’s Encrypt can complete its HTTP-based verification.

    If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback requests, then you can use the server’s IP address and expose Jellyfin’s HTTP ports (e.g., 8080) - just make sure to not forward those ports from the router. You’ll have local unencrypted transfers if you do this, though.

    Make sure you have secure passwords in Jellyfin. Note that you’re exposed if a Jellyfin or Traefik vulnerability is found, so make sure to keep your software updated.

    If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS service - with Docker Compose.
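
    In the meantime, here’s a stripped-down sketch of the shape it takes. The hostname, email, and volume paths are placeholders, the dynamic DNS piece is just a stub (the right container depends on your DNS host), and the exact flags may vary with your Traefik version:

    services:
      traefik:
        image: traefik:v3.0
        restart: unless-stopped
        ports:
          - 80:80
          - 443:443
        command:
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.web.address=:80
          - --entrypoints.websecure.address=:443
          # Upgrade plain HTTP to HTTPS
          - --entrypoints.web.http.redirections.entrypoint.to=websecure
          # Placeholder email; Let's Encrypt needs a real one
          - --certificatesresolvers.le.acme.email=you@example.com
          - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
          - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
        volumes:
          - ./letsencrypt:/letsencrypt
          - /var/run/docker.sock:/var/run/docker.sock:ro

      jellyfin:
        image: jellyfin/jellyfin:latest
        restart: unless-stopped
        volumes:
          - ./jellyfin-config:/config
          - ./media:/media:ro
        labels:
          - traefik.enable=true
          # Placeholder hostname; use your own (sub)domain
          - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
          - traefik.http.routers.jellyfin.entrypoints=websecure
          - traefik.http.routers.jellyfin.tls.certresolver=le
          - traefik.http.services.jellyfin.loadbalancer.server.port=8096

      # Add a dynamic DNS updater service here that matches your DNS host
      # (DuckDNS, Cloudflare, etc.); the right image depends on the provider.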


  • Look up “LLM quantization.” The idea is that each parameter is a number; by default they use 16 bits of precision, but if you store them at smaller sizes, you use less space and have less precision, but you still have the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower and lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)

    If you’re using a 4-bit quantization, then you need roughly half the parameter count (in billions) as GB of VRAM. Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.

    For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:

    • q4_K_M (the default): 43 GB
    • fp16: 141 GB
    • q8: 75 GB
    • q6_K: 58 GB
    • q5_K_M: 50 GB
    • q4: 40 GB
    • q3_K_M: 34 GB
    • q2_K: 26 GB
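
    Those sizes line up with simple bits-per-parameter arithmetic - this is my own back-of-the-envelope check, not something from the model card:

    $$
    70.6 \times 10^{9}\ \text{params} \times \tfrac{16\ \text{bits}}{8\ \text{bits/byte}} \approx 141\ \text{GB (fp16)}
    \qquad
    70.6 \times 10^{9} \times \tfrac{\sim 4.9\ \text{bits}}{8\ \text{bits/byte}} \approx 43\ \text{GB (q4\_K\_M)}
    $$

    (K-quants average a bit more than their nominal bit width because some tensors are kept at higher precision, and you need some VRAM on top of the weights for context and the KV cache - which is why a 43 GB Q4_K_M fits comfortably on 48 GB of combined VRAM while fp16 isn’t close.)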

    This is why I run a lot of Q4_K_M 70B models on two 3090s.

    Generally speaking, there’s not a perceptible quality drop going from 8-bit quantization to Q6_K (though I’ve heard this is less true with MoE models). Below Q6, there’s a bit of a drop going to 5-bit and again to 4-bit, but the model’s still decent. Below 4-bit quantizations, you can generally get better results from a smaller model at a higher quantization.

    TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all of them, have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.


  • I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.

    I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.