

Holy shit, how can you read that? For a minute I thought the text was encoded until maybe I clicked a button; then I thought it might be Arabic; now I see that it’s English, but I can’t even make out all the letters. No thanks.


I’m pretty sure he knew why.


In my experience, running Ollama locally works great. I do have a beefy GPU, but even on affordable consumer-grade GPUs you can get good results with smaller models.
So running an AI agent locally technically works, but my experience has been that coding agents don’t work well that way. I haven’t tried general-purpose AI agents.
I think the amount of VRAM affordable/available to consumers is nowhere near enough to support the context length a coding agent needs to remain coherent. There are tools like Get Shit Done which are supposed to help with this, but I didn’t have much luck.
So I’m using OpenCode via OpenRouter to use LLMs in the cloud. Sad that I can’t get local-only to work well enough to use for coding agents, but this arrangement works for me (for now).
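For anyone who wants to try the local route anyway, here’s a rough sketch of what I mean by running Ollama locally. The model name is just an example — pick whatever fits your VRAM:

```shell
# Pull a small model that fits in consumer VRAM (model name is an example)
ollama pull qwen2.5-coder:7b

# One-off prompt from the CLI
ollama run qwen2.5-coder:7b "Write a function that reverses a string"

# Ollama also serves a local HTTP API (port 11434 by default),
# which is what agent frontends typically point at
curl http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5-coder:7b", "prompt": "hello", "stream": false}'
```

The HTTP API is the piece that matters for agents: tools like OpenCode can be pointed at that local endpoint instead of a cloud provider.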


Rules 1, 2 and 6


When you say “in Lemmy communities”, do you mean “on the internet”?


But it also reinforces the “keep everything just in case” mentality/habit. This situation feels good precisely because it beat the odds. 🙂


I don’t care if AI was used in its creation. I do care if it’s FOSS/libre.
And also, it’s a bit weird to me that copying YouTube’s UI is considered good. I haven’t used YouTube in a long time, but I recall there being some good aspects and some bad. Why not create your own version of a UI?


I agree that more options are a good thing, and that ActivityPub would be a plus. But FYI, I won’t be using it because of the license. I use only FOSS whenever possible.


Can I?


Server01: 64
Server02: 19
Plus a bunch of sidecar containers solely for configs that aren’t running.


Not positive, but I think you left in a reference to real info (twilightparadox.com) instead of “example-fying” it (mydomain.com), in the paragraph just before section 4:
> For example say I have home-assistant running on a Pi with the local address 192.168.0.11, I could create a subdomain named ha that has the value mysub.twilightparadox.com then create the following nginx config

```nginx
server {
    listen 80;
    server_name ha.mydomain.com;
    resolver 192.168.0.1;

    location / {
        proxy_pass http://192.168.0.11/;
    }
}
```

> When nginx sees a request for ha.mydomain.com it passes it to the address 192.168.0.11 port 80.
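Side note for anyone following along: an easy way to check a config like that is working is to hit nginx directly with the right Host header (server IP is a placeholder here):

```shell
# Send a request for the subdomain straight to the nginx box;
# you should get home-assistant's response back
curl -H "Host: ha.mydomain.com" http://<nginx-server-ip>/
```

This skips DNS entirely, so it isolates the proxy config from any DNS/CNAME mistakes like the one above.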


I self-host a CA server with [step-ca](https://github.com/smallstep/certificates), and I also use it to create my mTLS certs.
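For the curious, issuing a client cert from a self-hosted step-ca looks roughly like this (hostnames and file names are just examples for my setup):

```shell
# Trust your CA and point the step CLI at it (URL/fingerprint are examples)
step ca bootstrap --ca-url https://ca.internal:9000 \
  --fingerprint <your-ca-fingerprint>

# Request a cert + key signed by your CA, usable as an mTLS client cert
step ca certificate "client.internal" client.crt client.key

# Inspect what was issued
step certificate inspect client.crt
```

Then you hand `client.crt`/`client.key` to whatever service needs to authenticate, and configure the server side to require certs signed by your CA.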
It does seem like what I’m seeing would be someone’s idea/joke of a drunk mode, but it’s how the page loads for me - I didn’t click anything. And I can’t read the buttons to see if one says “turn off drunk mode”.