

Glad I started using Vaultwarden a while back. Just need to find better apps for Android and Firefox, I guess, since I expect they’re going to try to break compatibility.


What do you use for repeatable recovery and deployment of systems?
I’ve looked at Argo CD and Flux CD. Argo CD was too flaky: when I made changes to Helm files it would often fail to deploy them, and the UI often wouldn’t show the detailed errors from things like Helm syntax errors, so it was a pain to troubleshoot.
Flux CD was just a real pain to configure in the first place, and I didn’t want to learn Kustomize when I already have Helm charts.
And neither really supported staged deployments or handled dependent services well, so I couldn’t get them to deploy the infrastructure-level Helm charts like PostgreSQL before the services that depend on them. Technically, deployment order shouldn’t matter with Kubernetes, but in reality, Argo CD would deploy the other stuff first and wait for it to come up, it never came up because the dependencies weren’t there, and the whole sync choked a lot.
Just an example of the issues I’ve had. But I really want an easy way to make lots of small changes to charts and deploy them quickly, as well as to quickly recover the cluster from backups if something catastrophic happens, like a fire, without having to manually deploy each chart. Just curious how others handle it, or if it’s always manual deployment of charts via the CLI.
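I know Argo CD has sync waves that are supposed to handle the ordering problem, at least within a single Application. A minimal sketch of what that looks like, with placeholder resource names:

```yaml
# Rendered by the PostgreSQL chart. Lower wave numbers are applied first,
# and Argo CD waits for each wave to be healthy before starting the next.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
---
# Rendered by the app chart: only synced once wave -1 is healthy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  annotations:
    argocd.argoproj.io/sync-wave: "0"
```

But that only orders resources inside one Application; ordering across Applications means going full app-of-apps and putting the waves on the Application resources themselves, which never felt worth the ceremony to me.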


But why bother if you only get ULAs? It doesn’t enhance anything, and it adds complexity if you use NAT or other routing, since you need to add rules for both IPv4 and IPv6. Most ISPs, in the US anyway, don’t offer true IPv6, only what was supposed to be transitional technology decades ago, like 6rd. I hate to say anything good about Comcast, but native IPv6 is the one thing they actually do that I miss. Still, having such limited upstream speeds on cable just isn’t reasonable for much of anything these days, and definitely not for self-hosting. 1-10 Mbps up on cable or most DSL just doesn’t cut it.
If you’re starting from scratch, implementing IPv6 on your LAN might be worthwhile, provided you don’t mind the limitations of (or don’t require) transitional technologies like NAT64 on your LAN, and the performance hit from translation/tunneling when accessing the internet doesn’t bother you (it sure annoyed the hell out of me every time I accessed a website, among other things).
But dual stack seems like it’s not worthwhile; just choose one or the other. Few software applications or modern hardware are going to have an issue with IPv6. But if you’re using both ULAs and private IPv4 addresses, it’s a lot of extra hassle to write duplicate routing rules for everything.
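Just to make the duplication concrete, here’s a hypothetical example with made-up addresses; every policy has to be stated twice:

```sh
# IPv4: allow the LAN segment to reach a server in another segment
iptables -A FORWARD -s 192.168.1.0/24 -d 192.168.2.10 -p tcp --dport 443 -j ACCEPT

# ...and the exact same policy again for the ULA prefixes
ip6tables -A FORWARD -s fd12:3456:789a::/64 -d fd12:3456:789a:1::10 -p tcp --dport 443 -j ACCEPT
```

(nftables’ inet family at least lets you keep both in one ruleset, but the addresses still have to be written out twice.)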


I can’t get IPv6 in any worthwhile form from my ISP. IMHO, IPv6 isn’t any more useful than IPv4 if you only have ULAs. And NAT, for example, isn’t as well supported on IPv6, since the protocol was designed on the assumption that NAT wouldn’t really be necessary. So even if you are starting from scratch or just using it internally, there are some disadvantages to implementing it over just sticking with IPv4. But if your ISP actually provides IPv6, it might be worth it, as long as your devices all support it. Otherwise you’re going to need to set up IPv4 in addition anyway, so you’re just going to create problems for no good reason, IMHO.


I mean, most people don’t really understand what a reverse proxy is doing, and with dynamic IP addresses and other complications that residential customers often can’t control, it can be a challenge to configure properly. Not to mention that if you want to use Jellyfin on a device that travels between home and outside, you need to modify the domain or IP address each time you enter or leave the house. Otherwise you just end up routing all the traffic over the internet and back, losing the advantage of LAN speeds and sucking down your ISP traffic quota. Or you need to configure something much more robust, like a local DNS server, to route traffic to the LAN IP address instead of your WAN IP address.
That might not be an issue if you’re lucky enough to get a block of IPv6 addresses from your ISP and can assign one to your server, but at least in the US most ISPs still use IPv4 with workarounds like 6rd, which gives you a single dynamic external IPv6 prefix with all of the same issues as a dynamic IPv4 address. Anyway, for most users, hosting a Plex server is simple (unless they have double-NAT kinds of issues) compared to setting everything up correctly with Tailscale.
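The local DNS piece is less scary than it sounds, for what it’s worth. In dnsmasq it’s a one-liner; the hostname and address here are made up:

```sh
# /etc/dnsmasq.conf on the LAN resolver: answer with the LAN address so
# local clients hit the server directly instead of hairpinning out
# through the WAN and back.
address=/jellyfin.example.home/192.168.1.50
```

But explaining to a casual Plex user why they need to run their own resolver at all is exactly the problem.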


Yeah, CGNAT is such a stupid thing. Why can’t we get IPv6 already and avoid all of the headaches of NAT, dynamic IP addresses, and the rest? None of that stuff should be so complicated in a residential environment.


> Plex prices are expensive just to access your own media. Tailscale can do it for free.
Tailscale isn’t exactly free. It requires a lot more knowledge, configuration, maintenance, etc., than Plex alone.
Sure, many self-hosters have the ability to figure it out and the proper networking and/or server hardware to implement it. But many Plex users aren’t really self-hosters in that sense. Hosting a local media server that handles all of the networking stuff for you is much easier than maintaining a Tailscale or similar setup on top of the media server stuff. I mean, for me, if I hadn’t gotten a lifetime Plex Pass early on for cheap, I probably would have put more effort into my Jellyfin setup. But Plex mostly just works, and I have other, bigger priorities. I hate the functionality they’ve removed, which makes things more difficult than they should be, or I wouldn’t be switching, but it’s not all that bad. So if I didn’t have the expertise and hardware already, I could see it being worth the money to stick with it.


Ok, so a short, wide bus from CPU to memory? Makes sense. I didn’t really mean the CPU so much as that the main board is very laptop-like. Very little expansion capability other than external connectors like audio, Ethernet, etc., and no ability to add functional or incremental upgrades like a GPU or an additional stick of memory, respectively.


So, just glancing at the site, are these basically laptop CPU and RAM parts packaged in a desktop form-factor case, and that’s why they’re soldered? It seems like they also don’t have much expansion capability, much like a laptop, such as only having a single PCIe x4 slot with a proprietary connection interface, so I couldn’t later add a graphics card, for example. Unless I’m just missing something, and if so, please let me know.
Either way thanks for letting me know about the option.


I didn’t realize they were making desktops. I almost bought a laptop from them a few years ago, but ended up finding an ASUS laptop that worked well with Linux and was significantly cheaper, which fit my needs better at the time. I’ll check them out.


People were harassing me for trying to support the person in question, and they were using information from my other posts to do it with hate speech. Their arguments against my points had no real content beyond “I’m right and I’m mad you keep making points that question that rightness.” Removing those comments will hopefully deter others with similar belligerent opinions from digging into my post history to harass me and try to bring harm to me because of my gender, in retaliation for debating their rightness.


Not much right now, with LLM training hogging all of the memory supply across the industry. Your best bet is lightly used.


They were DMed to me by someone who was disagreeing with me on this subject and then started using anti-trans slurs. I got a bunch all in a short period, so I’m guessing it was a single actor or an anti-trans group using multiple accounts. But I decided it wasn’t worth my emotional energy to keep the comments up and have others do the same. It sucks that some people can’t have discussions without finding something they hate about you and using that to make themselves feel superior, when they don’t have any real argument to make other than “you’re wrong.”
Honestly, I abandoned my lemmy.world account last year when Serinus changed the moderation policy so that the “narrative” that trans people are not real, or are mentally ill, or whatever, would no longer be considered hate speech and thus wouldn’t be removed automatically, following the similar change at Facebook. I guess I should continue to steer clear of the server, since it seems to have given bigots the feeling that they aren’t the bad guy, just as I argued it would, like it did with Facebook, X, and the others. It sucks that we can’t live in peace. And that’s all I’ll say on the matter, as I don’t have the energy to convince anyone that trans people do exist; it’s widely accepted medical science from all unbiased medical organizations, and it is definitely as much hate speech to say a trans person isn’t their gender as it is to say a Black person is a non-human primate.


Did you read the policy and how complex it is? Did you look at the fixes they submitted, how simple they were, and how they were rejected for not following a super-complex policy meant for major issues in proprietary software? If an expert submits a low-risk fix for an issue with lots of potential for harm, why not have a simple process, or just accept the fix? I wouldn’t want to follow that complex process and wait for embargoes to pass before being allowed to suggest the fix for each of those issues.


I believe they were already frustrated by the responses to the fixes they did submit.
I get the frustration. That kind of process is how many big companies avoid responsibility, though there it’s usually to avoid the cost of actually fixing things. In a FOSS project, what’s the point of rejecting a simple fix because some complex process, meant for complex issues in proprietary software that the security researcher can’t suggest specific fixes for, wasn’t followed? Why fill out a bunch of “paperwork” and initiate a long embargo period before a fix is considered, when the fix is already submitted and is simple and low-risk enough not to require more than a cursory review? It’s like asking a road engineer who sees a small pothole that only damages a few cars a year, and who offers to fill it because they’re often affected by it, to file a superior court case just to report it, much less fix it.
So it’s a matter of either giving up because reporting is too much of a burden, or announcing it in the most ethical way possible to incentivize fixes actually happening.


They explained that, due to the systemic nature of the issues, many of which exist across all forks of Gitea, and the complexity of the policy, disclosing each one individually and following the separate procedures required by the specifics of each issue would take a very significant amount of time. Probably a full-time job’s worth for a while.
So they could either drop it and give up, spend all of their free time for the foreseeable future properly disclosing each defect, or use the method they chose: getting some level of attention on the problem without exposing details or breaking the security policy, while still letting both developers and users know that there are issues.


Most white-hat security researchers seem like dicks until you realize they are doing most of this research for free and have few ways to get groups to fix the issues beyond spending lots of time doing it themselves or exposing the vulnerabilities in some manner that doesn’t make exploits easy to create for the black-hats.


What’s a good alternative? I’m not 100% sold on Forgejo either, due to some bugs and security issues I’ve run into as well, but I’ve found few self-hosted alternatives that are a good fit. I need very little beyond a Git repo, but I like having the web UI for reviewing code and pull requests, plus basic issue tracking, and I need it to support OIDC; otherwise I’m open. I want something relatively lightweight and fully FOSS, where telemetry and all other external communication can be disabled.
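For context, the OIDC setup I’d be replacing is the standard Gitea/Forgejo auth source, something like this (the name and URL are placeholders for whatever your identity provider uses):

```sh
# Add an OpenID Connect auth source; Gitea and Forgejo share this CLI.
gitea admin auth add-oauth \
  --name sso \
  --provider openidConnect \
  --key "$CLIENT_ID" \
  --secret "$CLIENT_SECRET" \
  --auto-discover-url https://auth.example.home/.well-known/openid-configuration
```

So whatever replaces it needs an equivalent.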


Definitely not needing something that high-end. It’s just me, and maybe one other person, using it periodically for voice commands that need to be real-time. The rest is background-processed stuff like Immich image recognition and Jellyfin audio/video processing. Nothing fancy is needed. I mentioned the motherboard because the system I’m thinking of using is currently running Plex, which I’m in the process of replacing with Jellyfin on my Kubernetes cluster of mini PCs and Raspberry Pis. That cluster runs most of my stuff pretty well, but could benefit from dedicated LLM/ML hardware. So that machine will be freed up, but it’s nearly a decade old and not up to the task as it is.
As for a specific budget, I don’t have one in mind. My Kubernetes cluster is super energy-efficient, since it’s all small systems that only spin up when needed, so I’m thinking about overall cost of ownership vs. benefit. Something too high-end would just waste energy on top of the initial investment.


I mean, at this point in its evolution, what parts does Vaultwarden rely on from Bitwarden? The clients, but there are alternatives like Keyguard for Android devices. What other layers does it rely on? I’ve actually been trying to figure this out myself.