• 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: July 9th, 2023


  • Before you can decide on how to do this, you’re going to have to make a few choices:

    Authentication and Access

    There are two main ways to expose a git repo, HTTPS or SSH, and each has pros and cons here:

    • HTTPS A standard sort of protocol to proxy, but you’ll need to make sure you set up authentication on the proxy properly so that only those who should have access can get it. The git client will need to store a username and password to talk to the server, or you’ll have to enter them on every request. gitweb is a CGI that provides a basic, but useful, web interface.

    • SSH Simpler to set up, and authentication is a solved problem. Proxying it isn’t hard: just forward the port to any of the backend servers, which avoids decrypting on the proxy (see the sketch below). You will want to use the same host key on all the servers though, or SSH will refuse to connect. Other than that, it doesn’t require any special setup.
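
    As a minimal sketch of that port forwarding, assuming HAProxy as the load balancer and placeholder backend addresses, a TCP-mode proxy passes the SSH stream through untouched and only fails over when the earlier servers are down:

        frontend git_ssh
            bind *:2222
            mode tcp
            default_backend git_servers

        backend git_servers
            mode tcp
            # all traffic goes to server1 while it's up; the backups
            # only take over, in order, when it isn't
            server server1 10.0.0.1:22 check
            server server2 10.0.0.2:22 check backup
            server server3 10.0.0.3:22 check backup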

    Replication

    Git is a distributed version control system, so you could replicate at that level; alternatively you could use a replicated filesystem, or simple file-based replication. Each has its own trade-offs.

    • Git replication Using git pull to replicate between repositories is probably going to be your most reliable option, as it’s the job git was built for, and it doesn’t rely on messing with git’s underlying files directly. The one caveat is that if you push to different servers in quick succession you may cause a merge conflict, which would break your replication. The cleanest way to deal with that is to have the load balancer send all requests to server1 if it’s up, and only switch to the next server if all the prior ones are down. That way writes will all be going to the same place. Then set up replication in a loop, with server2 pulling from server1, server3 pulling from server2, and so on, up to server1 pulling from server5 (see the sketch after this list). With frequent pulls, changes committed to server1 will quickly replicate to all the other servers. This is effectively a shared-nothing solution, as none of the servers share resources, which makes it easier to geographically separate them. The load balancer could be replaced by a CNAME record in DNS, with a daemon that updates it to point to the correct server.

    • Replicated filesystem Git stores its data in a fairly simple file structure, so placing that on a replicated filesystem such as GlusterFS or Ceph would mean multiple servers could use the same data. From experience, this sort of thing is great when it’s working, but can be fragile and break in unexpected ways. You don’t want to be up at 2am trying to fix a file replication issue if you can avoid it.

    • File replication. This is similar to the git replication option, in that you have to be very aware of the risk of conflicts. A similar strategy would probably work, but I’m not sure it brings you any advantages.
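
    As a rough sketch of that pull loop, assuming bare repos under /srv/git and that each server has its upstream neighbour configured as a remote named after it (the path and remote name are placeholders; on a bare repo the equivalent of git pull is a fetch with a mirroring refspec):

        # crontab on server2, running as the repo owner: mirror all refs
        # from server1 once a minute; the '+' forces the update so the
        # loop keeps converging on whatever the write target has
        * * * * * git --git-dir=/srv/git/myrepo.git fetch --prune server1 '+refs/*:refs/*'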

    I think my preferred solution would be to have SSH access to the git servers and to set up pull-based replication on a fairly fast schedule (where fast is relative to how frequently you push changes). You mention having a VPS as one of the servers, so you might want to push changes to that rather than have it be able to connect to your internal network.

    A useful property of git is that, if the server is missing changesets, you can just push them again. So if a server goes down before your last push gets replicated, you can just push again once the system has switched to the new server. Once the first server comes back online it’ll naturally get any changesets it’s missing and effectively ‘heal’.


  • Parks are great, but unless they’re directly outside the houses where I can keep an eye on what’s happening they’re not as safe or convenient. Being able to send the kids into the garden to run off some energy whilst I’m in the house doing something, and being reasonably confident that they’re safe, is a huge benefit.

    That’s certainly not impossible with a bit of sensible planning around how housing is laid out, putting clusters of housing directly around a shared green space, but it is rather challenging to retrofit in existing conurbations, and impossible in more spread out communities. The American style of huge featureless lawn surrounding the house right up to the property boundary is pretty awful, but the more European style of a bit of lawn surrounded by flower beds and maybe trees is rather better.



  • notabot@lemm.ee to Selfhosted@lemmy.world · Testing vs Prod · 13 days ago

    I manage all my homelab infra stuff via ansible and run services via Kubernetes. All the ansible playbooks are in git, so I can roll back if I screw something up, and I test it on a sacrificial VM first when I can. Running services in Kubernetes means I can spin up new instances and test them before putting them live.

    Working like that makes it all a lot more relaxing, as I can be confident in my changes and back them out if I still get it wrong.
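
    As a hedged sketch of that test-first flow, assuming a playbook called site.yml and a sacrificial test VM in the inventory named testvm (both placeholder names):

        # dry run against the test VM first, showing what would change
        ansible-playbook site.yml --limit testvm --check --diff

        # apply for real, still confined to the test VM
        ansible-playbook site.yml --limit testvm

        # only once that's proven out, run it everywhere
        ansible-playbook site.yml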


  • Even in cases like this justice must not just be done, but be seen to be done. It seems her guilt has been established, which is good; her sentencing comes next. It seems unlikely that there are any mitigating circumstances to reduce the punishment, but that judgement must be seen to be fair. The French citizenry are not renowned for their forbearance in the face of injustice, so I would be tempted to trust their system for now.

    ETA: In fact, it seems like the punishment has already been decreed: five years’ ineligibility to run for office, four years in prison (two suspended), and a fine. That puts her out of the running for president, and likely tarnishes her enough to keep her down even after 2030.




  • From the article:

    The 10-person team is trapped at the remote Sanae IV base, which is on a cliff edge about 105 miles inland from the ice shelf, by encroaching ice and weather as the southern hemisphere winter sets in. Teams overwintering at the base are typically cut off for 10 months at a time. Sources told South Africa’s Sunday Times that the only way to leave the base now was via emergency medical evacuation to a neighbouring German base about 190 miles away.

    As far as I can see it’s currently the end of the Antarctic summer; winter is just starting and will likely last until October. It sounds like something went badly wrong with both the psychological screening of the team members and the decision for the icebreaker that delivered them to leave before the situation was resolved.




  • I have a strong suspicion that if I was suddenly attacked, my brain would dump all ideas of fighting back and just freeze, which of course allows the violence to happen.

    Find, and take, local self defence classes. Not necessarily martial arts classes (though they may be involved), but real world self defence. It’ll be grittier, nastier and much better practice. Get used to grappling and fighting in a controlled environment, and you’ll be much less likely to freeze if you need it in an emergency.

    You’re right that it’ll take a long time to change at a cultural level, but that needs to start somewhere, and one person doing it and then encouraging others could be a local catalyst.



  • In this, trump’s acting in the same way he did in his first term: he believes and promotes the opinions of the last person who spoke to him. This time Starmer was that person, and he’s been trying to get trump to maintain support for Ukraine for a while. This new stance will last until someone else comes along and persuades him otherwise. I would expect to see various E.U. leaders falling over themselves to say how great trump is for restarting support, to avoid this happening.


  • Ah, ok. You’ll want to specify two AllowedIPs ranges on the clients: 192.168.178.0/24 for your home network, and 10.0.0.0/24 for the other clients. Then you’re going to need to add a couple of routes:

    • On the phone, a route to 192.168.178.0/24 via the wireguard address of your home server
    • On your home network router, a route to 10.0.0.0/24 via the local address of the machine that is connected to the wireguard vpn. (Unless it’s your router/gateway that is connected)

    You’ll also need to ensure IP forwarding is enabled on both the VPS and your home machine.
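
    As a rough sketch of those routes and the forwarding switch, assuming the home machine that runs wireguard sits at 192.168.178.10 on your LAN (a placeholder address):

        # on the home network router: reach the VPN clients via the wireguard host
        ip route add 10.0.0.0/24 via 192.168.178.10

        # on both the VPS and the home machine: enable IP forwarding now...
        sysctl -w net.ipv4.ip_forward=1
        # ...and persistently
        echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-wireguard.conf

        # the phone needs no manual route: listing 192.168.178.0/24 in its
        # AllowedIPs makes wireguard route that subnet through the tunnel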


  • The allowed IP ranges on the server indicate what private addresses the clients can use, so you should have a separate one for each client. They can be /32 addresses as each client only needs one address and, I’m assuming, doesn’t route traffic for anything else.

    The allowed IP range on each client indicates what private address the server can use, but as the server is also routing traffic for other machines (the other client for example) it should cover those too.

    For example, on your setup you might use:

    On home server:
        AllowedIPs = 192.168.178.0/24
        Address = 192.168.178.2

    On phone:
        AllowedIPs = 192.168.178.0/24
        Address = 192.168.178.3

    On VPS:
        Address = 192.168.178.1
        Home server peer: AllowedIPs = 192.168.178.2/32
        Phone peer: AllowedIPs = 192.168.178.3/32
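
    Pulled together into complete wg-quick config files, a hedged sketch of the same setup; the keys, endpoint address, and listen port are placeholders you’d substitute from your own machines:

        # VPS: /etc/wireguard/wg0.conf
        [Interface]
        Address = 192.168.178.1/24
        ListenPort = 51820
        PrivateKey = <vps-private-key>

        # home server peer
        [Peer]
        PublicKey = <home-server-public-key>
        AllowedIPs = 192.168.178.2/32

        # phone peer
        [Peer]
        PublicKey = <phone-public-key>
        AllowedIPs = 192.168.178.3/32

        # Home server: /etc/wireguard/wg0.conf
        # (the phone's config is the same shape, with Address 192.168.178.3
        # and its own keys)
        [Interface]
        Address = 192.168.178.2/24
        PrivateKey = <home-server-private-key>

        [Peer]
        PublicKey = <vps-public-key>
        Endpoint = <vps-public-ip>:51820
        AllowedIPs = 192.168.178.0/24
        PersistentKeepalive = 25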