

Well, no. Initially I had the storage set on the VM where it's running. I wasn't expecting it to download all that data.
Well, it's ticked but not working then, because I found duplicate links. Maybe it only works if you try to store the same link twice, but it doesn't work on the imported bookmarks.
I was using floccus, but what is the point of saving bookmarks twice, once in Linkwarden and once in the browser?
Looks very interesting. But as others noted, it's still too young: only two releases in 3 months and one person behind it. Certainly one to keep an eye on. The MIT licence worries me too. I always include the licence in my criteria ;-)
Absolutely, none of that is going past my router.
Interestingly, I did something similar with Linkwarden, where I installed the datasets in /home/user/linkwarden/data. The damn thing caused my VM to run out of space because it started downloading pages for the 4000 bookmarks I had. It went into crisis mode, so I stopped it.

I then created a dataset on my TrueNAS Scale machine and NFS-exported it to the VM on the same server. I simply cp -R'd everything to the new NFS mountpoint, edited the yml file with the new paths and voila! It seems to be working. I know that some Docker containers don't like working off an NFS share, so we'll see.

I wonder how well this will work when the VM is on a different machine, as there is then a network cable, a switch, etc. in between. If for any reason the NAS goes down, the Docker containers on the Proxmox VM will be crying as they'll lose the link to their volumes? Can anything be done about this? I guess it can never be as resilient as having the VM and NAS on the same machine.
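For what it's worth, Docker can also mount the NFS export itself through a named volume, instead of me mounting it on the VM and pointing paths at it. A minimal sketch, assuming a made-up export at 192.168.1.10:/mnt/tank/linkwarden:

volumes:
  linkwarden_data:
    driver: local
    driver_opts:
      type: nfs
      # mount options: adding 'soft' would make I/O error out instead of
      # hanging if the NAS disappears, at the cost of possible data loss
      o: addr=192.168.1.10,rw
      device: ":/mnt/tank/linkwarden"

It doesn't remove the underlying problem, though: if the NAS goes away, writes will still fail or hang depending on the mount options.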
The first rule of containers is that you do not store any data in containers.
Do you mean they should be bind mounts? From here, a bind mount should look like this:
version: '3.8'
services:
  my_container:
    image: my_image:latest
    volumes:
      - /path/on/host:/path/in/container
So referring to my Firefly compose below, I should then simply be able to copy over /var/www/html/storage/upload for the main app data, and the database stored in /var/lib/mysql can just be copied over too? But then why does my local folder not have any storage/upload folders?
user@vm101:/var/www/html$ ls
index.html
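I suspect the answer to my own question is that the compose file below uses named volumes rather than bind mounts, so Docker keeps the data in its own directory instead of under /var/www/html on the host. Assuming a default Docker setup, something like this should reveal the real location (compose usually prefixes the volume name with the project name, so it may not match exactly):

user@vm101:~$ docker volume ls | grep firefly
user@vm101:~$ docker volume inspect <volume-name> --format '{{ .Mountpoint }}'
# typically /var/lib/docker/volumes/<volume-name>/_data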
Here is my docker compose file below. I think I used the standard file that the developer ships, simply because I was keen to get Firefly going without fully understanding the complexity of Docker volume storage.
# The Firefly III Data Importer will ask you for the Firefly III URL and a "Client ID".
# You can generate the Client ID at http://localhost/profile (after registering)
# The Firefly III URL is: http://app:8080/
#
# Other URLs will give 500 | Server Error
#
services:
  app:
    image: fireflyiii/core:latest
    hostname: app
    container_name: firefly_iii_core
    networks:
      - firefly_iii
    restart: always
    volumes:
      - firefly_iii_upload:/var/www/html/storage/upload
    env_file: .env
    ports:
      - '84:8080'
    depends_on:
      - db
  db:
    image: mariadb:lts
    hostname: db
    container_name: firefly_iii_db
    networks:
      - firefly_iii
    restart: always
    env_file: .db.env
    volumes:
      - firefly_iii_db:/var/lib/mysql
  importer:
    image: fireflyiii/data-importer:latest
    hostname: importer
    restart: always
    container_name: firefly_iii_importer
    networks:
      - firefly_iii
    ports:
      - '81:8080'
    depends_on:
      - app
    env_file: .importer.env
  cron:
    #
    # To make this work, set STATIC_CRON_TOKEN in your .env file or as an environment variable and replace REPLACEME below
    # The STATIC_CRON_TOKEN must be *exactly* 32 characters long
    #
    image: alpine
    container_name: firefly_iii_cron
    restart: always
    command: sh -c "echo \"0 3 * * * wget -qO- http://app:8080/api/v1/cron/XTrhfJh9crQGfGst0OxoU7BCRD9VepYb;echo\" | crontab - && crond -f -L /dev/stdout"
    networks:
      - firefly_iii

volumes:
  firefly_iii_upload:
  firefly_iii_db:

networks:
  firefly_iii:
    driver: bridge
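And if I go down the bind-mount route suggested above, I believe the change is just swapping the named volumes for host paths, along these lines (the host paths here are hypothetical, and the data would need copying out of the old volumes first, with the containers stopped):

services:
  app:
    volumes:
      - /home/user/firefly/upload:/var/www/html/storage/upload
  db:
    volumes:
      - /home/user/firefly/db:/var/lib/mysql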
Yeah, it seems registration with IPinfo is required so that you can download a token, which then allows pfBlockerNG to download the ASN database. I've just registered with IPinfo and it seems (unless it's a false alarm) that it now works.
However, I've also learned that the Aruba ASNs I had didn't include the SMTPS server I was using.
Basically, I did an nslookup smtps.aruba.it, got the IP, and then looked up the ASN using the Team Cymru IP to ASN Lookup v1.0 (https://asn.cymru.com/cgi-bin/whois.cgi). I then copied the ASN into the WAN_EGRESS list and bingo, it's working.
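The same lookup can also be done from a shell via Team Cymru's whois server; a sketch, with 203.0.113.10 standing in for whatever IP the nslookup actually returns:

$ nslookup smtps.aruba.it
$ whois -h whois.cymru.com " -v 203.0.113.10"
# the verbose (-v) response lists the AS number, BGP prefix and AS name for that IP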
They are mostly cash. On average 5-10 per day over a 5-hour day.
Ah, right. Docker seems to have gained more ground than LXC if this is the first time I've come across it. I hadn't realised they were similar, especially after I discovered that people are running Docker in LXC …
OK, I should have been clearer. By “community LXC repository on GitHub” I actually meant that I used the LXC scripts. It did go through a few questions at the start, but nothing relating to storage and camera setup.
Immich and Radicale definitely recommended. I've still got paperless-ng and plan to move to paperless-ngx as soon as I find the time. I've also got firefly-iii, which has been a big revolution in how I manage personal finance. Even my 17-year-old son has got into it … he couldn't understand where all his hard-earned money was going.
Looks like I have two options for Proxmox + Frigate:
a) a full VM via QEMU which then runs Frigate as an app container (the Frigate website does not recommend this approach, from what I understand)
b) a virtual environment (VE) through the “Proxmox Container Toolkit” (pct), where Frigate runs in a system container, i.e. an LXC container directly in the Proxmox environment with Docker inside it, which eliminates the VM overhead. See here: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct
Looks like someone has got it up and running in the PCT environment: https://www.homeautomationguy.io/blog/running-frigate-on-proxmox
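For option b), a minimal sketch of what creating such a container from the Proxmox shell might look like; the template name, VMID and sizes are placeholders, and nesting=1 is the feature that lets Docker run inside the LXC:

pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname frigate \
  --features nesting=1 \
  --cores 2 --memory 4096 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200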
Also, I need to get my hands on a micro desktop with a PCIe slot so that I can stick the Coral unit in it. Any thoughts on cheap solutions on eBay?
I just find nextcloud bloated for my use case.
Mate, it was a sarcastic statement 😉