I fixed it. It seems to work now. For future reference:
Due to my reverse proxy setup, the PHP container identified its own URL as being accessed via HTTP. Setting HTTPS=on in the config did the trick; this forces Symfony to assume HTTPS for all communication.
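For anyone hitting the same issue, a minimal sketch of what that looks like in a docker-compose file (the service and image names here are hypothetical; the HTTPS=on environment variable is the actual fix):

```yaml
services:
  php:                        # hypothetical service name
    image: my-symfony-app     # hypothetical image
    environment:
      HTTPS: "on"             # make Symfony assume HTTPS behind the reverse proxy
```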
Edit: it seems my comments are not being federated, so I could still use some help. Edit 2: it seems all it needed was some patience.
I’ve used NameCheap for about 5 years now and it’s alright. I don’t have to deal with them so much since I only have one domain and my setup is pretty simple.
Same. I've been using NameCheap for years. I have 3 or 4 domains for different things. I really just need a registrar to hold my names and DNS which rarely changes. My domains auto-renew every year and I barely have to think about it. They're fine for my needs, no complaints.
I've been using a Synology NAS for years. I use file and photo sync and it works pretty well. I just set up Surveillance Station (their security camera recording software) and connected an inexpensive camera to it. It works really well although it took me a while to figure everything out.
If you're concerned about data loss, you can set up a backup to Amazon S3 or another backup/storage service, or you can put a second Synology NAS somewhere else and have it back up to that. You could also have it sync with a service like OneDrive.
I like to tinker, so I've set up a couple of other things on it, including a Plex server, Pi-hole, and a VPN server. It's a pretty versatile device.
Synology in an offsite location because it just works, and I have no complaints about their mobile app. At home I’m running Proxmox with TrueNAS and RAIDed drives. A nightly script copies all new files from the Synology to a TrueNAS share.
Sounds similar to my needs. My solution was a self-hosted instance of ownCloud on a Raspberry Pi 4 (Nextcloud was good but had too many bells and whistles, and was unstable on my system). The ownCloud Android app automatically transfers my photos and videos, which are then automatically downloaded to my main PC. Important: this is not a backup solution; if I delete a photo from one instance, it will be deleted on all instances. This system has physical redundancy, as all photos are on at least 2 separate devices at a time.
What was the rationale behind basically using a pi to just forward the files to your main PC? Wouldn't it be more efficient to just do everything on the main PC - using task scheduler or something?
I'm also concerned about putting all my eggs in one basket. I've experienced a shutdown ruining the formatting on a drive and losing everything on it. That, plus the possibility of theft (a PC is more attractive than a little hard drive) or water spilling on the computer and frying something, is what's preventing me from the simple option of just putting everything on the PC.
You should look into ownCloud; it's a self-hosted cloud drive, think Dropbox. The Pi is the machine in my house that is already internet-facing, runs a WordPress blog, and has a domain name attached, so it made sense to use that. The ownCloud Android app sends all my photos and videos to the Pi, and then when I'm at my PC, the ownCloud Windows app pulls the files from the Pi, so all files are synced. Once set up correctly, all of this happens without any manual intervention, and files are stored on 3 physical disks: my phone, my Pi's external hard drive, and my PC. I also have an offsite backup on Oracle's S3-compatible Archive storage.
My network mostly uses NPC and summon names from Final Fantasy XI, because I played that game for many, many years and can associate the personalities of those characters with specific roles the host needs to have. I've also considered using Pokemon names for similar reasons, and with over 1000 current Pokemon species it'd be hard to max out in a home environment.
I did the Hawaiian islands because my wife is from Hawaii. I regret it; it was cute and clever, but now it’s harder to troubleshoot, I max out at 9 (if you count Vegas), and I forget which “island” is attached to which, so now I just use IPs, which defeats the purpose. I’m starting to switch back to functional naming; I’m about to destroy and rebuild everything, so it’ll be a good chance to start over and get it “right”.
Nothing wrong with practical. I'll often name a VM by whatever service is going onto it as a temporary measure until I'm sure it's going to work out, then give it its final name.
There's been some work on getting CLIP to run in pure C++ with quantization in GGML, and there's a curious FasterViT model I saw months ago, so hopefully this can be made faster to run inference on and easier to host as a single binary soon enough.
If you want to host a capable pretrained model, feel free to check out LLaMA, especially llama.cpp, since it allows for speedy inference. For the front-end, there's text-generation-webui, the official web UI, Serge, XInference, or chatbot-ui with LocalAI (a server that exposes llama.cpp through OpenAI's API schema).
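Because LocalAI speaks OpenAI's chat-completions schema, talking to it from a script is just a JSON POST. A minimal sketch, assuming a local instance (the URL, port, and model name are assumptions; adjust for your setup):

```python
import json
from urllib import request

# Hypothetical LocalAI endpoint; LocalAI serves the OpenAI-style
# /v1/chat/completions route on whatever port you configure.
LOCALAI_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "wizardlm") -> dict:
    """Build an OpenAI-schema chat completion payload."""
    return {
        "model": model,  # whatever model name your instance serves
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def send(payload: dict) -> dict:
    """POST the payload to the local server (needs a running instance)."""
    req = request.Request(
        LOCALAI_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("How do I back up a Synology NAS?")
# reply = send(payload)  # uncomment with LocalAI/llama.cpp running locally
```

The same payload works against any of the OpenAI-compatible servers mentioned above, which is the main appeal of that schema.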
For model fine-tunes, I'd personally recommend WizardLM. It's not perfect, far from it, but it seems the closest to GPT-3.5 in my experience. Be sure to never trust what it says, though; it hallucinates less than other fine-tunes I've seen, but still does so frequently enough.
There isn't really much of a need to train a model on a particular community. If you need it to work with changing facts, just throw results from a search engine into the context window. Most of these models were already trained on huge datasets including Reddit, so...
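"Throwing search results into the context window" amounts to string assembly with a length budget. A minimal sketch (the prompt wording and character budget are illustrative, not from any particular library):

```python
def build_context_prompt(question: str, search_results: list[str],
                         max_chars: int = 4000) -> str:
    """Prepend search-engine snippets to a question so the model answers
    from fresh facts instead of stale training data."""
    context = ""
    for snippet in search_results:
        if len(context) + len(snippet) > max_chars:
            break  # stay within the model's context window
        context += snippet.strip() + "\n"
    return (
        "Use the following search results to answer.\n\n"
        f"{context}\nQuestion: {question}\nAnswer:"
    )
```

The resulting string goes straight into the model's prompt; no retraining involved.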
If you want to fine-tune it on the most helpful comments to make sure it generates more consistent advice, I'd recommend QLoRA and a ~1k instruction dataset like in the LIMA paper. Though again, I'm not sure there's any use for that.
selfhosted