I fixed it. It seems to work now. For future reference:
Due to my reverse proxy setup, the PHP container identified its own URL as being accessed via HTTP. Setting HTTPS=on in the config did the trick; this forces Symfony to assume HTTPS for all communication.
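For anyone hitting the same thing, a minimal sketch of that override (the variable name is from the fix above; exactly where you inject it depends on your container setup):

```shell
# Force Symfony to treat every request as HTTPS, even though the
# TLS-terminating reverse proxy reaches the PHP container over plain HTTP.
# Put this in the container's environment (e.g. docker-compose
# "environment:") or an exported variable in the entrypoint.
export HTTPS=on
```

Depending on your Symfony version, configuring trusted proxies so Symfony honors the `X-Forwarded-Proto` header may be the cleaner long-term fix, but the env var is the quick one.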
Edit: it seems my comments are not being federated, so I could still use some help. Edit 2: it seems all it needed was some patience.
Synology in an offsite location because it just works, and I have no complaints about their mobile app. At home I'm running Proxmox with TrueNAS and RAIDed drives. A nightly script copies all new files from the Synology to a TrueNAS share.
My network mostly uses NPC and summon names from Final Fantasy XI, because I played that game for many, many years and can associate the personalities of those characters with specific roles the host needs to have. I've also considered using Pokemon names for similar reasons, and with over 1000 current Pokemon species it'd be hard to max out in a home environment.
Sounds similar to my needs. My solution was a self-hosted instance of ownCloud on a Raspberry Pi 4 (Nextcloud was good but had too many bells and whistles, and was unstable on my system). The ownCloud Android app automatically transfers my photos and videos, which are then automatically downloaded to my main PC. Important: this is not a backup solution; if I delete a photo from one instance, it will be deleted on all instances. This system has physical redundancy, as all photos are on at least 2 separate devices at a time.
What was the rationale behind basically using a pi to just forward the files to your main PC? Wouldn't it be more efficient to just do everything on the main PC - using task scheduler or something?
I'm also concerned about putting all my eggs in one basket. I've experienced a shutdown ruining the formatting on a drive and losing everything on it. That, plus the possibility of theft (a PC is more attractive than a little hard drive) or water spilling on the computer and frying something, is what's preventing me from the simple option of just putting everything on the PC.
You should investigate ownCloud; it's a self-hosted cloud drive, think Dropbox. The Pi is the machine in my house that is already internet-facing, with a WordPress blog running and a domain name attached, so it made sense to use it. The ownCloud Android app sends all my photos and videos to the Pi, and then when I'm at my PC, the ownCloud Windows app pulls the files from the Pi, so all files are synced. Once set up correctly, all of this happens without any manual intervention, and files are stored on three physical disks: my phone, the Pi's external hard drive, and my PC. I also have an offsite backup on Oracle S3 Archive.
I did the Hawaiian islands because my wife is from Hawaii. I regret it: it was cute and clever, but now it's harder to troubleshoot, I max out at 9 (if you count Vegas), and I forget which "island" is attached to which role, so now I just use IPs, which defeats the purpose. I'm starting to switch back to functional naming; I'm about to destroy and rebuild everything, so it'll be a good chance to start over and get it "right".
Nothing wrong with practical. I'll often name a VM after the service going onto it as a temporary measure until I'm sure it's going to work out, and then give it its final name.
There's been some work on getting CLIP to run in pure C++ with quantization in GGML, and there's a curious FasterViT model I saw a few months ago, so hopefully this can be made faster at inference and easier to host as a single binary soon enough.
If you want to host a capable pretrained model, feel free to check out LLaMA, especially via LLaMA.cpp, since it allows for speedy inference. For the front end, there's text-generation-webui, the official web UI, Serge, XInference, or chatbot-ui with LocalAI (a server that exposes LLaMA.cpp through OpenAI's API schema).
For the model fine-tunes, I'd personally recommend WizardLM. It's not perfect, far from it, but it seems the closest to GPT-3.5 in my experience. Be sure to never trust what it says, though; it hallucinates less than other fine-tunes I've seen, but still does so frequently enough.
There isn't really much of a need to train a model on a particular community. If you need it to work with changing facts, just throw results from a search engine into the context window. Most of these models were already trained on huge datasets including Reddit, so...
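As a toy illustration of that context-window approach (the results and question below are placeholders, not real search output):

```shell
# Instead of fine-tuning on changing facts, prepend search results to the
# prompt at query time. Everything below is placeholder text.
RESULTS="1. RAID1 mirrors two disks.
2. RAID5 needs at least three disks."
QUESTION="What RAID level should I use for two disks?"
PROMPT="Answer using only these search results:

$RESULTS

Question: $QUESTION"
printf '%s\n' "$PROMPT"
```

That assembled prompt then goes to whatever local model you're running (LLaMA.cpp, text-generation-webui's API, etc.).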
If you want to fine-tune it on the most helpful comments to make sure it generates more consistent advice, I'd recommend QLoRA and a 1k-instruction dataset like in the LIMA paper. Though again, I'm not sure there's any use for that.
If you don’t have much time, I would keep it as simple as possible. Just put Fedora on it, administer it through Cockpit if you like a web GUI, and run the software via Podman self-updating containers. Storage on Btrfs RAID1.
Thanks! I have heard of Cockpit and Podman but never used them. I do use Fedora Workstation on my main laptop and find it quite reliable. Can you share a few pros and cons?
Cockpit is not the most advanced when it comes to monitoring, but it keeps things simple and manageable.
Podman runs all Docker containers (at least in rootful mode), but you're better off turning the usual docker-compose scripts into systemd service files via the built-in Quadlet system. It's a bit more work initially, but then all the containers are nicely managed like any other service via systemd.
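For illustration, a minimal Quadlet unit written from the shell; the container name and image are arbitrary examples, not anything specific to this setup:

```shell
# A Quadlet ".container" file turns a container into a normal systemd
# service. Rootless user units live under ~/.config/containers/systemd/.
mkdir -p "$HOME/.config/containers/systemd"
cat > "$HOME/.config/containers/systemd/whoami.container" <<'EOF'
[Unit]
Description=Example web container managed by Quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80
# Lets "podman auto-update" pull newer images automatically
AutoUpdate=registry

[Install]
WantedBy=default.target
EOF
```

After a `systemctl --user daemon-reload`, the generated `whoami.service` starts, stops, and logs like any other systemd unit.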