In reply to: https://programming.dev/comment/22213947
Ollama is now also possible.
I set up a quick demonstration to show the risks of curl|bash and how a bad actor could potentially hide a malicious script.
It’s nothing new or groundbreaking, but I figure it never hurts to have another reminder.
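A minimal sketch of the safer alternative (download, inspect, pin a checksum, then run) rather than piping straight into a shell. The installer content here is a local stand-in, not any real project's script:

```shell
# Instead of `curl -fsSL https://example.com/install.sh | sh` (hypothetical URL),
# download first, so the exact bytes you reviewed are the bytes you execute.
# Simulate the downloaded installer with a local stand-in:
cat > install.sh <<'EOF'
echo "installing..."
EOF

# Inspect it (e.g. `less install.sh`), then record its checksum:
expected=$(sha256sum install.sh | cut -d' ' -f1)

# Re-verify right before executing; the server can't swap the script
# after you reviewed it, because the hash would no longer match.
actual=$(sha256sum install.sh | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    sh install.sh
else
    echo "checksum mismatch; refusing to run" >&2
fi
```

The point of the split is that a malicious server can detect `curl | bash` (e.g. via streaming behaviour) and serve different content to a pipe than to a plain download; saving the file first removes that gap.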
Hahahaha noticed this too. 1.5 was where it was at tho
I think the general response is from confusion over what you could possibly have been using the url bar for in your browser if you didn’t know you could put urls there.
Hey y’all, this actually isn’t self hosting related, but who have you had good luck with for paid matrix hosting?
Right now, I do enough tinkering with everything that I would be willing to just pay to host a matrix server for my friends.
Unless it really is easy enough to do it on a synology nas for text/voice/screen share…but do I need to pay for a domain still?
We are (like everyone) on matrix.org now but realize we need to move eventually.
If you have your own VPS anyway, there is the Matrix Ansible Playbook which makes the setup with docker containers very easy. But I also get the sentiment that you don’t want to tinker around all the time and just want stuff to work.
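For a sense of scale: the core configuration for that playbook is a couple of small YAML files. A minimal sketch (the domain and secret are placeholders, and exact variable names and tags should be checked against the playbook's docs):

```yaml
# inventory/host_vars/matrix.example.com/vars.yml (example.com is a placeholder)
matrix_domain: example.com
matrix_homeserver_implementation: synapse
matrix_homeserver_generic_secret_key: 'CHANGE_ME'
```

After that, a single `ansible-playbook` run against `setup.yml` (roughly `--tags=setup-all,start`) brings everything up in Docker containers.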
Kudos to you for using Matrix in the first place, I hope you can bring a lot of your friends and family to switch over to it. So far this has been the biggest hurdle on my journey 😅
If you’re Canadian, you can get free and cheap .ca domains https://www.cira.ca/en/why-choose-ca/
Ever since Readarr was officially discontinued, many forks and replacements have popped up. I’m currently running pennydreadful/bookshelf, which seems to be chugging along. Faustvii/Readarr is also around, but doesn’t seem to be actively maintained?
There’s also Chaptarr, which looks promising, but I’ve heard concerns about it being vibe-coded and such (see rreading-glasses: “I do not endorse the vibe-coded Chaptarr project.”). Does anybody know to what extent this is true, and what the code quality is like?
??
Calibre-Web isn’t two separate applications; it’s a Calibre-compatible database served via HTTP. There is no desktop Calibre involved.
There is integrated koreader sync, though.
Yep! For a while I deployed Calibre-Web alongside Calibre in a ‘books’ compose.yaml stack using Docker, with volume mounts exposing my library to both containers. The main thing to be cautious of is not writing to the db from both Calibre and Calibre-Web at the same time, which could result in corruption. Some folks spin up/down Calibre as needed, but I had them both running and was just mindful.

I personally ended up switching from Calibre + Calibre-Web to Calibre-Web Automated and fully removing Calibre; I’m able to do everything from CWA that I was doing in both previously. FWIW, if you are managing devices (e.g., for family), Kobo devices + Kobo sync via CW/CWA is wonderful for usability (books show up on devices ‘natively’).
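A rough sketch of what such a ‘books’ compose.yaml stack can look like, assuming the linuxserver.io images; paths, PUID/PGID, and ports are placeholders you'd adapt:

```yaml
services:
  calibre:
    image: lscr.io/linuxserver/calibre:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /path/to/library:/library   # shared library mount
    ports:
      - "8080:8080"                 # Calibre desktop GUI in the browser

  calibre-web:
    image: lscr.io/linuxserver/calibre-web:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /path/to/cw-config:/config
      - /path/to/library:/books     # same library, pointed at in CW's settings
    ports:
      - "8083:8083"
```

Both containers see the same `metadata.db` through the shared mount, which is exactly why writing from both sides at once is the thing to avoid.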
I am currently looking for Thin Clients on ebay to use as my main server instead of the RPi 4 with an external USB drive.
I found decent offers for:
- Dell Optiplex 3020M with i5-4590T, 4GB RAM, 120GB SSD
- Dell Wyse 5070 with Celeron J4105 or Pentium Silver J5005, both with 8GB RAM, 64GB SSD

Given the current prices of new hardware, my questions are:
- Should I go for 8GB RAM?
- Or are 4GB RAM fine, and should I take double the storage instead?

Things I want to run on this server:
- Karakeep
- FreshRSS
- Paperless-NGX or Papra
- Immich
- Booklore
Because I plan to mostly use Podman, I tried to check for virtualization, and all three support Intel’s VT-x technology. Will that be fine for my use case?
The Advanced Vector Extensions instruction set; introduced with Sandy Bridge in 2011, but not included in Pentium/Celeron-branded processors even after that, for reasons best known to Intel.
Mongo is the application that has most irritated me by requiring it, but I doubt it’s the only one.
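On Linux you can check both capabilities directly from the CPU flags; a quick sketch (vmx = Intel VT-x, svm = AMD-V):

```shell
# Check for hardware virtualization and AVX support via /proc/cpuinfo.
# If a flag is absent, grep exits non-zero and we fall back to "none".
virt=$(grep -m1 -oE 'vmx|svm' /proc/cpuinfo 2>/dev/null || echo "none")
avx=$(grep -m1 -ow 'avx' /proc/cpuinfo 2>/dev/null || echo "none")
echo "virtualization: $virt"
echo "avx: $avx"
```

Worth noting for the hardware above: the J4105/J5005 do support VT-x but lack AVX, while the i5-4590T (Haswell) has both, so AVX-requiring software like recent MongoDB narrows the choice.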
Correct (which is why I mentioned Kata, as that’s a container runtime backed by microvms, sort of like how AWS uses firecracker to run lambdas and “serverless” container workloads)
Another swing and a miss?
CISA flagged two Roundcube Webmail vulnerabilities as actively exploited in attacks and ordered U.S. federal agencies to patch them within three weeks.
thanks just updated then
fuck
My current internet setup is like this (which is common for most people).
fiber line from ISP <-> ISP fiber modem <-> Personal wifi router <-> switch
This is working fine with no issues. But I need to power two devices. I want to reduce this to a single device.
fiber line from ISP <-> Modem+Firewall PC <-> Switch <-> AP1,AP2...
From my initial research, what I need is an SFP module which can be attached to a PC which supports SFP. OPNsense should be able to handle most SFP modules.
What is the community’s take on this? Is this worth the effort? Can I find a mini-PC which supports SFP? Will it be cost effective?
Older 10G SFP+ modules were definitely power hungry. I think they’ve gotten better since then, but I haven’t really looked into how much better.
Yeah. I ended up getting a couple of ms of latency back when I pulled the ISP router too.
thank you for sharing my product here! I grew up on forums, it’s somewhat of a love letter to the mid 2000s I spent many hours of my youth with, happy to answer any questions on the project!
Sure, so I use a ton of codegen and hand-write the OpenAPI schema, JSON schemas, and database model. I often use AI to write the mapping/binding boilerplate that goes between the outside and inside worlds (database stuff to queriers/writers, and HTTP handlers to the actual logic), then I write the logic itself as well as the end-to-end tests. I find language models work very well once you have a clear set of constraints/boundaries, such as a clear API contract plus generated types, or a set of tests that define the behaviour.

I use a mix of Claude and Codex. Claude I find works well for exploratory/experimental work (a ton of the new plugin system was R&D, so Claude helped set up and tear down a bunch of potential implementations and ideas). Codex is a lot less interactive and doesn’t seem to play well with creative, R&D-style exploratory workflows, so I use that one more for well-planned-out features using the codegen mentioned before.
While I somewhat understand the “sticking point”, it allows me to work faster and focus on the more enjoyable side of the craft I’ve honed for almost 20 years. It’s still not a super popular project, and while a couple of friends sometimes help, it’s just me doing it, so the AI helps a lot when I only have a couple of hours a night to work on it!
Outside of pure code, I used a combination of very early generative image models (circa 2022, I think) for the hero art, which started life as a sketch, scanned in, with some iterations in DALL·E (back when it was a standalone app, before ChatGPT absorbed it) and a few hours painting and expanding in Photoshop with my Wacom. For art on future blog posts and other marketing assets, I’m keen to commission a human artist (in case you know anyone!).
And finally, lots of exploratory discussion with ChatGPT on API design, HTTP semantics, cross-browser cookie behaviours, boring stuff like that… very useful!
Do you think it would be worth including a blurb in the readme about how AI tools are used? This kind of crowd is understandably skeptical of many open source projects now, due to irresponsible usage of AI.
I think a blurb would be a great idea, especially for your project.
I feel like the biggest hurdle for your project is that the people it speaks to, especially given the way you market it (analogues to natural, organic things like plants, the purposeful methodology intrinsic to gardening, and the nostalgic throwback to a simpler time of the internet when everything was more hand-made and deliberate), are the same people that will be put off by AI, it being the antithesis of those things.
Making your case for why and how it’s used (i.e., not just vibe-coded slop but something that matters a little more to you) might be enough to keep people on board.
This should be excellent for selfhosters that have all their services in one VM. I haven’t tried this myself, but I think this means you can:
- create memorable links instead of memorizing port numbers: jellyfin.foo-bar.ts.net
- share one service from a machine, instead of all of them, in a more intuitive way
If you’re new to Tailscale Services, it lets you publish internal resources like databases, APIs, and web servers as named services in your tailnet, using stable MagicDNS names. Rather than connecting to individual machines, teams connect to logical services that automatically route traffic to healthy, available backends across your infrastructure. This decoupling makes migrations, scaling, and high availability far easier, without reconfiguring clients, rewriting access policies, or standing up load balancers. Our documentation has details on use cases, requirements, and implementation.
Just a minor issue, and maybe I’m not configuring it correctly, but when I use private resources I have to use the IP instead of the alias. I looked online, and it seems other users were experiencing the same issue of not being able to use the alias. At this point I’m almost thinking it might be easier to set up a second Traefik container that just handles all the local connections and configure it manually. I would love to just type my *.local address and have it be simple like that. Otherwise I love it and everything else it comes with! An alternative could be NetBird, but I want to see if I can figure out that small tidbit of Pangolin first.
Just tried it, Services doesn’t work with funnel. You need to be on the tailnet.