Oh, I still tinker, but as far as deploying something big goes, I've got what I need/want. I am looking for a couple of things, but so far I've come up empty-handed.
Oooh thanks for the tip. Just learned about ^typo^correction that way!
I know nobody at NetBird will see this, but I just finished setting up NetBird via Podman quadlets with Traefik, and it's absolutely amazing!! I was using Headscale before. I'm getting near at-home speeds, and I'm going through the relay server since firewalls prevent a direct connection. The dashboard and documentation are mint, and I can't thank the people at NetBird enough for the recent huge update that makes setting it up so much easier.
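For anyone curious what a quadlet-based setup roughly looks like: below is a minimal sketch of a `.container` unit that Podman's systemd generator turns into a service, with Traefik routing attached via container labels. The image tag, network name, hostname, and port here are illustrative placeholders, not NetBird's actual deployment layout.

```ini
# ~/.config/containers/systemd/netbird-dashboard.container
# Hypothetical quadlet unit; adjust image, network, and hostname to your setup.
[Unit]
Description=NetBird dashboard behind Traefik

[Container]
Image=docker.io/netbirdio/dashboard:latest
# Refers to a traefik.network quadlet file defining the shared network:
Network=traefik.network
# Traefik discovers the service through these labels (assumes a Traefik
# instance watching container events on the same network):
Label=traefik.enable=true
Label=traefik.http.routers.netbird-dash.rule=Host(`netbird.example.com`)
Label=traefik.http.services.netbird-dash.loadbalancer.server.port=80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After dropping the file in place, `systemctl --user daemon-reload` followed by `systemctl --user start netbird-dashboard` starts it like any other unit.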
Next to my Home Assistant this is my second favourite and important piece in my homelab.
This is why I love open source!
Thank you!!
IP addresses. NetBird uses the same IP when I'm external, while Tailscale has a different internal one, so I need to configure the Home Assistant client differently for each.
It was relatively easy once I figured out that NetBird didn't support a wildcard certificate. They just released an update that fixed that, and they now support subdomains. I also still have Headscale as a backup if I need it.
TL;DR: Has anyone here successfully migrated their data & workflow from Logseq to SilverBullet?
… wall of text follows …
I’ve been using Logseq for a few years and it has been a life saver at work, trying to track the stuff going on - honestly, I’d have burned out if I hadn’t found it.
However, I still haven't quite got everything organised, and I feel Logseq's development is taking a different track that I don't want to go down (db, collab, etc.).
SilverBullet.md appears to be developing into the solution I’m looking for… although I don’t want a server-client architecture, so I’m running it standalone at the moment.
But, the learning curve feels so steep it’s tending to curve back on itself… or… I’m just too busy to focus on learning it.
I see how the file structure works, but I don't understand how the templates, journals, etc. work (really simple in Logseq).
It appears to be one person developing this, with lots of helpers who all seem happy to chip in with hastily generated code in the forum, but there's no meaty documentation, examples, etc.
If you’ve read this far… is it worth sticking with? Is there an FAQ I’ve missed? Any pointers or encouragement…?
Logseq has how many? And it's stalled, really - from an external viewpoint.
https://github.com/logseq/logseq/graphs/contributors
About 4. Plus the people who maintain plugins.
https://github.com/logseq/logseq/graphs/commit-activity
They didn’t stop working on the code. But you’re right, it doesn’t feel like it UI wise. Which can be a good thing if the status quo is fine.
Promising. I will take a look.
Yeah, personally, I think they’d gradually make it db first and then markdown will gradually become an import / export function.
Can be. One of the reasons I like md.
Still. At one point I will look at silverbullet.
Vscode, despite its name, is primarily a text editor, and is well suited for writing notes in markdown. I’d give it a go if you haven’t.
A very jovial greeting to all,
About 20 minutes ago, I started the build for Hubzilla 11.2; as usual, it will be available for all to enjoy, and to update their own instances, after about an hour or so, so please don't update until then.
If you’re curious about the code, you are most welcome to check out the Hubzilla code at: https://framagit.org/hubzilla/core/-/releases
and, of course, the docker image code at: https://github.com/dhitchenor/hubzilla
Questions, issues and PRs are all welcome; I’m looking forward to speaking with you.
What is hubzilla?
The feature should have been released in 0.67. Has anyone figured out how to set it up?
GitHub link
Just to update: the feature is now available on the dashboard in 0.67.1. I wonder why the 0.67 notes said it was out.
Hello everyone!
It’s been about 3 months since the last release, and this one took a bit longer than usual. A lot of work went into polishing and refining both the web and mobile apps to make sure it was worth the wait.
Today, we’re excited to announce Linkwarden 2.14!
For those who are new to Linkwarden, it’s a tool for collecting, organizing, reading, and preserving webpages, articles, and documents in one place. Linkwarden is available as a Cloud offering, or you can self-host it on your own server.
This release focuses on performance, usability, security, and platform upgrades.
Collections and subcollections got some important improvements.
Members and their permissions can now be propagated to subcollections, and collection admins can now create subcollections as well.
Tags now support pagination, making large tag lists easier to browse.
This helps keep things faster and more manageable, especially in places like the sidebar and tags page.
We added optimistic rendering to some of the slower parts of the app, especially around links and collections.
That means actions like updating or deleting items can now feel much more immediate, since the UI updates right away instead of waiting for the full request to finish.
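The pattern described above can be sketched in a few lines. This is a generic illustration of optimistic rendering with rollback, not Linkwarden's actual code; the types and function names are invented for the example.

```typescript
// Optimistic delete: update local state immediately, fire the request in the
// background, and restore the previous state only if the request fails.
type Link = { id: number; title: string };

async function deleteLinkOptimistically(
  state: Link[],
  id: number,
  request: () => Promise<void>,       // the server call
  render: (links: Link[]) => void,    // pushes state to the UI
): Promise<Link[]> {
  const previous = state;
  const next = state.filter((l) => l.id !== id);
  render(next); // UI updates right away, before the request resolves
  try {
    await request();
    return next;
  } catch {
    render(previous); // request failed: roll back to the old list
    return previous;
  }
}
```

The key trade-off is that a failure is now visible as a rollback rather than a spinner, so it suits operations that rarely fail.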
Linkwarden now runs on newer foundations across both web and mobile.
These upgrades improve compatibility and give us a stronger base for future improvements.
This release brings a number of user experience improvements across the app, especially around search and settings.
Search is now more helpful and easier to discover, while settings are cleaner and easier to navigate.
We improved how submitted links are validated on the server for safer and more reliable processing. We recommend updating to 2.14 as soon as possible.
As always, this release also includes smaller fixes, UI cleanups, dependency updates, and under-the-hood improvements across the app.
Full Changelog: https://github.com/linkwarden/linkwarden/compare/v2.13.5...v2.14.0
Thanks to everyone who’s been using Linkwarden, reporting bugs, suggesting improvements, contributing, and supporting the project along the way.
This release took a little longer than usual, but a lot of care went into making sure it was worth the wait. It also gives us a much stronger foundation for what’s coming next, and we’re looking forward to sharing more with you in the coming months.
If you’re interested in trying Linkwarden without dealing with server setup and maintenance, our Cloud offering is the easiest way to get started.
We hope you enjoy Linkwarden 2.14!
Agree, I don’t blame the people becoming cynical and distrustful, I think that’s a totally rational and valid response to the current situation. I blame the people (government representatives and companies, more to the point) who are making this the situation in the first place.
I don’t want to live in a world where we trust no-one and nothing. But it’s delusional not to see that we do live in a world that is rapidly moving that direction. We need to do something (a lot of things) to stop it, but we also can’t pretend it’s not happening.
They could add better read-later support.
The FCC ruling yesterday got me thinking about my router: it's probably due for a replacement by the time the theoretical end of firmware updates baked into it takes effect (its natural EOL is likely around the same time). I'm having trouble finding good options, particularly with regard to OpenWrt support.
We currently use two ASUS RT-AX3000 routers in mesh mode: one attached to the modem, because it's in a really shitty location, and one attached to our home server. I have three devices that need 2.4 GHz for smart-home automation; everything else (two laptops, phones, etc.) runs on 5 GHz.
Nothing I can get in local stores is supported by OpenWrt (neither are the current routers). The spare older hardware we have (a 2012 MacBook Pro or an RPi4) seems to have a track record of underperforming in this role. What are the recommendations for upgrades from here?
Follow-up question: am I overthinking it? Would the MacBook Pro or RPi4 with a second Ethernet NIC, running a firewall in front of the routers, also fix the issue of not getting security updates?
MikroTik is a great budget-friendly option too. It's pretty simple to do a standard home setup in RouterOS.
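To give a flavour of what "a standard home setup" involves, here's a rough RouterOS v7 CLI sketch. It assumes ether1 is the WAN port and ether2+ are LAN; the factory default configuration on most MikroTik home models already does essentially all of this out of the box, so treat it as orientation rather than a recipe.

```
# Hypothetical minimal RouterOS v7 home config (ether1 = WAN, assumed).
/ip/dhcp-client/add interface=ether1
/interface/bridge/add name=bridge-lan
/interface/bridge/port/add bridge=bridge-lan interface=ether2
/ip/address/add address=192.168.88.1/24 interface=bridge-lan
/ip/pool/add name=lan-pool ranges=192.168.88.10-192.168.88.254
/ip/dhcp-server/add name=lan-dhcp interface=bridge-lan address-pool=lan-pool
/ip/dhcp-server/network/add address=192.168.88.0/24 gateway=192.168.88.1
/ip/firewall/nat/add chain=srcnat out-interface=ether1 action=masquerade
```

On top of that you'd want the default firewall filter rules (drop inbound on WAN except established/related), which the default config also provides.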
Hello,
I am writing because I wanted to get some opinions from folks here who have actually built and shipped with Electron (or Tauri).
Background: I'm building an API IDE on Electron. It's not really "just an API client", and not a(nother) thin wrapper around a webapp either. It's a pretty original desktop tool with a lot of editor/IDE-like behavior, not the typical form-centric behavior that Postman and others have: local workflows, richer interactions, and some things that honestly would have been much harder for us to build and iterate on this quickly in a more constrained setup. That's why Electron.
this is the tool: github.com/voidenhq/voiden
Now, as adoption is growing, we are starting to get the usual questions about memory footprint and app size.
https://lemmy.world/pictrs/image/78745019-8a0a-4fcb-91b1-210efb417a66.webp
The (slightly) frustrating part is this:
When the app is actually being used, the app-side memory is often pretty reasonable. In many normal cases we're seeing something like 50–60 MB for the actual usage we care about (we even added a readout in the app itself so people can check).
But then people open Activity Monitor, see all the Chromium/Electron-related processes, and the conversation immediately becomes:
“yeah but Tauri would use way less”
And then, without realizing it, I suddenly end up talking and philosophizing about Electron instead of discussing the tool itself (which is what I'm passionate about :)).
And of course, I get it. The broader footprint is real. Chromium is not free. Electron has overhead. Pretending otherwise would be foolish. So we are constantly optimizing what we can, and we will keep doing so…
At the same time, I do feel that a lot of these comparisons are weirdly flattened. For example, people often compare:
the full Electron process footprint vs. the smallest possible Tauri/native mental model
…without always accounting for development speed, cross-platform consistency, ecosystem maturity, plugin/runtime complexity, UI flexibility, and the fact that some apps are doing much more than others. Which is by the way the reason that we went with Electron.
So all this context to get to my real question, which is:
And also, for those of you who have had this conversation a hundred times already:
What do you say when people reduce the whole discussion to “Electron bad, Tauri good”?
Have you found a good way to explain footprint in practical terms?
Where do you think optimization actually matters, vs where people are mostly reacting to the idea of Electron?
Mostly I'm trying to learn how others think about this, especially those who have built more serious desktop products and had to answer these questions in the wild.
Would love your thoughts and advice!
cynically true :)
Hey,
Person here who despises Electron apps, partly because of the memory footprint and partly because I like neither Chromium nor Node.js - personal preference, mainly.
From your description, I have the feeling that it's unclear to your user base whether Electron is settled or up for debate. There is only a thin line between "explaining" and "defending".
In terms of communication: “We’re using electron as foundation because it allows us to focus on development. We’ve considered alternatives like Tauri and XYZ and opted in favor of electron.”
If there are situations that might make you rethink, state those as well ("if someone provides a proof of concept via XYZ showing an alternative is faster by y% while still letting us use [your core libraries and languages], we might consider a refactor").
If you engaged with me after an Electron rant on your codebase, you'd just raise my hope that I might change your mind! Don't give people hope, don't feed the trolls, and do your thing!
Just please be honest with yourself: your app doesn't use "50 to 60 MB", it uses 500 MB-ish at idle because of your choice. And that's okay, as long as you as the developer say that it is.
I have a Firewalla Purple. It's idiot-mode networking and I love it, but I have never been too thrilled with its cloud shit, and I really don't want to rely on it as my only option right now.
A while back I tried spinning up a VM with OPNsense and never got good performance off my home Ryzen server. I tried multiple NICs and even bare-metal installs, and while bare metal was a little more performant, it was never able to reach gigabit on WAN. The Firewalla falls just a hair short of gigabit WAN, but it's still way ahead of my much more muscular server. I notice the CPU load spikes high, and it seems nothing I do can bring that load down for OPNsense. OpenWrt performed a bit better, but still never hit gigabit speeds and was still below the Firewalla's performance; bare metal was again a bit better, but still didn't match the Firewalla.
The Firewalla is a heavily optimized Amlogic-based Pi. It's not special, but it works right and my crap doesn't. I have other SBCs I can use if folding this into the home server as a VM just isn't practical, but the server is always on anyway and already has extra resources I can throw at this, so I'd like to just throw it all in there, snapshot a working config, and be done with it if I can.
I walked away from this a while back thinking I would have a fix if I took a break and came back to it later but I’m still stumped. How are other people doing this?
I do use VLANs, but in testing, even without them - just laptop → server → WAN and nothing else - it couldn't do it.
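One way to narrow down where the throughput is going is to measure each hop separately instead of only end-to-end. This is a generic diagnostic sketch using iperf3, assuming you can run it on a LAN client, on the firewall box itself, and on a host on the far side; the IPs are placeholders.

```
# 1. Raw throughput TO the firewall box itself (no routing involved):
#    on the firewall:   iperf3 -s
#    on a LAN client:   iperf3 -c 192.168.1.1 -t 30
#    If this is below gigabit, the NIC / virtio path is the bottleneck.

# 2. Routed throughput ACROSS the firewall (server on the far side):
#    iperf3 -c 10.0.0.2 -t 30 -P 4
#    Comparing against step 1 isolates routing/firewall overhead;
#    -P 4 also shows whether a single flow is pinned to one core.

# 3. While testing, watch where CPU time goes on the firewall.
#    On FreeBSD/OPNsense, a per-thread view:
#    top -SH
#    Heavy interrupt/kernel threads tend to point at single-queue
#    virtio or missing NIC offloads rather than firewall rules.
```

In virtualized setups, step 1 failing on its own is a common sign that the guest is stuck on one queue/core for all network interrupts.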
My bad you are correct.
Looks good, and love the idea of it, but YouTube download clients don’t work worth a damn on a VPN any longer in my experience.
It's been in development for a long time; I deployed it on my Docker host three years ago.