What about solvespace web build: https://github.com/solvespace/solvespace/tree/emscripten#building-for-web
I don’t know a lot about tailscale, but I think that’s likely not relevant to what’s possible (but maybe relevant to how to accomplish it).
It sounds like the main issue here is dns. If you wanted to/were okay with just IP based connections, then you could assign each service to a different port on Bob’s box, and then have nginx point those ports at the relevant services. This should be very easy to do with a raw nginx config. I could write one for you if you wanted. It’s pretty easy if you’re not dealing with https/certificates (in which case this method won’t work anyway).
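A minimal sketch of that kind of raw nginx config, with made-up ports and backend addresses (swap in wherever the services actually listen on Bob’s box):

```nginx
# one server block per service, each on its own external port
server {
    listen 8080;                          # external port chosen for jellyfin
    location / {
        proxy_pass http://127.0.0.1:8096; # hypothetical jellyfin backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 8081;                          # external port chosen for jellyseer
    location / {
        proxy_pass http://127.0.0.1:5055; # hypothetical jellyseer backend
        proxy_set_header Host $host;
    }
}
```

Clients would then hit http://bobs-box-ip:8080 and :8081 directly, no DNS needed.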
Looking quickly on google for NPM (nginx proxy manager, which I’ve never used), this might require adding the ports to the docker config and then using that port in NPM (like here). This is likely the simplest solution.
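If the services are in docker, publishing each one on its own host port might look like this compose fragment (images, names, and port numbers here are just illustrative):

```yaml
# docker-compose.yml sketch — images and ports are illustrative
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"   # host port : container port
  jellyseerr:
    image: fallenbagel/jellyseerr
    ports:
      - "5055:5055"
```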
If you want hostnames/https, then you need some sort of DNS. This is a bit harder. You can take over their router like you suggested. You could use public DNS that points at a private IP (this is the only way I’m suggesting to get public trusted ssl certificates).
You might be able to use mdns to get local DNS at Bob’s house automatically, which would be very clean. You’d basically register names like jellyseer.local and jellyfin.local on Bob’s network from the box, then set up the proxy manager to proxy based on those domains. You might be able to just do avahi-publish -a -R jellyseer.local 192.168.box.ip and then avahi-publish -a -R jellyfin.local 192.168.box.ip. Then any client that supports mdns/avahi will be able to find the services at those hostnames. You can then register those names in nginx/NPM and I think things should just work.
To answer your questions directly:
I’d be happy to try and give more specifics if you choose a path similar to one of the above things.
Yeah openwrt should be great. It uses nftables as a firewall on a Linux distribution. You can configure it through a pretty nice ui, but you also have ssh access to configure everything directly if you want.
The challenge is going to be what the ISP router supports. If it supports bridge mode then things are easy: you put your router downstream of it and treat it like a modem, then configure openwrt as if it were the only router in the network. This is the opposite of what you’ve suggested: the upstream ISP router is in pass-through, and the openwrt router gets the ipv6 GUA prefix. (You might even be able to get a larger prefix delegated if you set the settings to ask for it.)
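On the openwrt side, asking upstream for a bigger delegated prefix is just a couple of options on the wan6 interface (a sketch; whether you actually get more than a /64 is entirely up to the ISP router):

```
# /etc/config/network fragment on the openwrt router
config interface 'wan6'
        option device 'wan'
        option proto 'dhcpv6'
        option reqaddress 'try'
        option reqprefix '56'   # ask for a /56; you get whatever upstream is willing to delegate
```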
If you don’t have bridge mode then things are harder. There’s some helpful information here https://forum.openwrt.org/t/ipv6-only-slaac-dumb-aps/192059/19 even though the situation is slightly different since they also don’t want a firewall. But you probably need to configure your upstream side on the openwrt router similarly.
Also looking more, the tplink ax55 isn’t supported by openwrt. If you don’t already have it, I’d get something that does. (Or if the default software on the ax55 supports what you want, that’s fine too. I just like having the full control openwrt and similar gives)
I’d recommend something that you can put openwrt or opnsense/pfsense on. I think the tplink archers support openwrt at least.
The ISP router opening things at a port level instead of a host level is kinda insane. Do they only support port forwarding? Or when you open a port range, can you actually send packets from the WAN to any LAN address at that port?
Can you just buy your own modem, and then also use your own router? (If the reason you need the ISP router is that it also acts as a modem).
Does the ISP router also provide your WiFi? If it does you should definitely go with a second router/access point and then disable the one on the ISP router.
Docker Desktop is not what most people on Linux are using. They’re using docker engine directly, which doesn’t run in a VM and doesn’t require virtualization, since containers share the host’s kernel.
You have two options for setting up https certificates and then some more options for enabling it on the server:
1: You can generate a self-signed certificate. This will trigger an angry scary warning in all browsers and may prevent chrome from connecting at all (I can’t remember the current status of this). Its security is totally fine if you are the one using the service, since you can verify the key is correct yourself.
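Generating a self-signed certificate is basically a one-liner with openssl (the hostname and IP below are placeholders; put whatever name or address you’ll actually connect with in the subjectAltName):

```shell
# generate a self-signed cert + key, valid 1 year (hypothetical hostname/IP)
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout server.key -out server.crt \
  -subj "/CN=myserver.lan" \
  -addext "subjectAltName=DNS:myserver.lan,IP:192.168.1.10"

# inspect what was generated
openssl x509 -in server.crt -noout -subject -dates
```

You then point your service (or reverse proxy) at server.crt and server.key.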
2: You can get a certificate for a domain that you own and point the domain at the server. The best way to do this is probably through letsencrypt. This requires owning a domain, but those are like $12 a year, and one is highly recommended for any services exposed to the world. (You can continue to use a dynamic DNS setup, but you need one that supports custom domains.)
Now that you have a certificate, you need to know: does the service you’re hosting support https directly? If it does, then you install the certificate in it and call it a day. If it doesn’t, then this is where a reverse proxy is helpful. You set up the reverse proxy to use the certificate and speak https, and it connects to the service over plain http. This is called SSL termination.
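SSL termination in nginx looks roughly like this (the domain, certificate paths, and backend address are all placeholders):

```nginx
# https on the outside, plain http to the service on the inside
server {
    listen 443 ssl;
    server_name myservice.example.com;       # hypothetical domain

    ssl_certificate     /etc/ssl/myservice/fullchain.pem;
    ssl_certificate_key /etc/ssl/myservice/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;    # the service, speaking plain http
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```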
There’s also the question of certificate renewal if you choose the letsencrypt option. Letsencrypt requires port 80 for the standard (HTTP) certificate renewal. If you already have a service running on port 80 (on the router’s external side), then you will have a conflict. This is the second case where a reverse proxy is helpful: it can allow two services (letsencrypt certificate renewal and your other service) to share the same external port. If you don’t need port 80, then you don’t need it. I guess you could also set up a DNS-based certificate challenge and avoid this issue entirely; that would depend on your DNS provider.
So to summarize:
IF the service doesn’t support SSL/https, OR (you want a letsencrypt certificate AND are already using port 80):
    use a reverse proxy (or maybe do a DNS challenge with letsencrypt instead)
ELSE:
    you don’t need one, but you can still use one.
Reverse proxies don’t keep anything private. That’s not what they are for. And if you do use them, you still have to do port forwarding (assuming the proxy is behind your router).
For most home hosting, a reverse proxy doesn’t offer any security improvement over just port forwarding directly to the server, assuming the server provides the access controls you want.
If you’re looking to access your services securely (in the sense that only you will even know they exist), then what you want is a VPN (for vpns, you also often have to port forward, though sometimes the forwarding/router firewall hole punching is setup automatically). If the service already provides authentication and you want to be able to easily share it with friends/family etc then a VPN is the wrong tool too (but in this case setting up HTTPS is a must, probably through something like letsencrypt)
Now, there’s a problem because companies have completely corrupted the normal meaning of a VPN with things like nordvpn, which are actually more like proxies and less like VPNs. A self-hosted VPN will allow you to connect to your home network and all the services on it without having to expose those services to the internet.
In a way, VPNs often function in practice like reverse proxies: both control traffic from the outside before it gets to things inside. But beyond that they are quite different. A reverse proxy controls access to particular services, usually http based and pretty much always TCP/IP or UDP/IP based. A VPN controls access to a network (hence the name virtual private network). When set up, it shows up on your client like any other Ethernet or WiFi network you would plug into. You can then access other computers that are on the VPN, or that have been given access to the VPN through the VPN server.
The VPN software usually recommended for this kind of setup is wireguard/openvpn or tailscale/zerotier. The first two are more traditional VPN servers, while the second two are more distributed/“serverless” VPN tools.
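For scale: a minimal wireguard server config is only a handful of lines (every value below is a placeholder, and you’d still forward the UDP port on your router):

```ini
# /etc/wireguard/wg0.conf on the server — all values are placeholders
[Interface]
Address = 10.8.0.1/24           # VPN-internal address of the server
ListenPort = 51820              # UDP port to forward on the router
PrivateKey = <server-private-key>

[Peer]                          # one [Peer] section per client
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32        # VPN-internal address assigned to this client
```

Once it’s up, a client on the VPN can reach any service on the home network without any of those services being exposed publicly.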
I’m sorry if this is a lot of information/terminology. Feel free to ask more questions.
How will a reverse proxy help?
Things that a reverse proxy is often used for: SSL termination, putting multiple services or hostnames behind one IP and port, load balancing across backends, and caching.
Do any of these match what you’re trying to accomplish? What do you hope to gain by adding a reverse proxy (or maybe some other software better suited to your need)?
Edit: you say you want to keep this service ‘private from the web’. What does that mean? Are you trying to have it so only clients you control can access your service? You say that you already have some services hosted publicly using port forwarding. What do you want to be different about this service? Assuming that you do need it to be secured/limited to a few known clients, you also say that these clients are too weak to run SSL. If that’s the case, then you have two conflicting requirements. You can’t simultaneously have a service that is secure (which generally means cryptographically) and also available to clients which cannot handle cryptography.
Apologies if I’ve misunderstood your situation
Could you post the specific output of the commands that don’t work? It’s almost impossible to help with just ‘it doesn’t work’. Like when ping fails, what’s the error message? Is it a timeout or a resolution failure? What does the resolvectl command I shared show on the laptop? If you enable logging on the DNS server, do you see the requests coming in when you run the commands that don’t work?
Does it resolve correctly from the laptop or the server? What about resolvectl query server.local on the laptop?
Isn’t .local an mdns auto-configured domain? Usually I think you are supposed to choose a different domain for your local DNS zone. But that’s probably not the source of the problem.
You’ll definitely still use a firewall, but there’s no need for NAT in almost all cases with ipv6. And even with a firewall, p2p becomes easier, even if you still have to do firewall hole punching.
I’ve set up okular signing and it worked, but I believe it was with an S/MIME certificate tied to my email (and not pgp keys). If you want I can try to figure out exactly what I did to make it work.
Briefly off the top of my head, I believe it was
I can’t remember if there was a way to do this with pgp certificates easily
From looking at the github, I think you don’t need to/want to host this publicly. It doesn’t automatically get and store your information. It’s more a tool for visualizing and cross-referencing the takeout/exported data from a variety of tech platforms. It’s just developed as a web app for ease of UI, cross-platform support, and local hosting.
Borg append only seems like the way to do this easily
I feel like this really depends on what hardware you have access to. What are you interested in doing? How long are you willing to wait for it to generate, and how good do you want it to be?
You can pull off like 0.5 words per second with one of the mistral models on a CPU with 32GB of RAM. The Stable Diffusion image models work okay with like 8-16GB of VRAM.
Second this router! It had the best CPU and antennas for the price when I last looked. I run zerotier as a VPN on it and it works great. Plenty of ram and flash for packages too.
Your ISP knows where you’re going anyway. They don’t need DNS for that. They see all the traffic.
As far as I’m aware, what you cited only proves that there is no ether that acts on light in a way such that the round trip time in the direction of ether travel is different from the round trip time in the direction perpendicular to ether travel.
It’s not merely that:
if somehow the movement of this medium caused the speed of light in one direction to be faster than in another, measuring the speed in two directions perpendicular to each other would reveal that difference.
Instead, what’s ruled out is only a speed of light that differs between the two directions in a way that makes the round trip time along one arm different from the round trip time along the other.
The ether theories of the time predicted exactly such a round trip difference, because of the wind-like interactions you describe.
I believe that this in no way proves anything about the one-way speed of light. The Michelson-Morley interferometer only measures differences in round trip time.
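For concreteness: with an arm of length $L$ and an ether wind of speed $v$ along one arm, the classical prediction for the two round trip times is

```latex
t_\parallel = \frac{L}{c - v} + \frac{L}{c + v} = \frac{2L}{c}\,\frac{1}{1 - v^2/c^2},
\qquad
t_\perp = \frac{2L}{\sqrt{c^2 - v^2}} = \frac{2L}{c}\,\frac{1}{\sqrt{1 - v^2/c^2}}
```

The experiment looks for the fringe shift implied by $t_\parallel \neq t_\perp$; a null result constrains only this round trip difference and says nothing about the one-way speeds individually.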
(Insert comment about the irony of your last statement). See https://en.m.wikipedia.org/wiki/One-way_speed_of_light
This is a really fantastic explanation of the issue!
It’s more like improv comedy with an extremely adaptable comic than a conversation with a real person.
One of the things that I’ve noticed is that the training/finetuning done to make models give good completions in the “helpful ai conversation scenario” flattens a lot of the underlying language model’s capability for really interesting and specific completions. I remember playing around with gpt2 in its native text completion mode, and even that much weaker model was able to complete a much larger variety of text styles without sliding into the sameness and slickness of the current chat model fine-tuning.
A lot of the research that I read on LLMs is using them in the original token completion context, but pretty much the only way people interact with them is through a thick layer of ai chatbot improv. As an example for code, I imagine that one would have more success using an LLM to edit your code if the context that you give it starts out written like it is a review of a pull request for the code, or some other commentary of a form that matches the way that code is reviewed in the training data. But instead of having access to create that context directly, we have to ask for code review through the fogged window of a chat between an AI assistant and a person discussing code. And that form of chat likely isn’t well represented in the training data.