local services accessible with/without ncat in public-bridge #53

Open
einsiedlerkrebs opened this issue Feb 8, 2024 · 21 comments

@einsiedlerkrebs
Contributor

I wonder if I have the correct understanding at all:

For public-bridge mode, I expect the created nomad-pot to be reachable via <POT.PUBLIC.BRIDGE.IP>:<PORT>. Looking at the code, this would be realized via a daemonized ncat, which should be copied to the pot's filesystem during start.

To me it looks like that does not work. I can neither see the daemonized process nor find the binary in the jail's root (the required setting localhost-tunnel is correctly set to yes).

Do I assume correctly that the services should be reachable via the public bridge? And if not, how can, for instance, consul check on them when the IPs are varying?

Thanks for your help!


pot 0.15.6
nomad-pot-driver 0.9.1
freebsd 13.2-RELEASE-p6

@einsiedlerkrebs
Contributor Author

einsiedlerkrebs commented Feb 21, 2024

I think this issue is more of a Layer 8 problem, therefore I am closing it. Still, I would be interested in how I can inspect the port mappings or redirects that nomad-pot-driver hands to pf, using FreeBSD's onboard tools. I expected a listing from, e.g.:

pfctl -s nat
pfctl -a 'pot-nat' -s nat
pfctl -a 'pot-rdr/*' -s rules
pfctl -a 'pot-rdr/*' -sr

Edit:
It seems that none of the above queries delivers the rules. The name of the exact anchor needs to be known:

pfctl -a 'pot-rdr/<POTNAME>_<POTTAG>' -sn
pfctl -s Anchors -a pot-rdr # lists the rdr anchors

works.

@einsiedlerkrebs closed this as not planned Feb 27, 2024
@grembo
Contributor

grembo commented Feb 27, 2024

@einsiedlerkrebs Sorry for not being responsive, busy days around here. You are correct that the tooling for anchors in pfctl is a bit annoying.

What you can do is:

pot show

which will list the port redirects of known running pots (making use of pfctl).

If you want to see pot's anchors (e.g., for redirects) without using pot itself, you can do

pfctl -a 'pot-rdr/*' -s Anchors

(case sensitive).

And then poke into each of them, e.g.:

pfctl -a 'pot-rdr/*' -s Anchors | xargs -n1 pfctl -s nat -a
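If you want the anchor name printed next to its rules, a small shell loop gives more readable output (plain sh, using the same pfctl flags as above):

for a in $(pfctl -a 'pot-rdr/*' -s Anchors); do
  echo "== $a"
  pfctl -s nat -a "$a"
done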

@einsiedlerkrebs
Contributor Author

einsiedlerkrebs commented Feb 29, 2024

Hi @grembo,

No problem. I see my issue here more as a support case than a bug anymore. Still, I wanted to keep it in order to document a little.
Besides your helpful hints on how to read the firewall anchors, I also learned that there is no easy way to avoid the running ncat process, since the redirects originate from localhost and only from FreeBSD 13.3 onwards can pf rules be applied to this local interface (https://www.freebsd.org/releases/13.3R/relnotes/).
Is this the reason why ncat is used in the first place?
In the minipot setup (nomad/consul/pot on the same host), the port checks fail when localhost_tunnel is disabled. There is no way to provide the services locally without ncat, right?

And BTW:

When the ncat process should be found via ps aux, one needs either a large screen or a small terminal font to be able to grep for the process.

@einsiedlerkrebs changed the title from "ncat in public-bridge mode?" to "local services accessible with/without ncat in public-bridge" Feb 29, 2024
@grembo
Contributor

grembo commented Feb 29, 2024

Is this the reason why ncat is used in the first place? In the minipot setup (nomad/consul/pot on the same host), the port checks fail when localhost_tunnel is disabled. There is no way to provide the services locally without ncat, right?

We run a cluster without ncat by using custom networking scripts - this is configured in pot.conf by setting POT_EXPORT_PORTS_PF_RULES_HOOK, see https://github.com/bsdpot/pot/blob/master/etc/pot/pot.default.conf#L53-L63

This allows us to have the local consul on the host talk to services, as well as services talk to each other, in a well-controlled way. The script I use is not published yet, as it's quite custom for what we do (it consists of creating a reflect jail on the host, which runs pf itself). If you're interested, I can share more details (it's a bit on the complex side though).
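For orientation, here is a minimal sketch of what such a hook could look like. Judging from the full example later in this thread, pot calls the hook once per exported port (argument order: external interface, bridge, pot network, gateway, protocol, host port, pot IP, pot port) and uses the pf rules it prints on stdout; the single rule below is purely illustrative, not the production ruleset:

#!/bin/sh
# Hypothetical POT_EXPORT_PORTS_PF_RULES_HOOK sketch (see the complete
# make-pf-rules.sh further down for the real thing).
extif=$1 bridge=$2 pot_net=$3 pot_gw=$4 proto=$5 host_port=$6 pot_ip=$7 pot_port=$8

# Illustrative rule: redirect traffic hitting the external interface directly
# to the pot, instead of going through the localhost ncat tunnel.
echo "rdr pass on $extif proto $proto from any to ($extif) port $host_port -> $pot_ip port $pot_port"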

And BTW:

When the ncat process should be found via ps aux, one needs either a large screen or a small terminal font to be able to grep for the process.

ww (as in wide-reallywide) is your friend, see man ps:

ps -auxww
     -w      Use at least 132 columns to display information, instead of the
             default which is the window size if ps is associated with a
             terminal.  If the -w option is specified more than once, ps will
             use as many columns as necessary without regard for the window
             size.  Note that this option has no effect if the “command”
             column is not the last column displayed.
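For example, to find the ncat processes without truncation (the bracket in the pattern just keeps grep from matching itself):

ps -auxww | grep '[n]cat'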

@grembo
Contributor

grembo commented Mar 5, 2024

@einsiedlerkrebs tagging you again to make sure you won't miss my response above.

@einsiedlerkrebs
Contributor Author

einsiedlerkrebs commented Mar 6, 2024

Hi Grembo.

ww (as in wide-reallywide) is your friend, see man ps

Yes. That seems friendly!

We run a cluster without ncat by using custom networking scripts - this is configured in pot.conf by setting POT_EXPORT_PORTS_PF_RULES_HOOK, see https://github.com/bsdpot/pot/blob/master/etc/pot/pot.default.conf#L53-L63

I have seen this hook in the code and indeed considered using it, but later thought I might come up with a dedicated ip_tunnel patch. The idea was to be able to tunnel traffic via ncat to one dedicated interface only (not all of them, as with localhost_tunnel), in order to make consul happy and advertise the service in DNS. But of course consul advertises the address it checks and does not get hold of the dynamically assigned IP address, which was my goal here.

At a glance at nomad-pot-driver, it looks like Nomad is not notified about the dynamic IP address pot chooses.
So since this straw also looks like a bit of work ahead, I am indeed interested.

If you're interested I can share more details

Of course I would like to achieve the least proprietary solution (within the frame of running the whole cloud system on one host ;)).
Yesterday FreeBSD 13.3 was released, which might be a way to avoid a reflector jail running pf. Still, I am curious about your solution, since I assume it is the most direct one, as you are deeply familiar with the information each component holds at a given point in time (pot, pf, nomad-pot-driver, nomad, consul).

Maybe the POT_EXPORT_PORTS_PF_RULES_HOOK together with FreeBSD's new capability of filtering on the lo0 interface might offer a generic approach worth writing up.

@einsiedlerkrebs
Contributor Author

einsiedlerkrebs commented Mar 6, 2024

Another idea that came up and could solve my issue is running consul in a jail with vnet. That should allow the firewall to apply the redirects as well.

@einsiedlerkrebs
Contributor Author

The idea above seems to lead to the same issue (of course?), because the consul server cannot be reached from localhost.

@einsiedlerkrebs
Contributor Author

@grembo since I didn't have any success yet, would you mind sharing the script you mentioned?

@grembo
Contributor

grembo commented Apr 30, 2024

@einsiedlerkrebs Ok, so here we go (might also be interesting to @pizzamig):

This setup scales pretty well, which means:

  1. You can have many nomad client nodes
  2. Each pot in each nomad client node can talk to every other pot running on each node
  3. Communication between pots is always over the announced service endpoint (even on the same host)
  4. No direct communication between pots on the same host (no shortcuts) => watching the service via consul works

The example is from one compute node in the cluster.

Configuration used:

pot.conf:

POT_NETWORK=10.192.0.0/10
POT_NETMASK=255.192.0.0
POT_GATEWAY=10.192.0.1
POT_EXTIF=compute
POT_DNS_IP=10.192.0.2
POT_TMP=/opt/pot/tmp
POT_EXPORT_PORTS_PF_RULES_HOOK=/path/to/bin/make-pf-rules.sh
POT_ISOLATE_VNET_POTS=true

make-pf-rules.sh:

#!/bin/sh

if [ -e /usr/local/etc/pot/pfrules.conf ]; then
  # shellcheck disable=SC1091
  . /usr/local/etc/pot/pfrules.conf
fi

if [ -z "$APP_IP" ]; then
  1>&2 echo "Error: Please configure APP_IP"
  exit 1
fi
: "${APP_BITMASK:="24"}"

if [ $# -ne 8 ]; then
  1>&2 echo "Usage: $0 POT_EXTIF _bridge _pot_net _pot_gateway _proto_port _host_port _ip _pot_port"
  1>&2 echo "Example: $0 cluster bridge1 10.192.0.0/10 10.192.0.1 tcp 32732 10.192.0.10 80"
  exit 1
fi

set -e
# shellcheck disable=SC3040
set -o pipefail

SCRIPT=$(readlink -f "$0")
SCRIPTDIR=$(dirname "$SCRIPT")

if ! jls -j pot-reflect >/dev/null 2>&1; then
	"$SCRIPTDIR/make-reflect-jail.sh" >/dev/null
fi

POT_EXTIF=$1; shift
_bridge=$1; shift
POT_NETWORK=$1; shift
POT_GATEWAY=$1; shift
_proto_port=$1; shift
_host_port=$1; shift
_ip=$1; shift
_pot_port=$1; shift

# custom redirect rules / can contain various ports depending on required payload
echo "rdr on $_bridge proto {tcp, udp} from $_ip to $POT_GATEWAY port 53 -> 127.0.0.1"
echo "rdr on $_bridge proto udp from $_ip to $POT_GATEWAY port 514 -> 127.0.0.1"
echo "rdr pass log on pot-reflect-a proto $_proto_port from 10.9.9.2 to $APP_IP port $_host_port -> $_ip port $_pot_port"
echo "rdr pass log on $POT_EXTIF inet proto tcp from $APP_IP/$APP_BITMASK to $APP_IP port $_host_port -> $_ip port $_pot_port"
echo "rdr log on $_bridge proto tcp from $POT_NETWORK to $APP_IP port $_host_port tag potnat -> $_ip port $_pot_port"
echo "nat log on $_bridge proto tcp from $POT_NETWORK to $_ip port $_pot_port tagged potnat -> $APP_IP"

Content of make-reflect-jail.sh:

#!/bin/sh

if [ -e /usr/local/etc/pot/pfrules.conf ]; then
  # shellcheck disable=SC1091
  . /usr/local/etc/pot/pfrules.conf
fi

for var in APP_BITMASK APP_IP CLUSTER_IF COMPUTE_IF NOMAD_IF UNTRUSTED_IF; do
  if [ -z "$(eval echo "\${$var}")" ]; then
    log "Error: Please configure $var"
    exit 1
  fi
done

set -e
# shellcheck disable=SC3040
set -o pipefail

SCRIPT=$(readlink -f "$0")
SCRIPTDIR=$(dirname "$SCRIPT")

fibs=$(sysctl -n net.fibs)

if [ "$fibs" -lt 2 ]; then
  sysctl net.fibs=2 >/dev/null
fi

JAILDIR="/root/reflectjail"

# cleanup existing jail if necessary
if jls -j pot-reflect >/dev/null 2>&1; then
  "$SCRIPTDIR"/destroy-reflect-jail.sh "$APP_IP/$APP_BITMASK" "$NOMAD_IF"
fi

# create jail and config
mkdir -p "$JAILDIR"
tar -C / -cf - bin etc/pf.os lib libexec sbin | tar -xf - -C "$JAILDIR/."
chflags -R noschg "$JAILDIR"
mkdir -p "$JAILDIR/dev"
cat <<EOF >"$JAILDIR/etc/pf.conf"
nat log on pot-reflect-b to $APP_IP -> pot-reflect-b:0
pass
EOF

# start jail
jail -c vnet name=pot-reflect host.hostname=pot-reflect \
  path="$JAILDIR" mount.devfs persist

# build vnet
epair_a=$(ifconfig epair create)
epair_b=$(echo "$epair_a" | sed "s/a$/b/")

ifconfig "$epair_a" inet 10.9.9.1/30 name pot-reflect-a >/dev/null
ifconfig "$epair_b" name pot-reflect-b >/dev/null

ifconfig pot-reflect-b vnet pot-reflect
jexec pot-reflect ifconfig pot-reflect-b 10.9.9.2/30
jexec pot-reflect sysctl net.inet.ip.forwarding=1 >/dev/null
jexec pot-reflect sysctl net.inet.ip.redirect=0 >/dev/null
jexec pot-reflect route add default 10.9.9.1 >/dev/null

# tune devfs and start pf
devfs -m "$JAILDIR/dev" rule apply path pf unhide
jexec pot-reflect pfctl -f /etc/pf.conf -eq

# route nomad range
route add "$APP_IP" 10.9.9.2 >/dev/null

# create pseudo interface for nomad to pickup
ifconfig lo create \
  inet "$APP_IP/$APP_BITMASK" \
  name "$NOMAD_IF" fib 1 >/dev/null

# add NAT for outbound cluster (no "sort before uniq" on purpose)
pfctl -qa reflect -F all
echo "
nat on $COMPUTE_IF to $APP_IP/$APP_BITMASK -> $APP_IP
nat on $CLUSTER_IF from 10.192/10 to ! $APP_IP/$APP_BITMASK -> $CLUSTER_IF:0
nat on $UNTRUSTED_IF from 10.192/10 to ! $APP_IP/$APP_BITMASK \
  -> $UNTRUSTED_IF:0
pass out quick log to $APP_IP rtable 0
pass from 10.192/10 to 10.192/10
pass from 10.9.9.2/32 to 10.192/10
pass from $APP_IP/32 to 10.192/10
" | uniq | pfctl -qa reflect -f -

Example pfrules.conf:

APP_BITMASK=24
APP_IP=10.29.0.3
CLUSTER_IF=cluster
COMPUTE_IF=compute
NOMAD_IF=nomad-pseudo
UNTRUSTED_IF=untrusted

NAT/RDR rules in the host's /etc/pf.conf:

nat-anchor reflect
rdr-anchor reflect

#nat-anchor pot-nat
rdr-anchor "pot-rdr/*"
nat-anchor "pot-rdr/*"

(no additional anchors, the ones created by pot by default don't apply)

Args to nomad:

nomad_enable="YES"
nomad_args="-config=/path/to/client.hcl -network-interface=nomad-pseudo"
nomad_user="root"
nomad_group="wheel"

Additional interfaces involved in our setup:

untrusted: public uplink
   ifconfig_em0_name=untrusted
compute: private vlan interface used to serve apps
cluster: private vlan interface used to communicate with nomad/vault/consul cluster
  vlans_untrusted="cluster compute"
  create_args_cluster="vlan 1001"
  ifconfig_cluster="inet 10.1.1.11/24"
  create_args_compute="vlan 1002"
  ifconfig_compute="inet 10.2.2.11/24"

Now, the way this works is:

  • nomad-pseudo is a local interface in fib 1 (so it's not reachable from fib 0) - it only exists so that nomad has an interface to look at (it plays no role in the setup beyond that, which is why it's in a different fib).
  • nomad uses nomad-pseudo's IP address (RDR rules are made for it) - any traffic to it is handled by pf rules though
  • when a pot is created, all the relevant pf rules are generated via the hook, so that
    • external traffic coming in over the compute interface is natted correctly into the pot
    • internal traffic (e.g., coming from consul) is routed over a transfer network between the reflect jail and the host (route add "$APP_IP" 10.9.9.2)
    • the reflect jail uses internal NAT to talk to the actual pots (see make-reflect-jail.sh)
    • the pot can reach a few services provided by the host over nat rules as well
  • Hosts participating in the cluster use static routes (e.g., route add 10.29.0.3/32 10.2.2.11) - a few commands to verify this wiring on a node are sketched below.
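To sanity-check the wiring on a node, only base-system tools are needed (names as in the scripts above):

# is the reflect jail running and is its pf ruleset loaded?
jls -j pot-reflect
jexec pot-reflect pfctl -s nat

# host-side reflect anchor and the transfer route towards the reflect jail
pfctl -a reflect -s nat
netstat -rn -f inet | grep 10.9.9

# per-pot redirect anchors created via the hook
pfctl -a 'pot-rdr/*' -s Anchors | xargs -n1 pfctl -s nat -a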

Now, these are a lot of bits and pieces, which might raise many more questions - I hope the general concept is clear. If it is, we could see how to streamline this into an example that works out of the box. Besides all the syntactic complication, the underlying concept is actually quite simple.

Traffic flow consul:

consul -> pot-reflect-jail -> nat -> rdr -> actual pot

Traffic flow internal:

src pot -> pot-reflect-jail -> nat -> rdr -> dest pot

Traffic flow external:

external -> compute_if -> rdr -> actual pot

It's been a while since I wrote it and it has been running stably in production for about two years now. The main advantages are:

  1. You can scale services easily (x instances on y nodes)
  2. You actually detect consul issues (like, if the DNS entry in consul does not match what is real, service checks will fail - a /etc/hosts based solution would mask the failure)

All of it is part of a bigger setup, which also features inbound and outbound load balancers and inter-service authentication using rotating client certificates - all of which is left out here for simplicity.

If you could share your basic setup, we could think about how to turn it into something like this without all the "big environment" elements, which make our setup potentially too complex for your needs.

@grembo
Contributor

grembo commented Apr 30, 2024

@pizzamig Maybe we could enable the discussions feature on the pot project, discussing this in an issue feels wrong.

@grembo
Contributor

grembo commented Apr 30, 2024

@einsiedlerkrebs So yeah, if you could share the essence of your setup, I could see if our approach could be scaled down to meet your needs and then maybe even prepackaged, so it's useful to many.

@einsiedlerkrebs
Contributor Author

einsiedlerkrebs commented May 1, 2024 via email

@einsiedlerkrebs
Contributor Author

einsiedlerkrebs commented May 2, 2024

Hey @grembo,

thanks for the insights into your setup. It looks very interesting, sophisticated and comfortable. Happy to have finally teased it out. Maybe it should get a more prominent location than this issue.


@einsiedlerkrebs So yeah, if you could share the essence of your setup

The setup is very basic and straightforward.

  • Nomad as a host service
  • consul in a manual pot (in order to meet nomad's dependency)
  • network/port-based applications running in orchestrated pots (via nomad jobs)
  • port service checking via consul, based on hostnames
  • Since the high-level nomad/consul stack is not aware of the IP addresses the pot network chooses when starting a pot, it is not an option to have consul check against an IP address that sits in Nomad's environment variables.
  • pot hostnames are suffixed with UUIDs, which are unknown as well

My setup is that a service is reachable under its task name. The task name is set as an alias in the nomad-pot task configuration. The alias configuration is read by potnet and can optionally be included in the /etc/hosts generation.
Pots can be flagged with dynamic-etc-hosts, which marks their /etc/hosts file to be kept up to date.

When Nomad/Pot starts or stops a service, it checks all pots for whether dynamic hosts are enabled and adds/removes the /etc/hosts entries accordingly.

When a pot service comes up, the /etc/hosts file in consul's pot is synced, and consul can check the service by task name, which resolves to the IP inside the pot network.

No redirects, no extra (ncat) processes.
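To illustrate (the attribute and alias names are taken from the commands further down in this thread; the IP and the exact entry format are made up for the example):

pot set-attribute -A dynamic-etc-hosts -V YES -p consul_117
# hypothetical entry that ends up in the consul pot's /etc/hosts:
# 10.192.0.57   sensorProxy.local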

This approach is definitely something else than what you described about your setup, but for a small single-host setup it still brings a lot of pot/nomad/consul comfort, and to me it also looks like a clean and transparent way.


I would of course be happy if you consider my approach useful enough (at least for me and others) to get it merged into the necessary components.

@grembo
Contributor

grembo commented May 2, 2024

@einsiedlerkrebs It would be good if you could be more specific about your setup (like, give one or two example job configurations, how they interact etc.), as I would like to replicate your setup locally and see what we can do to make it work smoothly. The pure /etc/hosts workaround is nice, but IMHO in its current form it is too limited, as it will break a couple of nomad's features. I think your setup is a good real-life sized example we could derive from though, so maybe taking all the input from this thread + a good example setup, we could forge a solution that is more general purpose (or at least write some good howtos).

Just to confirm:
Your main concern is to be able to reach services from the jailhost (primarily, but not only, for consul checks), correct?

@einsiedlerkrebs
Contributor Author

einsiedlerkrebs commented May 6, 2024

Okay, @grembo, I will try.

Nomad is running on the host

consul

I am running consul manually:

# (csh syntax) parameters for the consul pot
set sysver=13.2.2 build=117 template=consul_template ipaddr=10.192.0.254 name=consul
# create the pot from the template image, on the public bridge with a fixed IP
pot prepare -U file:///opt/pot-images/ -p $template -t ${sysver}.${build} -c "/usr/local/bin/cook-${name}" -N public-bridge -a ${build} -i ${ipaddr} -n ${name} -v
# mount config (read-only) and data directories into the pot
pot mount-in -r -d /opt/pot-mounts/${name} -m /config -p ${name}_${build}
pot mount-in -d /opt/pot-mounts/${name}data/ -m /data -p ${name}_${build}
# let pot keep the pot's /etc/hosts up to date
pot set-attribute -A dynamic-etc-hosts -V YES -p ${name}_${build}
pot set-attribute -A no-etc-hosts -V NO -p ${name}_${build}
# export consul's HTTP API (8500/tcp) and DNS (8600/udp) ports
pot export-ports -p ${name}_${build} -e tcp:8500:8500 -e udp:8600:8600
# verify and start
pot info -vp ${name}_${build}
pot run ${name}_${build}
# ...

(the cook script runs consul)

some service (http)

This runs nginx

job "dev" {
  datacenters = ["dc1"]
  type        = "service"
  reschedule {
    # this disables rescheduling
    attempts  = 0
    unlimited = false
  }
  group "dev" {
    count = 1

    restart {
      attempts = 3
      delay    = "15s"
      interval = "1m"
      mode     = "delay"
    }

    network {
      # https://developer.hashicorp.com/consul/docs/install/ports
      port "sensor_proxy" {
        static = 80
      }
    }

    task "sensorProxy" {
      driver = "pot"
      service {
        address = "${NOMAD_TASK_NAME}.local"
        tags = ["sensors", "transport"]
        name = "sensorProxy"
        port = "sensor_proxy"
        check {
          type = "http"
          name = "http for sensors"
          port = "sensor_proxy"
          path = "/"
          interval = "30s"
          timeout = "1s"
        }
      }
      config {
        image        = "file:///opt/pot-images/"
        pot          = "access"
        tag          = "13.2.2.117"
        command = "/usr/local/bin/cook-sensors"
        network_mode = "public-bridge"
        port_map = {"sensor_proxy" = "80"}
        mount_read_only = [ "/opt/pot-mounts/sensors:/config" ]
        attributes = ["localhost-tunnel:NO", "no-etc-hosts:NO", "dynamic-etc-hosts:YES"]
        aliases = ["${NOMAD_TASK_NAME}.local"]
      }
    }
  }
}
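To deploy the job and verify the check from the host side, something like this works (standard nomad and dig invocations; the job file name dev.nomad is assumed, and the dig query assumes consul's DNS listener is reachable on the pot address exported above):

nomad job run dev.nomad
nomad job status dev
# ask consul's DNS (10.192.0.254:8600 from the consul pot setup) for the service
dig @10.192.0.254 -p 8600 sensorProxy.service.consul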

Next are application containers, which depend on the http service and can reach it by name (alias). This can already be tested from within the consul pot.

pot run consul_117 
# (inside the consul pot)
ping sensorProxy.local

consul can check against a name as well:

HTTP GET http://sensorProxy.local:80/: 204 No Content Output: 

@grembo
Contributor

grembo commented May 14, 2024

@einsiedlerkrebs Thank you for sharing.

Questions:

  • So do I understand correctly that you're running consul in a pot?
  • What does your network config look like (/etc/rc.conf should probably be sufficient)?
  • For the sake of simplicity: I assume sensorProxy could be replaced by a plain nginx server, correct?

For comparison, this is what our setup looks like:

  • Five servers running vault, consul and nomad server processes (raft)
  • N servers running the actual software, with consul clients and nomad clients running on those servers directly (this allows us to lose up to two server nodes, and nomad can schedule payloads within the cluster depending on various conditions and also migrate payloads in case a node goes down)

The minimum setup you're looking for is:

  • web server running as a pot, controlled by nomad
  • consul running as a pot, but not controlled by nomad (note: I never mixed those two on the same host, our nodes either run pot manually or nomad-controlled)
  • consul should be able to do health checks of the web server
  • your webserver should be reachable from the outside world (?)

@einsiedlerkrebs
Contributor Author

einsiedlerkrebs commented May 14, 2024

Hi Grembo,

thanks for your reply.

So do I understand correctly that you're running consul in a pot?

Yes.

What does your network config look like (/etc/rc.conf should probably be sufficient)?

# cat /etc/rc.conf| grep if
ifconfig_lo10_name="stove"
ifconfig_stove="10.192.255.1/24"
ifconfig_vtnet0=DHCP

Interface stove is where nomad binds to.
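For reference, the matching nomad argument follows the same pattern as in your rc.conf example above, just with the interface name swapped (path copied from that example):

nomad_args="-config=/path/to/client.hcl -network-interface=stove"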

For the sake of simplicity: I assume sensorProxy could be replaced by a plain nginx server, correct?

Yes, exactly. Didn't change the name.

The minimum setup you're looking for is:
web server running as a pot, controlled by nomad

Yes.

consul running as a pot, but not controlled by nomad (note: I never mixed those two on the same host, our nodes either run pot manually or nomad-controlled)

This is for agnostic reasons. In this case it helps that pot controls its /etc/hosts file. It is not controlled by nomad because of Nomad's dependency on Consul.

consul should be able to do health checks of the web server

Yes. And provide DNS.

your webserver should be reachable from the outside world (?)

The main issue I am struggling with is that it should be reachable from another, let's say, application pot next to the nginx pot. Likewise, a database pot next to it should be reachable from within an application pot, just as the application pot should be reachable from the nginx pot: Consul -> DB <- App <- Nginx <- WWW.
The goal is to make the App and DB pot IPs resolvable to their actually dynamic addresses inside the pot network.

The nginx should be reachable from the outside, but I don't mind maintaining this firewall rule.


@einsiedlerkrebs
Contributor Author

Hi @grembo,

now that some time has passed, I am curious whether there is any news.

Wishing you a pleasant day, and thanks.

einsiedlerkrebs

@grembo
Contributor

grembo commented Jul 11, 2024

Hi, just as an update: I spent a few hours last week preparing a small example, but it's not self-contained/reproducible yet.

@einsiedlerkrebs
Contributor Author

Hi, I am happy to hear that! Do you have questions, or can I help out with something?
