Fleet API self-usage / run own Fleet Telemetry Server #1401
I'm not supporting self-hosted Developer instances myself, but I'm happy if anybody contributes a pull request. You just need to forward the telemetry data to the class TelemetryConnection:handleMessage(string content).
Kafka is not the right dispatcher for a small environment, but there is a pull request for an MQTT dispatcher. For a small number of cars that should be enough: teslamotors/fleet-telemetry#220. What I can do is separate the connection to my telemetry server from the parser, so it is much easier for you. After everything is done, I can provide you with the latest config I am sending to the vehicles to get the same result. Maybe you can find a small team and share the todos?
I know somebody who knows how to securely install Cloudflare Tunnel with Docker Compose. He will help us add the Tesla Proxy stuff and so on. Let's see how it goes. Cloudflare is IMHO the best solution for such a setup, because it handles certificates automatically, no ports need to be opened in the local router, and so on.
I can double-check what you are doing.
I'll try to strip Kafka etc. from the telemetry server and add the MQTT datastore from erwin314. I hope this will run as a service on a Raspberry Pi. It would be nice if TeslaLogger could support MQTT as a source for the telemetry data. Additionally, I'm planning to add the authorization callback handler and the public key URL to the public port, plus a small local web UI to register the application and do the authorization. So everything should be contained in a single service. I'll let you know when I have something working.
It is in fact quite easy to run the fleet-telemetry server on a Raspi:
I'm not happy with the MQTT implementation from erwin314 because he splits all received values into individual single-value topics. I'll create a fork on GitHub when I'm finished. Maybe I can finally have GitHub create a .deb package to install it all as a Linux service. Currently I send the configuration with some quick hacks. I'll create a separate service for handling the authorization callback, token refresh and sending of the configuration to the vehicle.
Yes, you also need to set up the proxy command, for the auth token and for sending the config.
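For orientation, the configuration sent to the vehicle through the proxy goes to the Fleet API's fleet_telemetry_config endpoint. The shape below follows Tesla's Fleet API documentation, but the VIN, hostname, and field list are placeholders for illustration, not anyone's actual config:

```json
{
  "vins": ["<your-vin>"],
  "config": {
    "hostname": "<your-domain-name>",
    "port": 4444,
    "ca": "<PEM chain of your server certificate>",
    "fields": {
      "VehicleSpeed": { "interval_seconds": 10 },
      "Soc": { "interval_seconds": 60 }
    }
  }
}
```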
https://github.com/mgerczuk/fleet-telemetry-raspi has the compilable source, still with single-value MQTT. When you use Let's Encrypt with Apache, a valid config.json looks like:

```json
{
  "host": "",
  "port": 4444,
  "log_level": "debug",
  "logger": { "verbose": true },
  "mqtt": {
    "broker": "<your mqtt server>:1883",
    "client_id": "client-1",
    "topic_base": "telemetry",
    "qos": 1,
    "retained": false,
    "connect_timeout_ms": 30000,
    "publish_timeout_ms": 1000
  },
  "records": {
    "alerts": ["mqtt"],
    "errors": ["mqtt"],
    "V": ["mqtt"]
  },
  "tls": {
    "server_cert": "/etc/letsencrypt/live/<your domain name>/fullchain.pem",
    "server_key": "/etc/letsencrypt/live/<your domain name>/privkey.pem"
  }
}
```

Either place the config.json in the folder with […] Now you "only" have to open port 4444 in your router and instruct your Tesla to send the telemetry data to <your-domain-name>:4444 :-)
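On the consuming side, TeslaLogger would mostly need to take the VIN and field name apart from each single-value MQTT topic. A minimal Go sketch, assuming a hypothetical topic layout of `telemetry/<vin>/v/<field>` under the `topic_base` above (check the fork's README for the real scheme):

```go
package main

import (
	"fmt"
	"strings"
)

// splitTopic extracts the VIN and field name from a single-value MQTT
// topic. The "telemetry/<vin>/v/<field>" layout is an assumption for
// illustration; the fork may use a different scheme.
func splitTopic(topic string) (vin, field string, ok bool) {
	parts := strings.Split(topic, "/")
	if len(parts) != 4 || parts[0] != "telemetry" || parts[2] != "v" {
		return "", "", false
	}
	return parts[1], parts[3], true
}

func main() {
	vin, field, ok := splitTopic("telemetry/5YJ3E1EA7KF000000/v/VehicleSpeed")
	fmt.Println(vin, field, ok)
}
```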
I'll write my own "proxy", since something must periodically refresh the tokens, and I want a more user-friendly way of updating the config. Of course that means I will have to adapt to possible Tesla API changes, but I hope they won't happen too often. And I've tasted blood coding with Go, so it'll be fun to write!
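The periodic refresh itself is a single form POST against Tesla's token endpoint, per Tesla's third-party-tokens documentation. A minimal Go sketch of such a loop; the function names are illustrative and parsing of the JSON response (access_token, refresh_token) is omitted:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"time"
)

const tokenURL = "https://auth.tesla.com/oauth2/v3/token"

// buildRefreshForm assembles the form body for a refresh-token grant
// as described in Tesla's third-party-tokens documentation.
func buildRefreshForm(clientID, refreshToken string) url.Values {
	return url.Values{
		"grant_type":    {"refresh_token"},
		"client_id":     {clientID},
		"refresh_token": {refreshToken},
	}
}

// refreshLoop posts the form on every tick; real code would parse the
// response, persist the rotated refresh token, and back off on errors.
func refreshLoop(clientID, refreshToken string, interval time.Duration) {
	for range time.Tick(interval) {
		resp, err := http.PostForm(tokenURL, buildRefreshForm(clientID, refreshToken))
		if err != nil {
			continue // retry on the next tick
		}
		resp.Body.Close()
	}
}

func main() {
	// Offline demo: just show the encoded request body.
	fmt.Println(buildRefreshForm("<client-id>", "<refresh-token>").Encode())
}
```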
I separated the connection to my telemetry server from the parser, so it is now very easy for you to exchange the transport protocol and just feed the parser via handleMessage(). Next benefit: it's now very easy to write some unit tests. Let me know if you need more help.
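The separation described here amounts to a small transport interface in front of the parser. The Go sketch below is illustrative only (TeslaLogger itself is not written in Go, and these names are made up), but it shows why unit testing becomes easy: a fake transport can replay canned messages into the parser.

```go
package main

import "fmt"

// Transport delivers raw telemetry payloads; the parser only sees strings.
// Swapping WebSocket for MQTT then means writing a new Transport, while
// the message handler stays untouched.
type Transport interface {
	Run(handle func(content string)) error
}

// Parser mirrors the TelemetryConnection handleMessage(string) entry
// point mentioned above (names here are illustrative).
type Parser struct {
	Seen []string
}

func (p *Parser) HandleMessage(content string) {
	p.Seen = append(p.Seen, content)
}

// fakeTransport replays canned messages -- handy for unit tests.
type fakeTransport struct{ msgs []string }

func (f fakeTransport) Run(handle func(string)) error {
	for _, m := range f.msgs {
		handle(m)
	}
	return nil
}

func main() {
	p := &Parser{}
	t := fakeTransport{msgs: []string{`{"VehicleSpeed":42}`}}
	t.Run(p.HandleMessage)
	fmt.Println(len(p.Seen)) // number of messages handled
}
```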
I'm waiting for Tesla confirmation.
two weeks :-)
What is the Cloudflare Tunnel for? Does it just enable the Docker container to be used behind NAT as well? So if I want to host it on my web server with a public IP (in Docker or directly), I don't need it?
Cloudflare Tunnel creates a secure connection to your server without opening ports or creating certificates.
For someone who has already set up the Tesla HTTP Proxy in Home Assistant, does it serve the same purpose as the Tesla-Fleet-Helper?
to @jjjasont: If you have already gone through the Tesla HTTP Proxy in Home Assistant, you should have all the necessary keys, yes. The idea of fleet-helper is to automate the process of generating keys and registering them as a third-party application. The Home Assistant proxy also goes further and implements the actual proxy itself, while with fleet-helper this is still work in progress. Once I and @Adminius get confirmation from Tesla that our third-party app is registered, we will work on integrating TeslaProxy into TeslaLogger. In other words, Fleet-Helper is a prerequisite for these efforts.
to @marki555: the script was developed specifically for use with Cloudflare, because that is the easiest way and doesn't require a VPS. I could add an option to skip the Cloudflare step, but then the user will have to provide valid SSL certificates and make sure that port 443 is available, as Tesla requires HTTPS for public-key.pem.
What do you need Tesla confirmation for? I followed https://developer.tesla.com/docs/fleet-api/getting-started/what-is-fleet-api and https://developer.tesla.com/docs/fleet-api/authentication/third-party-tokens and got my tokens without any delay. The Cloudflare tunnel looks interesting! But I guess it costs money? P.S. my Raspi service https://github.com/mgerczuk/fleet-telemetry-raspi runs smoothly. The configuration tool https://github.com/mgerczuk/fleet-telemetry-config is barely working and looks really awful. If someone wants to help with the html...? Maybe you can also use tesla-fleet-helper to send the configuration to the car. |
It does not. Cloudflare Tunnel's free tier is way more than needed for our purposes.
One of the sources of inspiration for my script was these instructions. And the guide says: "Once this is submitted, Tesla will process the CSR and update your account on the backend accordingly. It may take a few weeks to process". Maybe it is outdated; I will have to check. Maybe everything is working already and there is no need to wait for any further confirmations. Actually, let me try your fleet-telemetry-raspi. I guess we should sync up offline and join efforts.
Yes, maybe separate the creation of the public/private key and the app from the Cloudflare tunnel/hosting (so it would just create the keys, pause, ask the user to manually copy the keys to the correct web server for a domain, and then continue with checking whether the key is hosted, and the next steps). I tried to follow the script manually, and I have created the keys and hosted them on my subdomain via HTTPS. However, the next step is a little misleading, as it says to just create a developer account and provide the client_id. As far as I understand, I also had to create an app on the developer portal and provide the client_id/secret of the app, not of the developer account itself (the confusion may have arisen from the instructions you linked in your previous step, which are from a year ago; the process is now a little different). The script then showed the response from Tesla, but I'm not sure how to tell whether it was successful (the scr, issuer and ca fields are null).
Creating the keys is just two lines in bash; I don't see any point in separating that from the script. Furthermore, what you are describing here is pretty much a manual step-by-step process, and IMO doing that defeats the purpose of the script. If you can do what you described above, you can do everything manually.
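Those two bash lines (openssl key generation) also have a direct equivalent in Go's standard library, which is handy if a helper tool wants to avoid shelling out. A sketch, assuming the prime256v1 (P-256) curve Tesla expects for the hosted public key:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// generateKeyPair returns PEM-encoded private and public keys on the
// prime256v1 (P-256) curve. The public half is what gets served at
// /.well-known/appspecific/com.tesla.3p.public-key.pem.
func generateKeyPair() (privPEM, pubPEM []byte, err error) {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	privDER, err := x509.MarshalECPrivateKey(priv)
	if err != nil {
		return nil, nil, err
	}
	pubDER, err := x509.MarshalPKIXPublicKey(&priv.PublicKey)
	if err != nil {
		return nil, nil, err
	}
	privPEM = pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: privDER})
	pubPEM = pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: pubDER})
	return privPEM, pubPEM, nil
}

func main() {
	_, pub, err := generateKeyPair()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pub))
}
```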
Since yesterday I have telemetry running on my local server. At the moment we did a manual installation with an opened port. Now I can focus on the connection between the local telemetry server and TeslaLogger, and @yvolchkov will rewrite the script/helper for easier installation. The good thing: you do not have to wait for Tesla approval anymore :)
That's not entirely correct, though. It still does work. However, the plan we had for telemetry crashed hard against mTLS. There's still a chance that we can make it work with CF tunnels; port forwarding shall be the last resort in case we fail. Alternatively, tunnel support will be added later, depending on the complexity of the effort.
Status update:
One more update: Next steps:
I have read that the Fleet API is free for the owner of the vehicle, with some sane limits. Isn't it possible for TeslaLogger to use my Fleet API developer credentials? I guess it requires self-hosting some daemon that listens for connections from Tesla's servers, but that should not be an issue, at least for the users who run TeslaLogger in Docker. Or is getting a Fleet API developer account somehow limited to big developer companies only?