
Fleet API self-usage / run own Fleet Telemetry Server #1401

Open
marki555 opened this issue Dec 9, 2024 · 23 comments

Comments

@marki555

marki555 commented Dec 9, 2024

I have read that the Fleet API is free for the owner of the vehicle, with some sane limits. Isn't it possible for TeslaLogger to use my own Fleet API developer credentials? I guess it requires self-hosting a daemon which listens for connections from Tesla's servers, but that should not be an issue, at least for the users who run TeslaLogger in Docker. Or is getting a Fleet API developer account somehow limited to big developer companies only?

@bassmaster187
Owner

bassmaster187 commented Dec 9, 2024

I don't support own developer instances, but I'm happy if anybody contributes a pull request.

You just need to forward the telemetry data to TelemetryConnection.handleMessage():

private void handleMessage(string resultContent)

Kafka is not the right dispatcher for a small environment, but there is a pull request for an MQTT dispatcher. For a small number of cars that should be enough.

teslamotors/fleet-telemetry#220

What I can do is separate the connection to my telemetry server from the parser, so it is much easier for you.
But that will happen in January, because I have a bunch of work until everything is working.

After everything is done, I can provide you with the latest config I am sending to the vehicles to get the same result.

Maybe you can find a small team and you can share todos?

@Adminius

@Adminius
Contributor

Adminius commented Dec 9, 2024

I know somebody who knows how to securely install a Cloudflare Tunnel with Docker Compose. He will help us add the Tesla proxy stuff and so on. Let's see how it goes.

Cloudflare is IMHO the best solution for such a setup, because it handles certificates automatically, no ports need to be opened on the local router, and so on.
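For reference, a cloudflared ingress configuration exposing a local server that hosts the public key could look roughly like this sketch; the tunnel ID, hostname and local port below are placeholders, not values from this thread:

```yaml
# ~/.cloudflared/config.yml -- minimal sketch, all values are placeholders
tunnel: <tunnel-uuid>
credentials-file: /home/pi/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: tesla.example.com
    service: http://localhost:8080   # local server hosting the public key
  - service: http_status:404         # catch-all rule required by cloudflared
```

Cloudflare then terminates TLS at its edge, which is why no local certificate is needed.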

@bassmaster187
Owner

I can double-check what you are doing

@mgerczuk

I'll try to strip Kafka etc. from the telemetry server and add the MQTT datastore from erwin314. I hope this will run as a service on a Raspberry Pi. It would be nice if TeslaLogger could support MQTT as a source for the telemetry data.

Additionally, I'm planning to add the authorization callback handler and the public key URL to the public port, plus a small local web UI to register the application and do the authorization. So everything should be contained in a single service.

I'll let you know when I have something working.

@mgerczuk

It is in fact quite easy to run the fleet-telemetry server on a Raspi:

  • Check out the Tesla repository on the Raspi
  • Merge the erwin314 MQTT changes (more on that later)
  • Remove Kafka; it does not compile on a Raspberry Pi or on any 32-bit system
  • Run 'go build' in the cmd folder to get a >25 MB (!) binary
  • Create a config.json with 'mqtt' settings. Use 'mqtt' in the records/V array, optionally also in records/alerts and records/errors
  • Run './cmd'
  • To have it run automatically, install it as a Linux service.
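The last step, running it as a Linux service, could be done with a systemd unit along these lines; the paths and user below are illustrative placeholders:

```ini
# /etc/systemd/system/fleet-telemetry.service -- sketch, paths are placeholders
[Unit]
Description=Tesla Fleet Telemetry server
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/home/pi/fleet-telemetry/cmd/cmd -config /home/pi/fleet-telemetry/config.json
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now fleet-telemetry`.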

I'm not happy with the MQTT implementation from erwin314, because it splits each received data record into single values and drops the original created_at timestamp. As far as I understand, TeslaLogger gets the original data record forwarded converted to JSON. I'll try to change that.
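Keeping the whole record in one message could look like the following sketch; the topic layout and field names here are illustrative assumptions, not erwin314's actual format:

```python
import json

def build_mqtt_message(vin, record):
    # Publish the whole record as one JSON payload so the vehicle-side
    # created_at timestamp survives, instead of splitting it into
    # per-field topics.
    topic = f"telemetry/{vin}/V"
    payload = json.dumps({
        "createdAt": record["createdAt"],  # original timestamp, kept as-is
        "data": record["data"],            # the full list of field/value pairs
    })
    return topic, payload

# Example record shaped like a decoded fleet-telemetry payload (illustrative):
record = {
    "createdAt": "2025-01-03T12:00:00Z",
    "data": [{"key": "VehicleSpeed", "value": {"doubleValue": 88.0}}],
}
topic, payload = build_mqtt_message("5YJ3E1EA7KF000000", record)
```

A downstream consumer such as TeslaLogger can then parse one message per record and read the timestamp from the payload itself.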

I'll create a fork on GitHub when I'm finished. Maybe I can finally have GitHub create a .deb package to install it all as a Linux service.

Currently I send the configuration with some quick hacks. I'll create a separate service for handling the authorization callback, token refresh and sending the configuration to the vehicle.

@bassmaster187
Owner

Yes, you also need to set up the proxy command for the auth token and for sending the config

@mgerczuk

https://github.com/mgerczuk/fleet-telemetry-raspi has the compilable source, still with single-value MQTT.

When you use Let's Encrypt with Apache, a valid config.json looks like this:

{
  "host": "",
  "port": 4444,
  "log_level": "debug",
  "logger": {
    "verbose": true
  },
  "mqtt": {
    "broker": "<your mqtt server>:1883",
    "client_id": "client-1",
    "topic_base": "telemetry",
    "qos": 1,
    "retained": false,
    "connect_timeout_ms": 30000,
    "publish_timeout_ms": 1000
  },
  "records": {
    "alerts": ["mqtt"],
    "errors": ["mqtt"],
    "V": ["mqtt"]
  },
  "tls": {
    "server_cert": "/etc/letsencrypt/live/<your domain name>/fullchain.pem",
    "server_key": "/etc/letsencrypt/live/<your domain name>/privkey.pem"
  }
}

Either place the config.json in the folder with the cmd executable or specify its location with './cmd -config path-to-config.json'. You may rename the cmd executable, of course.

Now you "only" have to open port 4444 in your router and instruct your Tesla to send the telemetry data to <your-domain-name>:4444 :-)

@mgerczuk

Yes, you also need to set up the proxy command for the auth token and for sending the config

I'll write my own "proxy", since something must periodically refresh the tokens and I want a more user-friendly way of updating the config. Of course that means I will have to adapt to possible Tesla API changes, but I hope they won't happen too often.

And I've gotten a taste for coding in Go, so it'll be fun to write!
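Such a periodic token refresh boils down to one POST against the Tesla auth endpoint documented for Fleet API third-party tokens. A minimal sketch that only builds the request (the client_id and refresh token values are placeholders, and actually POSTing plus storing the new tokens is left out):

```python
import urllib.parse

# Endpoint as documented for Fleet API third-party token refresh.
TOKEN_URL = "https://auth.tesla.com/oauth2/v3/token"

def build_refresh_request(client_id, refresh_token):
    # Build the form-encoded body for a token refresh; sending it and
    # persisting the returned access/refresh tokens is up to the caller.
    form = {
        "grant_type": "refresh_token",
        "client_id": client_id,
        "refresh_token": refresh_token,
    }
    return TOKEN_URL, urllib.parse.urlencode(form)

url, body = build_refresh_request("my-client-id", "my-refresh-token")
```

The response contains a fresh access token and a new refresh token, so the stored refresh token must be replaced after every call.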

@bassmaster187
Owner

I separated the connection to my telemetry server from the parser, so it is now very easy for you to exchange the transport protocol and just feed the parser via handleMessage().

Next benefit: it's now very easy to write some unit tests.

Let me know if you need more help

28a52dd

@Adminius
Contributor

Adminius commented Jan 2, 2025

I'm waiting for Tesla confirmation

https://github.com/yvolchkov/tesla-fleet-helper

@bassmaster187
Owner

two weeks :-)

@marki555
Author

marki555 commented Jan 2, 2025

I'm waiting for Tesla confirmation

https://github.com/yvolchkov/tesla-fleet-helper

What is the Cloudflare tunnel for? Does it just enable the Docker container to also be used behind NAT? So if I want to host it on my webserver with a public IP (in Docker or directly), I don't need it?
So can I just follow your script to generate the keys, host them and register at Tesla?

@Adminius
Contributor

Adminius commented Jan 2, 2025

Cloudflare Tunnel creates a secure connection to your server without opening ports or creating certificates.
The script requires a Cloudflare Tunnel. If you don't want to use it, you have to find another way.

@jjjasont
Contributor

jjjasont commented Jan 3, 2025

For someone who has already set up the Tesla HTTP Proxy in Home Assistant, does it serve the same purpose as Tesla-Fleet-Helper?

@yvolchkov

yvolchkov commented Jan 3, 2025

For someone who has already set up the Tesla HTTP Proxy in Home Assistant, does it serve the same purpose as Tesla-Fleet-Helper?

to @jjjasont: If you have already gone through the Tesla HTTP Proxy setup in Home Assistant, you should have all the necessary keys, yes. The idea of fleet-helper is to automate the process of generating the keys and registering them as a third-party application.
The registration process is complex and requires many steps. I tried to make it as simple as possible with this script.

Also, the Home Assistant proxy goes further and implements the actual proxy itself, while with fleet-helper this is still a work in progress. Once I and @Adminius get confirmation from Tesla that our third-party app is registered, we will work on integrating the Tesla proxy into TeslaLogger. In other words, Fleet-Helper is a prerequisite for this effort.

What is the Cloudflare tunnel for? Does it just enable the Docker container to also be used behind NAT? So if I want to host it on my webserver with a public IP (in Docker or directly), I don't need it?
So can I just follow your script to generate the keys, host them and register at Tesla?

to @marki555: the script was developed specifically for use with Cloudflare because that is the easiest way and doesn't require a VPS. I could add an option to skip the Cloudflare step, but the user would then have to provide valid SSL certificates and make sure that port 443 is reachable, as Tesla requires HTTPS for public-key.pem.
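For context, Tesla fetches the app's public key from a fixed well-known path on the registered domain, which is why HTTPS on port 443 with a valid certificate is mandatory. A tiny helper to build that URL:

```python
def public_key_url(domain):
    # Tesla fetches the app's public key from this fixed well-known path
    # over HTTPS (port 443), which is why a valid certificate is required.
    return (f"https://{domain}"
            "/.well-known/appspecific/com.tesla.3p.public-key.pem")

url = public_key_url("example.com")
```

Checking that this URL serves the hosted public-key.pem is a useful sanity test before attempting registration.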

@mgerczuk

mgerczuk commented Jan 3, 2025

What do you need Tesla confirmation for? I followed https://developer.tesla.com/docs/fleet-api/getting-started/what-is-fleet-api and https://developer.tesla.com/docs/fleet-api/authentication/third-party-tokens and got my tokens without any delay.

The Cloudflare tunnel looks interesting! But I guess it costs money?

P.S. my Raspi service https://github.com/mgerczuk/fleet-telemetry-raspi runs smoothly. The configuration tool https://github.com/mgerczuk/fleet-telemetry-config is barely working and looks really awful. If someone wants to help with the HTML...?

Maybe you can also use tesla-fleet-helper to send the configuration to the car.

@yvolchkov

But I guess it costs money?

It does not. The Cloudflare Tunnel free tier is way more than needed for our purposes.

What do you need Tesla confirmation for?

One of the sources of inspiration for my script was these instructions. And the guide says: "Once this is submitted, Tesla will process the CSR and update your account on the backend accordingly. It may take a few weeks to process". Maybe that is outdated; I will have to check, maybe everything is already working and there is no need to wait for any further confirmation.

Actually, let me try your fleet-telemetry-raspi. I guess we should sync up offline and join efforts.

@bassmaster187 bassmaster187 changed the title from "Fleet API self-usage" to "Fleet API self-usage / run own Fleet Telemetry Server" on Jan 3, 2025
@marki555
Author

marki555 commented Jan 4, 2025

to @marki555: the script was developed specifically for use with Cloudflare because that is the easiest way and doesn't require a VPS. I could add an option to skip the Cloudflare step, but the user would then have to provide valid SSL certificates and make sure that port 443 is reachable, as Tesla requires HTTPS for public-key.pem.

Yes, maybe separate the creation of the public/private key and the app from the Cloudflare tunnel/hosting (so it would just create the keys, pause and ask the user to manually copy the keys to the correct webserver for a domain, and then continue with checking whether the key is hosted and with the next steps).

I tried to follow the script manually and I have created the keys and hosted them on my subdomain via HTTPS. However, the next step is a little misleading, as it says to just create a developer account and provide the client_id. As far as I understand, I also had to create an app on the developer portal and provide the client_id/secret of the app, not of the developer account itself (the confusion may have arisen from the instructions you linked in your previous post, which are from a year ago; the process is now a little different).

The script then showed the response from Tesla, but I'm not sure how to tell whether it was successful (the scr, issuer and ca fields are null).

@yvolchkov

yvolchkov commented Jan 4, 2025

Yes, maybe separate the creation of the public/private key and the app from the Cloudflare tunnel/hosting (so it would just create the keys, pause and ask the user to manually copy the keys to the correct webserver for a domain, and then continue with checking whether the key is hosted and with the next steps).

Creating the keys is just two lines of bash; I don't see any point in separating that from the script. Furthermore, what you are describing here is pretty much a manual step-by-step process. IMO doing that defeats the purpose of the script. If you can do what you described above, you can do everything manually.
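Those "two lines of bash" are essentially two openssl calls, wrapped here in Python for illustration; this assumes the openssl binary is on the PATH and that the Fleet API expects an EC key on the prime256v1 curve, as the registration guides describe:

```python
import os
import subprocess
import tempfile

def generate_keypair(directory):
    # Generate an EC private key on the prime256v1 curve and derive the
    # public key that gets hosted on the domain. Requires openssl on PATH.
    priv = os.path.join(directory, "private-key.pem")
    pub = os.path.join(directory, "public-key.pem")
    subprocess.run(["openssl", "ecparam", "-name", "prime256v1",
                    "-genkey", "-noout", "-out", priv], check=True)
    subprocess.run(["openssl", "ec", "-in", priv, "-pubout", "-out", pub],
                   check=True)
    return priv, pub

priv, pub = generate_keypair(tempfile.mkdtemp())
```

The private key stays on the server; only public-key.pem is copied to the webserver for the domain.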

@Adminius
Contributor

Adminius commented Jan 15, 2025

Since yesterday I have telemetry running on my local server.
The tesla-fleet-helper mentioned earlier doesn't work as expected because of mTLS challenges.

For the moment we did a manual installation with an opened port. Now I can focus on the connection between the local telemetry server and TeslaLogger, and @yvolchkov will rewrite the script/helper for an easier installation.

The good thing: you don't have to wait for Tesla approval anymore :)

@yvolchkov

The tesla-fleet-helper mentioned earlier doesn't work as expected because of mTLS challenges.

That's not entirely correct, though. The helper itself still works; however, the plan we had for telemetry crashed hard against mTLS. There's still a chance that we can make it work with CF tunnels, and port forwarding shall be the last resort in case we fail. Alternatively, tunnel support may be added later, depending on the complexity of the effort.

@Adminius
Contributor

Status update:
I was able to connect TeslaLogger to my local telemetry server via the ZeroMQ protocol.
Now I have to clean up my code and do some more testing.

@Adminius
Contributor

One more update:
Since today I'm testing my local-telemetry TeslaLogger version on a daily basis.
Driving/charging/Sentry work as expected.
Still to verify: renewing tokens.
I'm not sure why MQTT doesn't work any more... hm

Next steps:

  • save additional data (not available/used by the official logger) into the database (e.g. EnergyRemaining)
  • test commands (wake up, Sentry on/off and so on)
  • script for easy telemetry installation
  • script to send the car configuration
  • ???
  • merge into the main repository
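The first item above could be sketched as follows; SQLite stands in for TeslaLogger's actual database here, and the table and column names are made up for illustration, not TeslaLogger's real schema:

```python
import sqlite3

# In-memory SQLite as a stand-in for TeslaLogger's database; the
# extra_telemetry table is a hypothetical name, not the real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE extra_telemetry "
             "(vin TEXT, key TEXT, value REAL, created_at TEXT)")

def save_field(vin, key, value, created_at):
    # Persist one telemetry field the official logger does not store.
    conn.execute("INSERT INTO extra_telemetry VALUES (?, ?, ?, ?)",
                 (vin, key, value, created_at))
    conn.commit()

save_field("5YJ3E1EA7KF000000", "EnergyRemaining", 55.3,
           "2025-01-15T08:00:00Z")
```

Keeping the vehicle-side created_at alongside each value makes the extra data joinable with the regular driving/charging records later.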
