Customized tflite model can't work in iPhone app #5850

Open
gzhhong opened this issue Feb 8, 2025 · 2 comments
Labels
platform:ios (MediaPipe iOS issues), task:hand landmarker (Issues related to hand landmarker: identify and track hands and fingers), type:modelmaker (Issues related to creation of custom on-device ML solutions)

Comments


gzhhong commented Feb 8, 2025

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

Yes

OS Platform and Distribution

iOS

Python Version

3.9

MediaPipe Model Maker version

No response

Task name (e.g. Image classification, Gesture recognition etc.)

hand landmarks

Describe the actual behavior

Customized tflite model doesn't work in iPhone app

Describe the expected behaviour

A tflite model with the same metadata should work in the iPhone app the same way the old one does

Standalone code/steps you may have used to try to get what you need

https://github.com/gzhhong/mediapipe-task

Other info / Complete Logs

The README describes how to reproduce the issue. I used the official iPhone app example at https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples/hand_landmarker/ios
gzhhong added the type:modelmaker label on Feb 8, 2025
kuaashish added the platform:ios and task:hand landmarker labels on Feb 10, 2025
gzhhong (Author) commented Feb 10, 2025

Hello, just adding more findings here. The Keras model's outputs are supposed to be landmarks, handedness, presence_score, world_landmarks, but the TFLite model's outputs may come out in a different order, and that seems to cause the problem. I think the key is to make the TFLite output order match the Keras output order: landmarks, handedness, presence_score, world_landmarks.
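A minimal way to check the actual TFLite output order is to load the converted model with the Python interpreter API. A sketch, assuming TensorFlow is installed and the converted file is named custom_hand_landmark.tflite (hypothetical path):

```python
import tensorflow as tf

# Load the converted model and list its output tensors. The order printed
# here is the order the iOS task runner sees; it is not guaranteed to match
# the Keras order (landmarks, handedness, presence_score, world_landmarks).
interpreter = tf.lite.Interpreter(model_path="custom_hand_landmark.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_output_details():
    print(detail["index"], detail["name"], detail["shape"])
```

If the order differs, one possible workaround is to re-export the Keras model with the outputs listed explicitly in the desired order, e.g. tf.keras.Model(inputs=model.inputs, outputs=[landmarks, handedness, presence_score, world_landmarks]), before converting. The converter does not formally guarantee output ordering, so re-checking with get_output_details() after conversion is still necessary.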

gzhhong (Author) commented Feb 12, 2025

Hello, I want to train a new tflite model and replace the one inside hand_landmarker.task in the application https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples/hand_landmarker/ios. My tflite model now outputs tensors of shape [1, 63], [1, 1], [1, 1], and [1, 63], which represent the landmarks, handedness, presence_score, and world_landmarks. The landmark coordinates are pixel coordinates in the 224×224 input image. The issue is that when the landmarks are displayed in the iPhone application, they cover only a small part of the hand image; it seems the coordinates are not rescaled from 224×224 to the screen. The landmarks sometimes shrink to a single point.
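If the stock model emits landmarks as normalized [0, 1] coordinates while the custom model emits 224×224 pixel coordinates, that would explain the shrunken overlay. A minimal sketch of one possible fix under that assumption (the names and the trained `model` variable are hypothetical):

```python
import tensorflow as tf

INPUT_SIZE = 224.0  # assumed input resolution of the custom model

def normalize_landmarks(pixel_landmarks):
    # Scale a [1, 63] tensor (x, y, z for 21 keypoints, in pixels) into
    # the [0, 1] range. Whether z should be scaled the same way depends
    # on how the model was trained.
    return pixel_landmarks / INPUT_SIZE

# Quick sanity check with a dummy tensor shaped like the model output:
dummy = tf.random.uniform((1, 63), minval=0.0, maxval=INPUT_SIZE)
assert float(tf.reduce_max(normalize_landmarks(dummy))) <= 1.0

# Before converting, the trained Keras model (here `model`, assumed) can
# be wrapped so the exported graph already emits normalized coordinates:
# landmarks, handedness, presence, world = model.outputs
# normalized = tf.keras.layers.Lambda(normalize_landmarks)(landmarks)
# export_model = tf.keras.Model(model.inputs,
#                               [normalized, handedness, presence, world])
```

Normalizing inside the exported graph keeps the iOS app unchanged, which seems preferable to patching the coordinate transform on the Swift side.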
