
Sorry for bothering you again #12

Closed
Blackhorse1988 opened this issue Apr 1, 2019 · 25 comments

Comments

@Blackhorse1988

Well, the eye tracking works fine when I set

        moments = cv2.moments(contours[-1])

With -2, no pupils are detected.

But the tracking doesn't follow the pupils when I move just my eyes. When I move my head, it works.
Do you have an idea?

@antoinelame
Owner

antoinelame commented Apr 1, 2019

contours[-1] is the contour of the eye frame, so it can't give you the pupil position. You can read this to understand why.

It seems that the auto-calibration gives you a wrong threshold value for the binarization. That's strange; we need to dig in. Go back to contours[-2], add this to your example.py, and send me screenshots of what you get:

try:
    cv2.imshow('Binarized left eye', gaze.eye_left.pupil.iris_frame)
    cv2.imshow('Binarized right eye', gaze.eye_right.pupil.iris_frame)
except Exception as e:
    print(e)

@antoinelame
Owner

I guess I edited my previous message after you ran the test. Add the code I gave you, and you should see two new frames on your screen. Take a screenshot and send it to me.

@antoinelame
Owner

I can't see your picture if you add new replies to this thread by email. Please go to GitHub. :)

@Blackhorse1988
Author

Screenshot - 01_04

@antoinelame
Owner

Yes, the threshold value given by the auto-calibration system is definitely wrong in your case.

Before the while True: loop, add this to your example.py file:

results_displayed = False

Then, in the loop, add this:

if gaze.calibration.is_complete() and not results_displayed:
    results_displayed = True
    print(gaze.calibration.thresholds_left)
    print(gaze.calibration.thresholds_right)

try:
    cv2.imshow('Right eye', gaze.eye_right.frame)
    cv2.imshow('Left eye', gaze.eye_left.frame)
    cv2.imshow('Binarized left eye', gaze.eye_left.pupil.iris_frame)
    cv2.imshow('Binarized right eye', gaze.eye_right.pupil.iris_frame)
except Exception as e:
    print(e)

Keep looking at the center for 5 seconds when you start the program.
Then, send me a screenshot containing these 4 frames and also what you get in your terminal.

@Blackhorse1988
Author

Screenshot - 01_04 002

@antoinelame
Owner

Thanks, but I also need a screenshot containing the 4 frames you get.

@Blackhorse1988
Author

Screenshot - 01_04 003

@Blackhorse1988
Author

Hey :) Did you find anything?

@Blackhorse1988
Author

Screenshot - 02_04
Screenshot - 02_04 002

@Blackhorse1988
Author

Is that normal?

@Blackhorse1988
Author

Now it works! I don't understand!

@Blackhorse1988
Author

OK, it seems there is a problem with the threshold. Every time I change the lighting setup behind my camera, it either works or it doesn't.

Does the code also work in the dark?

@antoinelame
Owner

Hi @Blackhorse1988
Your video is really impressive, congrats!
Do you still need help with the auto-calibration system? If it's not totally accurate for you or your environment, you can send me video samples of yourself looking in different directions, and I will use them to improve the algorithm.

@Blackhorse1988
Author

Hey :) It's your impressive code :) I changed the threshold manually to the best value, so an auto-calibration would be great. I will do that tomorrow. Thank you very much for your time and support!

I am working with that robotic arm in my job, and I would like to do the same with the glasses from Pupil Labs too.

Do you have any experience making the code work on Android?

@antoinelame
Owner

You did well to pass your own threshold. What value did you choose?

With everyone I tried it on, the auto-calibration system worked pretty well. Too bad it's not the same for you. According to your screenshots, the program calculated consistent values for which it obtained good iris sizes, so you shouldn't get binarized frames with only white pixels. In the worst case, you should get a wrong iris binarization, but still with some black pixels. I'm a little surprised and I would like to understand why, so I will keep in touch after running tests with your samples.

OpenCV works on Android, so it seems to me that it would be possible to run a version of this code on a mobile device, with Java/Kotlin, or maybe with tools that make it possible to run Python code.

About Pupil Labs, I don't know it well; I've never tried it.

@Blackhorse1988
Author

Blackhorse1988 commented Apr 3, 2019 via email

@Blackhorse1988
Author

If I mounted a camera on glasses and detected just one eye, would that work with your code too?

@Blackhorse1988
Author

Hi Antoine,

do you have an idea of how to modify the code to track just one eye?

@nishthachhabra

Hi Antoine,
I am working on a Raspberry Pi 3 and have followed every step, as well as the issues discussed in the comments, but I am still getting frame delay.

@antoinelame
Owner

Hi @Blackhorse1988
Sorry for the delay! To track only one eye, for example the left one:

In def pupils_located(self):, remove:

int(self.eye_right.pupil.x)
int(self.eye_right.pupil.y)

In def _analyze(self):, remove:

self.eye_right = Eye(frame, landmarks, 1, self.calibration)

Then update horizontal_ratio(), vertical_ratio() and is_blinking() so that the average of both eyes is not calculated.

Happy coding!
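The steps above can be sketched with simplified stand-in classes. This is illustrative only: Pupil, Eye and OneEyeGaze here are stripped-down stubs, not the library's real classes, and the ratio formula is a plain normalization rather than the exact one in the source.

```python
class Pupil:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Eye:
    def __init__(self, pupil, center):
        self.pupil = pupil
        self.center = center  # (x, y) center of the eye frame

class OneEyeGaze:
    """Single-eye variant: no right eye, so no averaging anywhere."""

    def __init__(self, eye_left):
        self.eye_left = eye_left

    @property
    def pupils_located(self):
        # Only the left pupil needs to be detected now; the two
        # int(self.eye_right.pupil...) checks are gone.
        try:
            int(self.eye_left.pupil.x)
            int(self.eye_left.pupil.y)
            return True
        except Exception:
            return False

    def horizontal_ratio(self):
        # Taken straight from the left eye instead of averaging both:
        # roughly 0.0 at one extreme, 1.0 at the other.
        if self.pupils_located:
            return self.eye_left.pupil.x / (self.eye_left.center[0] * 2)
        return None

gaze = OneEyeGaze(Eye(Pupil(15, 10), (20, 12)))
print(gaze.horizontal_ratio())  # 15 / 40 = 0.375
```

vertical_ratio() and is_blinking() would change the same way: drop the right-eye term and return the left-eye value directly.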

@antoinelame
Owner

Hi @nishthachhabra
I don't know much about delays; it has always worked fine on my laptop. I think it's the cv2.imshow() in example.py that causes most of the delay. Do you really need to display the result? If so, you can try displaying the images with something else. Also try changing the value passed to cv2.waitKey() in the example.
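One way to confirm where the delay comes from is to time each stage of the loop separately. This is a generic profiling sketch with made-up stand-in stages instead of the real gaze.refresh() and cv2.imshow() calls:

```python
import time

def timed(fn, *args):
    """Run fn once and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - start) * 1000.0

# Stand-ins for the two stages of the loop. On a Raspberry Pi, wrapping
# the real processing and display calls this way shows which one dominates.
def process_stage(frame):
    time.sleep(0.005)   # pretend processing takes ~5 ms
    return frame

def display_stage(frame):
    time.sleep(0.020)   # pretend display takes ~20 ms

_, process_ms = timed(process_stage, "frame")
_, display_ms = timed(display_stage, "frame")
print(f"process: {process_ms:.1f} ms, display: {display_ms:.1f} ms")
```

If the display stage dominates, dropping cv2.imshow() or lowering the display rate is the easiest win.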

@EderSant

Hi @antoinelame,

I'm having a problem similar to @Blackhorse1988's: most of the time the application only identifies winks, and only occasionally does it identify looking to the right. At no point could I get it to identify looking up or down.

@antoinelame
Owner

Hi @EderSant,

Yes, you're right, the wink detection can still be improved, and I will work on it. However, the "blinking" state doesn't mean that the program doesn't know whether you are looking to the right or to the left. If you are using the demo, the display of "Blinking" overrides the display of the gaze direction. In that case, you can just remove these lines in example.py:

if gaze.is_blinking():
    text = "Blinking"

About detecting up/down, I didn't write is_up() and is_down() functions because I think the margin of error is too large, but you can do it yourself using vertical_ratio().
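A possible shape for those functions, as a hedged sketch mirroring how the horizontal direction checks compare a ratio against fixed cut-offs; the 0.35 and 0.65 values are illustrative guesses, not tuned thresholds:

```python
def is_up(vertical_ratio):
    """True when the pupil sits high in the eye frame (ratio near 0.0)."""
    return vertical_ratio is not None and vertical_ratio <= 0.35

def is_down(vertical_ratio):
    """True when the pupil sits low in the eye frame (ratio near 1.0)."""
    return vertical_ratio is not None and vertical_ratio >= 0.65

print(is_up(0.2), is_down(0.8), is_up(0.5))  # True True False
```

Because vertical movement of the pupil is small, the cut-offs would likely need per-user tuning, which is the margin-of-error concern mentioned above.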

Next time, please open your own issue. 😉
