Commit 0b48e78: Updated code to work with latest opencv and code enhancements.
Avinash Kumar committed Jan 15, 2024 (1 parent: 9c645b0)
Showing 40 changed files with 135 additions and 119 deletions.
@@ -1,48 +1,79 @@
-# Panoramic-Image-Stitching-using-invariant-features
-I have implemented panoramic image stitching using invariant features from scratch, based on the David Lowe paper "Image Stitching Using Invariant Features".
+# Panoramic Image Stitching

-NOTE: You can experiment with any images of your own choice. I have experimented with many images; you can check the results below and find many images in the "Image_Data" folder.
+Create a panorama image from a given set of overlapping images.

-CREATE DATA:
-- You can create multiple images like tajm1.jpg, tajm2.jpg, tajm3.jpg and tajm4.jpg (shown below) from a desired image (taj.jpg). Make sure there is some overlap between two consecutive images in the sequence; only then will the algorithm find and match features and create a panorama image of all the images you provide.
-- OR you can directly feed multiple images from a camera in a sequence with some overlapping parts between two consecutive images.

-Please install libraries:
-1. Numpy
-2. OpenCV (version 3.3.0)
-3. imutils
+## Requirements
+* numpy >= 1.24.3
+* opencv-python >= 4.9.0 (latest as of 2024)
+* opencv-contrib-python >= 4.9.0 (latest as of 2024)
+* imutils >= 0.5.4

-TO RUN CODE:
-1. Put images in the current folder where your code is present.
-2. Run the stitch.py code.
-3. Provide the number of images you want to concatenate as input, e.g. 2, 5, 6, 10.
-4. Enter the image names in left-to-right order of concatenation, e.g.:
-   Enter the 1 image: tajm1.jpg
-   Enter the 2 image: tajm2.jpg
-   Enter the 3 image: tajm3.jpg
-   Enter the 4 image: tajm4.jpg (see the example below).
-5. You will then get your panorama image as Panorama_image.jpg in your current folder.

-- Used SIFT to detect features, then RANSAC to compute the homography from matched points, and warp perspective to get the final panoramic image.
+## Description
+We have implemented the **panoramic image stitching algorithm** using invariant features from scratch.
+We have implemented the David Lowe research paper "Panoramic Image Stitching using Invariant Features".
+We used SIFT to detect features, along with the RANSAC, homography, and warp-perspective concepts.

-RESULTS:

-Result of tajm1.jpg, tajm2.jpg, tajm3.jpg, tajm4.jpg
+## About Data
+**NOTE:** You can experiment with any images (of your own choice). We have experimented with many, which you can find in the
+`data/` folder. Please check the results below.
+#### Sample Images
+* The repo already provides sample images in the `data/` folder. Copy images from the `data/` folder
+and put them into the `inputs/` folder.
+* **Default**: you will find the `data/tajm` folder images in the `inputs/` folder.
+#### Custom Images
+You can create your own images as well and put them into the `inputs/` folder.
+* Make sure your images are in sequence and have overlapping parts between consecutive images.
+* Minimum width and height for all images should be 400.

-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/tajm_report.JPG)

-Result of nature1.jpg, nature2.jpg, nature3.jpg, nature4.jpg, nature5.jpg, nature6.jpg
+## How To Run
+1. Put the images from which you want to create a panorama into the `inputs/` folder.
+2. Run:
+```shell
+python3 stitch.py
+```
+3. Enter the number of images you want to concatenate
+(i.e. the number of images present in the `inputs/` folder):
+```shell
+Enter the number of images you want to concatenate: 4
+```
+4. Keep entering the image names along with path and extension, e.g.:
+```shell
+Enter the image names with extension in order of left to right in the way you want to concatenate:
+Enter the 1 image name along with path and extension: inputs/tajm1.jpg
+Enter the 2 image name along with path and extension: inputs/tajm2.jpg
+Enter the 3 image name along with path and extension: inputs/tajm3.jpg
+Enter the 4 image name along with path and extension: inputs/tajm4.jpg
+```
+5. `panorama_image.jpg` and `matched_points.jpg` will be created in the `output/` folder.

-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/nature_report.JPG)

-Result of my1.jpg and my2.jpg
+## RESULTS

-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/my_report.JPG)
+#### Result of images from the data/tajm folder
+tajm1.jpg, tajm2.jpg, tajm3.jpg, tajm4.jpg

-Result of taj1.jpg and taj2.jpg
+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/tajm_result.jpg)

-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/taj_report.JPG)
+#### Result of images from the data/nature folder
+nature1.jpg, nature2.jpg, nature3.jpg, nature4.jpg, nature5.jpg, nature6.jpg

-Result of room1.jpg and room2.jpg
+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/nature_result.jpg)

-![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/Result/room_report.JPG)
+#### Result of images from the data/my folder
+my1.jpg and my2.jpg

+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/my_result.jpg)

+#### Result of images from the data/taj folder
+taj1.jpg and taj2.jpg

+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/taj_result.jpg)

+#### Result of images from the data/room folder
+room1.jpg and room2.jpg

+![alt text](https://github.com/AVINASH793/Panoramic-Image-Stitching-using-invariant-features/blob/master/result/room_result.jpg)
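The description above names the core geometric step: estimating a homography from matched keypoints, then warping one image onto the other's plane. As a minimal sketch, here is homography estimation via the direct linear transform (DLT) in plain NumPy; the function names are illustrative and not part of this repo, and the actual code delegates to `cv2.findHomography` with RANSAC instead:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: solve A h = 0 for the 3x3 homography
    mapping src points to dst points (at least 4 correspondences)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # the null vector of A (last right-singular vector) holds H's entries
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale

def apply_homography(H, pts):
    """Map 2-D points through H with the homogeneous divide."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With noise-free correspondences four points suffice; real SIFT matches contain outliers, which is why the class relies on `cv2.findHomography(..., cv2.RANSAC, threshold)` rather than plain DLT.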
This file was deleted.
File renamed without changes
@@ -1,117 +1,101 @@
 import numpy as np
 import imutils
 import cv2

-class Panaroma:
-
-    def image_stitch(self, images, lowe_ratio=0.75, max_Threshold=4.0, match_status=False):
-
-        # detect the features and keypoints from SIFT
+class Panaroma:
+    def image_stitch(self, images, lowe_ratio=0.75, max_Threshold=4.0, match_status=False):
+        # detect the features and keypoints from SIFT
         (imageB, imageA) = images
-        (KeypointsA, features_of_A) = self.Detect_Feature_And_KeyPoints(imageA)
-        (KeypointsB, features_of_B) = self.Detect_Feature_And_KeyPoints(imageB)
-
-        # get the valid matched points
-        Values = self.matchKeypoints(KeypointsA, KeypointsB, features_of_A, features_of_B, lowe_ratio, max_Threshold)
+        (key_points_A, features_of_A) = self.detect_feature_and_keypoints(imageA)
+        (key_points_B, features_of_B) = self.detect_feature_and_keypoints(imageB)
+
+        # get the valid matched points
+        Values = self.match_keypoints(key_points_A, key_points_B, features_of_A, features_of_B, lowe_ratio, max_Threshold)
         if Values is None:
             return None

-        # get perspective of image using computed homography
+        # get warp perspective of image using computed homography
         (matches, Homography, status) = Values
-        result_image = self.getwarp_perspective(imageA, imageB, Homography)
+        result_image = self.get_warp_perspective(imageA, imageB, Homography)
         result_image[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

         # check to see if the keypoint matches should be visualized
         if match_status:
-            vis = self.draw_Matches(imageA, imageB, KeypointsA, KeypointsB, matches, status)
-
-            return (result_image, vis)
+            vis = self.draw_matches(imageA, imageB, key_points_A, key_points_B, matches, status)
+            return result_image, vis

         return result_image

-    def getwarp_perspective(self, imageA, imageB, Homography):
-        val = imageA.shape[1] + imageB.shape[1]
-        result_image = cv2.warpPerspective(imageA, Homography, (val, imageA.shape[0]))
-
+    def get_warp_perspective(self, imageA, imageB, Homography):
+        val = imageA.shape[1] + imageB.shape[1]
+        result_image = cv2.warpPerspective(imageA, Homography, (val, imageA.shape[0]))
         return result_image
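The `get_warp_perspective` method above sizes the output canvas as wide as both inputs combined, and `image_stitch` then pastes `imageB` into the left region unchanged. The canvas arithmetic can be shown in isolation, with small synthetic arrays standing in for real images (the shapes are made up for illustration):

```python
import numpy as np

# Synthetic stand-ins for two input images (H x W x 3).
imageA = np.ones((80, 100, 3), dtype="uint8")     # right image, gets warped
imageB = np.full((80, 120, 3), 7, dtype="uint8")  # left image, pasted as-is

# Canvas width = sum of both widths, height = the warped image's height,
# mirroring cv2.warpPerspective(imageA, H, (wA + wB, hA)) in the class.
canvas = np.zeros((imageA.shape[0], imageA.shape[1] + imageB.shape[1], 3), dtype="uint8")

# image_stitch overwrites the left region with imageB unchanged.
canvas[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

print(canvas.shape)  # (80, 220, 3)
```

The extra width leaves room for the warped right image; any columns it does not reach stay black, which is why real panoramas from this approach often carry a black margin on the right.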
-    def Detect_Feature_And_KeyPoints(self, image):
-        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
-
+    def detect_feature_and_keypoints(self, image):
         # detect and extract features from the image
-        descriptors = cv2.xfeatures2d.SIFT_create()
-        (Keypoints, features) = descriptors.detectAndCompute(image, None)
-
-        Keypoints = np.float32([i.pt for i in Keypoints])
-        return (Keypoints, features)
+        descriptors = cv2.SIFT_create()
+        (keypoints, features) = descriptors.detectAndCompute(image, None)
+        keypoints = np.float32([i.pt for i in keypoints])
+        return keypoints, features

-    def get_Allpossible_Match(self, featuresA, featuresB):
-
-        # compute all matches using Euclidean distance; OpenCV provides the DescriptorMatcher_create() function for that
+    def get_all_possible_matches(self, featuresA, featuresB):
+        # compute all matches using Euclidean distance; OpenCV provides the DescriptorMatcher_create() function for that
         match_instance = cv2.DescriptorMatcher_create("BruteForce")
         All_Matches = match_instance.knnMatch(featuresA, featuresB, 2)

         return All_Matches

-    def All_validmatches(self, AllMatches, lowe_ratio):
-        # get all valid matches according to Lowe's ratio test
-        valid_matches = []
-
+    def get_all_valid_matches(self, AllMatches, lowe_ratio):
+        # get all valid matches according to Lowe's ratio test
+        valid_matches = []
         for val in AllMatches:
             if len(val) == 2 and val[0].distance < val[1].distance * lowe_ratio:
                 valid_matches.append((val[0].trainIdx, val[0].queryIdx))

         return valid_matches

-    def Compute_Homography(self, pointsA, pointsB, max_Threshold):
-        # compute homography using points in both images
-
-        (H, status) = cv2.findHomography(pointsA, pointsB, cv2.RANSAC, max_Threshold)
-        return (H, status)
+    def compute_homography(self, pointsA, pointsB, max_Threshold):
+        return cv2.findHomography(pointsA, pointsB, cv2.RANSAC, max_Threshold)

-    def matchKeypoints(self, KeypointsA, KeypointsB, featuresA, featuresB, lowe_ratio, max_Threshold):
-
-        AllMatches = self.get_Allpossible_Match(featuresA, featuresB)
-        valid_matches = self.All_validmatches(AllMatches, lowe_ratio)
+    def match_keypoints(self, KeypointsA, KeypointsB, featuresA, featuresB, lowe_ratio, max_Threshold):
+        all_matches = self.get_all_possible_matches(featuresA, featuresB)
+        valid_matches = self.get_all_valid_matches(all_matches, lowe_ratio)

-        if len(valid_matches) > 4:
-            # construct the two sets of points
-            pointsA = np.float32([KeypointsA[i] for (_, i) in valid_matches])
-            pointsB = np.float32([KeypointsB[i] for (i, _) in valid_matches])
+        if len(valid_matches) <= 4:
+            return None

-            (Homography, status) = self.Compute_Homography(pointsA, pointsB, max_Threshold)
-
-            return (valid_matches, Homography, status)
-        else:
-            return None
+        # construct the two sets of points
+        points_A = np.float32([KeypointsA[i] for (_, i) in valid_matches])
+        points_B = np.float32([KeypointsB[i] for (i, _) in valid_matches])
+        (homography, status) = self.compute_homography(points_A, points_B, max_Threshold)
+        return valid_matches, homography, status

-    def get_image_dimension(self, image):
-        (h, w) = image.shape[:2]
-        return (h, w)
+    def get_image_dimension(self, image):
+        return image.shape[:2]

-    def get_points(self, imageA, imageB):
-
+    def get_points(self, imageA, imageB):
         (hA, wA) = self.get_image_dimension(imageA)
         (hB, wB) = self.get_image_dimension(imageB)
         vis = np.zeros((max(hA, hB), wA + wB, 3), dtype="uint8")
         vis[0:hA, 0:wA] = imageA
         vis[0:hB, wA:] = imageB

         return vis

-    def draw_Matches(self, imageA, imageB, KeypointsA, KeypointsB, matches, status):
-
-        (hA, wA) = self.get_image_dimension(imageA)
-        vis = self.get_points(imageA, imageB)
+    def draw_matches(self, imageA, imageB, KeypointsA, KeypointsB, matches, status):
+        (hA, wA) = self.get_image_dimension(imageA)
+        vis = self.get_points(imageA, imageB)

         # loop over the matches
         for ((trainIdx, queryIdx), s) in zip(matches, status):
             if s == 1:
                 ptA = (int(KeypointsA[queryIdx][0]), int(KeypointsA[queryIdx][1]))
                 ptB = (int(KeypointsB[trainIdx][0]) + wA, int(KeypointsB[trainIdx][1]))
                 cv2.line(vis, ptA, ptB, (0, 255, 0), 1)

         return vis
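The ratio test inside `get_all_valid_matches` can be exercised on its own. A small self-contained sketch follows; the `Match` namedtuple is a stand-in for OpenCV's `cv2.DMatch`, of which only the `trainIdx`/`queryIdx`/`distance` fields are used here:

```python
from collections import namedtuple

# Stand-in for cv2.DMatch: only the fields the ratio test reads.
Match = namedtuple("Match", "trainIdx queryIdx distance")

def lowe_valid_matches(all_matches, lowe_ratio=0.75):
    """Keep a knnMatch pair only when the best distance is clearly
    smaller than the second best (Lowe's ratio test)."""
    valid = []
    for pair in all_matches:
        if len(pair) == 2 and pair[0].distance < pair[1].distance * lowe_ratio:
            valid.append((pair[0].trainIdx, pair[0].queryIdx))
    return valid

pairs = [
    (Match(3, 0, 10.0), Match(7, 0, 40.0)),  # 10 < 0.75 * 40: unambiguous, kept
    (Match(5, 1, 30.0), Match(2, 1, 32.0)),  # 30 >= 0.75 * 32: ambiguous, dropped
]
print(lowe_valid_matches(pairs))  # [(3, 0)]
```

Dropping matches whose two nearest neighbors are nearly equidistant removes most false correspondences before homography estimation, so RANSAC has far fewer outliers to reject.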
@@ -0,0 +1,11 @@
+Result Description:
+
+tajm_result.jpg: result of images from data/tajm folder
+
+nature_result.jpg: result of images from data/nature folder
+
+room_result.jpg: result of images from data/room folder
+
+taj_result.jpg: result of images from data/taj folder
+
+my_result.jpg: result of images from data/my folder
File renamed without changes