
Technical Documentation: Camera

ArUco Marker Detection and Color Tracking using OpenCV and Python

This script uses OpenCV (a computer vision library) with Python to detect ArUco markers in a live webcam feed. It also employs color detection techniques to track blue and yellow objects in the frame, which can be used to track lines drawn on the playing canvas.

Purpose

The primary objective of this script is to:

  • Detect ArUco markers, a type of augmented reality marker, using OpenCV's ArUco module. (Track little endians)

  • Identify and track blue and yellow objects in the webcam feed based on their color properties (HSV range).

Design Choices

OpenCV and Python

OpenCV was chosen for its robust computer vision capabilities and ease of use with Python. It is also free to use, keeping project costs low.

Python's simplicity and extensive libraries make it ideal for rapid prototyping and development in computer vision applications. It can also run on a Raspberry Pi without many adjustments.

Color Detection

To track blue and yellow objects, the script converts the webcam feed from the RGB color space to the HSV color space. HSV (Hue, Saturation, Value) simplifies color detection by separating color information effectively. Thresholds are set to identify specific ranges of blue and yellow colors within the feed. (Adjust these values/ranges if the detection is too wide or too narrow.)

Marker and Color Visualization

Detected ArUco markers are outlined with green lines, and their IDs and orientations are displayed on the frame. Additionally, blue and yellow objects meeting area thresholds are bounded by rectangles and labeled with their respective colors. The coordinates of these objects are also displayed on the frame, so you can monitor whether the program detects the objects correctly.

Functionality Overview

  • Webcam Access and Configuration

The script accesses the default webcam and configures its properties such as frame width and height.

  • Color Detection

Blue and yellow color ranges in the HSV color space are defined. The script continuously captures frames, converts them to HSV, and applies color masks to identify blue and yellow objects. Detected areas meeting certain size criteria are marked and their coordinates displayed on the frame.

  • ArUco Marker Detection

ArUco dictionaries are defined and utilized for marker detection. Detected markers are outlined, labeled with their IDs and orientations, and their centers are highlighted with orientation lines.

  • Display and User Interaction

The processed frames are displayed in a window. The script exits upon pressing the 'q' key.

Note:

- If multiple webcams are connected, the script may need modifications to specify the desired webcam index (usually 0 for the default webcam).

- Ensure proper lighting conditions for accurate color detection and ArUco marker identification.

Running the Script

Make sure you have the following installed on your device:

Python

Ensure Python is installed on your system. You can download and install Python from the official website (python.org).

OpenCV

Use pip, Python's package manager, to install OpenCV. Open a terminal or command prompt and run:

pip install opencv-python

Execution Steps:

Script Initialization

Save the provided Python script in a convenient location on your system.

Setting up the Webcam

  • Connect a webcam to your computer.
  • Ensure the webcam is functioning correctly and properly positioned.

Running the Script

  • Open a terminal or command prompt.
  • Navigate to the directory where the script is saved using the cd command.

Execute the Script

  • Run the script by typing:

    python arucoDetectionColorCoordinates.py


Interacting with the Application

  • Once the script starts running, a window displaying the webcam feed will appear.
  • Place blue and yellow objects within the webcam's view.
  • ArUco markers, if detected, will be outlined with their IDs and orientations displayed.
  • Blue and yellow objects meeting area criteria will be highlighted and their coordinates shown on the frame.

Exiting the Application

  • To exit the application, press the 'q' key on your keyboard. This will close the window displaying the webcam feed.

The code explained

This documentation outlines the functionality and usage of the Color Detection and ArUco Marker Recognition system implemented in Python using the OpenCV library. The system is designed to detect and visualize various shades of red and white in a live camera feed, alongside the recognition of ArUco markers.

Color Detection

  • Red Color Detection:

The system detects red color in the frame using HSV color space. The red color range is defined in terms of lower and upper bounds.

red_lower = np.array([0, 100, 100])  # Lower bound for red color in HSV
red_upper = np.array([10, 255, 255])  # Upper bound for red color in HSV

The system then applies the red color mask to the frame and finds contours to identify red areas.

red_mask = cv2.inRange(hsv, red_lower, red_upper)
red_contours, _ = cv2.findContours(red_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

ArUco Marker Recognition

The system uses ArUco marker recognition to identify markers in the frame. ArUco dictionary and parameters are defined, and the detectMarkers function is used to find marker corners and IDs.

arucoDict = cv2.aruco.Dictionary_get(ARUCO_DICT["DICT_4X4_50"])
arucoParams = cv2.aruco.DetectorParameters_create()

corners, ids, rejected = cv2.aruco.detectMarkers(img, arucoDict, parameters=arucoParams)

If markers are detected, the system utilizes the aruco_display function to draw lines, calculate marker centers, orientation angles, and display relevant information on the frame.

if corners is not None and ids is not None:
    img = aruco_display(corners, ids, rejected, img, 1920, 1080)
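The aruco_display helper is defined elsewhere in the script; a minimal sketch of the center and orientation computation it performs could look like this (the function and variable names here are assumptions, not the script's own):

```python
import math
import numpy as np

def marker_center_and_angle(corner):
    """corner: 4x2 points (top-left, top-right, bottom-right, bottom-left),
    as returned per marker by cv2.aruco.detectMarkers."""
    pts = np.asarray(corner, dtype=float).reshape(4, 2)
    cX, cY = pts.mean(axis=0)  # center is the mean of the four corners
    # Orientation: angle of the top edge (top-left -> top-right).
    dx, dy = pts[1] - pts[0]
    angle_deg = math.degrees(math.atan2(dy, dx))
    return (int(cX), int(cY)), angle_deg
```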

Empty spot detection

Introduction

This documentation outlines the functionality and usage of the Empty Spot Detection system implemented in Python using the OpenCV library. The system is designed to identify various shades of white in a live camera feed, with a specific focus on detecting empty spots within the captured frame.

Functionality

System Setup

The system initializes the camera, configuring it to a Full HD resolution (1920x1080). Users may need to adjust the camera index based on their setup.

camera = cv2.VideoCapture(0)
camera.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

Image Processing

  1. Color Space Conversion: Frames are converted to the HSV color space to facilitate better detection of white shades.

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

  2. Thresholding: Different shades of white are isolated by defining lower and upper thresholds in the HSV color space.

lower_white = np.array([0, 0, 150])
upper_white = np.array([180, 100, 255])
mask = cv2.inRange(hsv, lower_white, upper_white)

  3. Morphological Operations: Morphological operations (opening and closing) are applied to refine the mask.

kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

Contour Detection

Contours are identified in the processed image using OpenCV's contour-finding functions.

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

Visualization

Contours are drawn around different shades of white, bounding rectangles are outlined, and centroids are marked on the frame.

cv2.drawContours(frame, [contour], -1, (0, 255, 0), 2)
cv2.circle(frame, (cX, cY), 5, (255, 0, 0), -1)
cv2.putText(frame, f'Center: ({cX}, {cY})', (cX, cY - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

Display

The processed frame, highlighting different shades of white and displaying contours and centroids, is shown in a window.

cv2.imshow('Empty Spot Detection', frame)

Termination

The system can be terminated by pressing 'q'.

if cv2.waitKey(1) & 0xFF == ord('q'):
    break

Usage

  • Ensure that the OpenCV library is installed (pip install opencv-python).
  • Adjust the camera index in the script (usually 0 for the default camera).
  • Run the script to observe the live feed with highlighted empty spots.

Adjustable Parameters

  • Camera Resolution: Modify the CAP_PROP_FRAME_WIDTH and CAP_PROP_FRAME_HEIGHT values for the desired resolution.
  • HSV Thresholds: Adjust lower_white and upper_white values to encompass the range of shades representing empty spots. Fine-tune based on environmental conditions and lighting.

Angle Calculations

Measuring the angle between two moving objects is fairly simple; it boils down to using the right trigonometric function in the calculation.

Things are easier with a visualizer, so here is a drawing explaining the whole thing.

(Diagram: the angle between the ArUco marker and the red scribble, derived from their coordinates.)

As you can see, we have two coordinates: that of the square, which in this case represents an ArUco marker, and that of the red scribble. Using those two coordinates, we can calculate the angle with atan2 in this simple function:

import math

def calculate_angle(x1, y1, x2, y2):
    delta_x = x2 - x1
    delta_y = y2 - y1
    angle_rad = math.atan2(delta_y, delta_x)
    angle_deg = math.degrees(angle_rad)
    return angle_deg

This, however, is not enough for our project, as we want the ArUco marker to face the object. This is how the function is ultimately implemented.

Once a red object is detected, its coordinates are appended to a list.

for coords in red_areas_with_coords:
    red_x, red_y = coords
    cv2.putText(img, f"Red: ({red_x}, {red_y})", (red_x, red_y), cv2.FONT_HERSHEY_SIMPLEX,
                0.4, (0, 0, 255), 1)
    print(f"[Red Area] Coordinates: ({red_x}, {red_y})")

red_object_coordinates.extend(red_areas_with_coords)

This data is then used once an ArUco marker is detected.

if len(objects) > 0 and len(red_object_coordinates) > 0:
    aruco_marker_id, aruco_marker_angle = objects[0]  # ID and orientation of the ArUco marker
    red_object_x, red_object_y = red_object_coordinates[-1]  # Coordinates of the most recent red object
    # Calculate the angle between the two points, then subtract the marker's own
    # orientation to get the relative angle between them.
    angle_between_objects = calculate_angle(cX, cY, red_object_x, red_object_y) - aruco_marker_angle
    print(f"[Angle Between Objects] ArUco marker ID {aruco_marker_id} and Red Object: {angle_between_objects:.2f} degrees")
    aruco_marker_id_str = str(aruco_marker_id)  # Convert the marker ID to a string (otherwise sending it raises an error)

    # Check if 10 seconds have passed since the last data send
    if time.time() - last_send_time >= 10:
        # Send the data to the server
        send_json_message(client, cart=aruco_marker_id_str, distance=GetDistance(cX, cY, red_object_x, red_object_y), angle=angle_between_objects)

        # Update the last send time
        last_send_time = time.time()
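GetDistance is defined elsewhere in the script; a plausible sketch is plain Euclidean distance, along with a helper (an assumption, not in the original) that wraps the angle difference into [-180, 180) so the turn direction stays unambiguous:

```python
import math

def get_distance(x1, y1, x2, y2):
    """Straight-line pixel distance between two points."""
    return math.hypot(x2 - x1, y2 - y1)

def normalize_angle(angle_deg):
    """Wrap an angle difference into the range [-180, 180) degrees."""
    return (angle_deg + 180.0) % 360.0 - 180.0
```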