Learning journal Ralph Adrichem
Laser cutting (22 nov)
Before trying laser cutting, I'd never even laid eyes on a laser cutter. It was a whole new world for me. Gathering info from the knowledge base was key to getting it right on my first try.
The Makerlab manual became my trusty companion, simplifying the process with visuals that made everything clearer.
Returning to the Makerslab was a pleasant experience. The teachers there were incredibly helpful, guiding me through this unfamiliar territory until it all made sense.
Seeing how swiftly ideas turned into real things was mind-blowing. I'm thrilled with the results and excited to explore all the creative possibilities that laser cutting has to offer!
Fiducial markers (5 dec)
Learning Question:
How do I get the hang of ArUco markers (fiducial markers) and use them to track robots using a Raspberry Pi camera?
Journey Summary:
I became interested in ArUco markers and their potential for tracking our small-endian robots using the Raspberry Pi camera.
Upon research, I discovered that pairing ArUco markers with the OpenCV library could significantly enhance our robot tracking capabilities.
I created a script capable of generating these markers. To ensure usability, I documented a step-by-step guide so end-users can effortlessly generate their own ArUco markers.
Throughout this process, I researched fiducial markers, delved into the OpenCV library documentation, and studied the detailed specifications of ArUco markers.
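As an illustration of the generation step, here is a minimal sketch of how such a script can look with OpenCV's aruco module. The dictionary choice, marker id, and image size are assumptions for the example; it needs the opencv-contrib-python package, and OpenCV versions before 4.7 use cv2.aruco.drawMarker instead of generateImageMarker:

```python
import cv2

# Pick one of OpenCV's predefined marker dictionaries (example choice).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

marker_id = 7       # any id valid for the chosen dictionary (0-49 here)
side_pixels = 200   # output image size in pixels

# Render the marker as a grayscale image and save it for printing.
marker = cv2.aruco.generateImageMarker(dictionary, marker_id, side_pixels)
cv2.imwrite(f"aruco_{marker_id}.png", marker)
```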
Understanding ArUco Markers:
ArUco markers, categorized as fiducial markers, work as unique identifiers for computer vision systems. Their distinctive patterns allow cameras (like the Raspberry Pi camera) equipped with the OpenCV library to detect and track these markers accurately. By recognizing these patterns, the system gains spatial awareness, enabling precise localization and tracking of objects, in this case our small-endian robots.
These markers act as reference points, offering a robust method to determine the robots' positions and orientations within a given environment. The combination of ArUco markers and OpenCV facilitates seamless, efficient tracking, empowering our robots to navigate effectively.
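To make the detection side concrete, here is a minimal sketch using the same 4x4 dictionary as above. It assumes OpenCV >= 4.7 (older versions call cv2.aruco.detectMarkers directly), and "frame.png" is a hypothetical test image:

```python
import cv2

# Detector configured with the same dictionary the markers were generated from.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")  # hypothetical test image
corners, ids, rejected = detector.detectMarkers(frame)
print("Detected marker ids:", None if ids is None else ids.flatten())
```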
Sources
- https://docs.opencv.org/4.x/
- https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html
- https://medium.com/@calle_4729/using-mathematica-to-detect-aruco-markers-197410223f62
- https://en.wikipedia.org/wiki/Fiducial_marker
Configure a Raspberry Pi (8 dec)
Learning Question: How to configure a Raspberry Pi 3 for OpenCV with no prior experience?
Starting this endeavor with no familiarity with Raspberry Pi, we set out to configure a Raspberry Pi 3 to run OpenCV for object identification. We relied on tutorials like the video guide and the comprehensive Core Electronics tutorial to guide us through the process.
The process began with formatting the SD card to prep it for the Pi 3 setup. After connecting a monitor, mouse, keyboard, and Pi Camera, we navigated through the software setup, taking careful steps as directed by the guides.
Importing the essential libraries for OpenCV and Python (import cv2 and import numpy as np) was a puzzling start. However, we persisted and configured OpenCV on the Raspberry Pi 3, following every instruction and resolving each hiccup along the way.
From knowing little to nothing about Raspberry Pi and OpenCV, we ended up successfully setting up the Raspberry Pi for use with OpenCV + Pi Camera.
For individuals starting from scratch:
Be equipped with:
- Raspberry Pi 3
- Formatted SD card
- Monitor, mouse, keyboard
- Pi Camera
With no prior experience, follow the video and Core Electronics guide step by step. Import the required libraries, configure OpenCV, and conduct tests using the Pi Camera for object identification.
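As a final check, a short script like this sketch can confirm the installation works end to end. It assumes the Pi Camera shows up as video device 0 (e.g. via the Pi's V4L2 driver):

```python
import cv2
import numpy as np  # imported only to confirm numpy installed correctly

print("OpenCV version:", cv2.__version__)

# Grab a single frame to confirm the camera is reachable.
cap = cv2.VideoCapture(0)  # assumes the Pi Camera is device 0
ok, frame = cap.read()
cap.release()

if ok:
    print("Captured frame with shape:", frame.shape)
else:
    print("Could not read from the camera - check cable and camera settings.")
```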
Through this process, we transitioned from complete novices to successfully setting up a Raspberry Pi 3 for OpenCV, demonstrating that even with no prior knowledge, achieving this goal is possible through dedication and following clear guides.
Sources
- https://core-electronics.com.au/guides/object-identify-raspberry-pi/
- https://www.youtube.com/watch?v=iOTWZI4RHA8&list=PLPK2l9Knytg7O_okVr-prI1KbZ8GJeMKz
Coordinates marker detection (11 dec)
Learning Question: How Do I Add Coordinates to My Marker Detection Code?
I recently dove into the world of OpenCV and Python to figure out how to include coordinates in marker detection. Here's a quick overview of what I discovered:
I wrote a Python script using libraries like numpy and OpenCV (cv2). This script set up an ArUco dictionary and had a neat function called aruco_display. This function worked its magic by handling marker details and IDs, and figuring out where they were on the screen.
The code did some nifty stuff like getting the dictionary ready, starting up the camera feed, and grabbing frames non-stop. Then, using detectMarkers, it found those markers, and the aruco_display function jazzed up the video feed by overlaying extra info on those markers.
It wasn't a walk in the park, though! I had to crack some tricky code, fix bugs, and make sure everything played nice with the camera feed. But in the end, it helped me understand how to spot markers in real-time and add their coordinates using OpenCV and Python.
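For reference, here is a sketch of what such an aruco_display helper can look like. This is my reconstruction of the idea rather than the exact original code; it derives each marker's centre from its four detected corners:

```python
import cv2
import numpy as np

def aruco_display(corners, ids, frame):
    """Draw each detected marker's outline, id, and centre coordinates."""
    if ids is None:
        return frame
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        pts = marker_corners.reshape(4, 2).astype(np.int32)
        cx, cy = pts.mean(axis=0).astype(int)  # centre = mean of the corners
        cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
        cv2.putText(frame, f"id={marker_id} ({cx},{cy})", (cx, cy - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    return frame
```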
Orientation (15 dec)
Learning Question:
How can I enhance ArUco markers' functionality for tracking robots with a Raspberry Pi camera by incorporating orientation?
Journey Summary:
In my exploration of ArUco markers and their role in tracking small robots via a Raspberry Pi camera, I delved deeper into augmenting their capabilities by integrating orientation.
Recognizing the potential impact of this addition, I expanded the existing script to not only identify ArUco markers but also display their orientation. My aim was to simplify this process, making marker orientation easily comprehensible for users.
Understanding ArUco Marker Orientation:
ArUco markers, functioning as fiducial markers, offer more than mere identification by providing crucial orientation data. By incorporating this detail, cameras such as the Raspberry Pi's equipped with OpenCV gain an in-depth understanding of the markers' spatial orientation.
This orientation data acts as a pivotal reference point, enabling precise determination of the robots' positions and orientations within their environment. With added orientation, our robots will navigate their surroundings with heightened accuracy.
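One simple way to obtain this orientation (my own assumption of the approach, not necessarily the exact method in the script) uses the corner ordering that detectMarkers guarantees: the first corner is the marker's top-left, so the vector from corner 0 to corner 1 gives the marker's in-plane heading:

```python
import math

def marker_orientation_deg(marker_corners):
    """In-plane rotation of a detected marker, in degrees (0-360).

    Expects the (1, 4, 2) corner array that detectMarkers returns
    for a single marker.
    """
    pts = marker_corners.reshape(4, 2)
    dx, dy = pts[1] - pts[0]  # top edge: top-left corner -> top-right corner
    # Negate dy because image y grows downwards.
    return math.degrees(math.atan2(-dy, dx)) % 360.0
```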
Sources:
- https://docs.opencv.org/4.x/
- https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html
- https://medium.com/@calle_4729/using-mathematica-to-detect-aruco-markers-197410223f62
- https://en.wikipedia.org/wiki/Fiducial_marker
Color detection (19 dec)
Learning Question: How do I enhance the existing OpenCV script to identify the color blue within images?
Drawing insights from tutorials available on YouTube and in-depth perusal of the OpenCV documentation, I got the info I needed to get to work. Initial challenges arose from an overly broad color range, causing strain on computational resources. Refinement of these parameters, informed by the sources, notably improved the script's efficiency.
However, the script faced an issue: it erroneously identified minuscule blue pixels unrelated to the Rubik's cube. I looked online for how to address these types of readings and opted for a minimum value filter, substantially enhancing the accuracy of the color detection.
I'm really pleased with how my code turned out, and I'm eager to dive deeper into this. My next goal is to add coordinates to the detected colors. This way, I can pinpoint their exact locations. It's exciting to think about the possibilities this could unlock!
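The core of the approach can be sketched as follows. The HSV bounds and the minimum-area threshold are illustrative values, not the exact ones from my script, and "cube.png" is a hypothetical test image:

```python
import cv2
import numpy as np

frame = cv2.imread("cube.png")  # hypothetical test image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Narrow blue range in OpenCV's HSV space (hue runs 0-179).
lower_blue = np.array([100, 150, 50])
upper_blue = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) < 500:  # minimum-area filter: drop stray pixels
        continue
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.putText(frame, "blue", (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)
```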
Sources
- https://medium.com/@gowtham180502/how-to-detect-colors-using-opencv-python-98aa0241e713
- https://www.youtube.com/watch?v=aFNDh5k3SjU
- https://www.google.com/search?client=firefox-b-d&q=open+cv+collor+detection#fpstate=ive&vld=cid:017bc788,vid:ddSo8Nb0mTw,st:0
- https://docs.opencv.org/4.x/d0/d81/tutorial_table_of_content_mcc.html
Screenshots: first proper color detection, and filtered color detection with labels (no more small readings).
Sprint review (20 dec)
Learning Question: How do we give a good sprint review with Scrum?
Realizing our team's struggle with conducting effective Sprint Reviews within Scrum, I did some reading on how to run a proper sprint review. Our initial lack of clarity and proficiency in executing a proper Sprint Review prompted me to delve deeper into the key elements required for a successful session.
Components to Success:
Live Demos and Prototypes:
Rather than just presenting finished work, conducting live demonstrations or showcasing prototypes can offer a more interactive and engaging way to illustrate sprint accomplishments.
Storytelling and Use Cases:
Utilize storytelling techniques or real-world use cases to explain how the delivered functionalities solve specific problems or cater to user needs. This helps stakeholders relate to the value of the sprint increment.
SWOT Analysis (Strengths, Weaknesses, Opportunities, Threats):
Implementing a SWOT analysis during the review can help identify the strengths and weaknesses of the sprint increment, explore potential opportunities, and mitigate threats.
Customer/User Feedback Sessions:
Incorporate direct customer or user feedback sessions within the review process. This can be facilitated through surveys, interviews, or interactive feedback sessions to gather firsthand insights.
Value-Based Prioritization:
Prioritize demonstrated functionalities based on their perceived value to the end-users or customers. This ensures that the most crucial features are highlighted and discussed during the review.
Timeboxing and Structured Agenda:
Implement timeboxing to ensure that the review stays within its allocated time frame. Having a structured agenda helps maintain focus and avoid unnecessary discussions.
By using these methods and working together with a focus on what matters most, we're set to make our Sprint Reviews more effective. This change will help us get closer to our clients and improve our work.
Color Detection with Rubik's Cube and Color Markers (Ralph Adrichem)
Test person: Peter Adrichem, age 59
In my test, I explored how well the system identified and tracked colors drawn on a Rubik's Cube. Here's what I found:
The test person found the colors accurately tracked on the screen, making it easy for them to track and identify each color.
They noted that the system's display of coordinates alongside the colors helped in understanding the precise locations of each color on the Rubik's Cube.
Feedback
Overall, the test person found the color detection to be quite effective. However, they noticed occasional challenges in differentiating similar shades, especially in complex lighting conditions.
They observed that the system's accuracy was impacted by changes in lighting, such as shadows or uneven illumination, affecting color detection reliability.
Conclusion:
The system showed promise in displaying colors on the Rubik's Cube, but there were challenges under specific conditions. Although it was found useful overall, improvements in handling similar shades and adapting to different lighting conditions would enhance its reliability and user-friendliness.
Object size measurement (10 jan)
Learning Question: How do I integrate object size measurement into my Python program?
The objective of this research endeavor is to expand the functionality of an ArUco marker detection script by integrating object size measurement capabilities. This integration aims to provide insights into the sizes of detected objects within the webcam feed alongside the identification of ArUco markers.
I spent time understanding contour operations within OpenCV, realizing their significance in delineating object boundaries. Learning to calculate object size using contour area became a fundamental aspect of my knowledge in object measurement techniques.
Planning the Implementation:
Once I had a grasp of these concepts, I strategized the integration of object size measurement within my existing code, primarily focusing on detecting blue and yellow objects from the webcam feed.
Drawing from my understanding of contour operations, my aim was to calculate object sizes based on contour areas. I aimed to filter out smaller or irrelevant detections to ensure accurate measurements.
Once the object sizes were measured, I intended to display their coordinates on the frame, akin to the existing ArUco marker detection information, enriching the visual feedback.
Outcome
I envisioned a cohesive integration of object size measurements alongside the existing script. The script would seamlessly detect, measure, and display the sizes and coordinates of blue and yellow objects, adding depth to the webcam feed's information.
Conclusion
From learning the core concepts of object detection and measurement to meticulously planning their integration, this journey has been one of continuous learning and growth. I will now integrate my knowledge into the script.
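A sketch of the planned integration, assuming a binary color mask like the blue/yellow ones already used in the script. The pixels-per-centimetre scale and the area threshold are hypothetical calibration values, not measured ones:

```python
import cv2
import numpy as np

PIXELS_PER_CM = 37.8  # hypothetical calibration value for the webcam setup

def measure_objects(mask, frame, label):
    """Measure and annotate objects found in a binary color mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 400:  # filter out smaller, irrelevant blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        w_cm, h_cm = w / PIXELS_PER_CM, h / PIXELS_PER_CM
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)
        cv2.putText(frame, f"{label} {w_cm:.1f}x{h_cm:.1f}cm @({x},{y})",
                    (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
                    (0, 255, 255), 2)
    return frame
```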
Sources
- https://pyimagesearch.com/2016/03/28/measuring-size-of-objects-in-an-image-with-opencv/
- https://opencv.org/
- https://www.youtube.com/watch?v=tk9war7_y0Q
- https://www.geeksforgeeks.org/measure-size-of-an-object-using-python-opencv/
- https://www.linkedin.com/pulse/how-grab-object-dimensions-from-image-sabri-sansoy
Collaborative drawing (11 jan)
Learning Question: How do concepts like Mixed Initiative and Creative Interfaces redefine collaborative drawing with AI and robotics?
I wanted to learn more about collaborative drawing; in the presentation at the start of the project I saw a few interesting articles and terms. I decided to dedicate a learning story to this subject to understand the project better.
Mixed Initiative
Concept Definition:
Mixed Initiative in collaborative drawing refers to a dynamic interaction where both humans and machines actively contribute to the creative process, sharing control and decisions.
Application in Collaborative Drawing:
Imagine a dance of creativity where artists and AI/robotic systems collaboratively shape the artwork. This concept transforms the creative process into a symbiotic relationship, where both entities contribute dynamically.
Creative Interfaces
Concept Definition:
Creative Interfaces are user interfaces designed to enhance the creative process, leveraging novel interactions and innovative technologies.
Application in Collaborative Drawing:
These interfaces may include gesture-based controls, touch-sensitive surfaces, and AI-driven suggestions, creating an environment that empowers artists to express their creativity in unique and interactive ways.
Creativity and Aging
Concept Definition:
Creativity and Aging challenge the notion that creative abilities decline with age, emphasizing the unique and evolving nature of creativity in older individuals.
Application in Collaborative Drawing:
This concept encourages the development of tools that support and enhance the artistic expressions of older individuals, providing a platform for diverse perspectives in collaborative drawing.
Robotics for Art - Collaborative Drawing
Concept Definition:
Involving the use of robotic systems as a medium for artistic expression, Robotics for Art - Collaborative Drawing explores the collaboration between humans and robots in the creation of visual art.
Application in Collaborative Drawing:
Imagine robotic arms becoming creative partners, responding to human input and contributing unexpected twists to the canvas. This collaboration introduces precision and unpredictability.
Sociocultural Theory of Creativity (Glaveanu)
Concept Definition:
The Sociocultural Theory of Creativity, proposed by Vlad Glaveanu, underscores the influence of social and cultural factors in shaping and fostering creativity.
Application in Collaborative Drawing:
Applying this theory in AI and robotics involves designing tools that are culturally sensitive, ensuring that the collaborative drawing process aligns with the values and norms of the community.
Perspective Affordance Theory (Glaveanu)
Concept Definition:
This theory focuses on how different perspectives provide affordances for creative thinking, emphasizing the importance of multiple viewpoints.
Application in Collaborative Drawing:
Implementing AI and robotic systems that can simulate or incorporate diverse artistic perspectives enriches the collaborative drawing process, adding depth and diversity.
Sources
- https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/mixedinit.pdf
- https://www.structural-learning.com/post/sociocultural-theory
- https://www.uia.no/content/download/66266/777241/file/Volkoff%20and%20Strong_Affordance.pdf
- https://www.iberdrola.com/culture/computational-creativity-robot-art
- https://roboticart.org/
- http://mici.codingconduct.cc/