Sprint Report
This page is where we share the outcomes of each sprint: a highlight reel of what we've accomplished. After each sprint, we also hold a team retrospective to talk about what went well and what we can improve. Take a peek at the reports below to see what we've been up to.
Sprint 1
| Date: | 30-11-2023 |
| --- | --- |
| Sprint number: | 1 |
| Client: | Human-Robot Collaborative Drawing |
| Team members present: | Bas Pijls-van Kooten, Marianne Bossema, Mario Tzouvelekis, Susant Budhatoki, Calvin Nessen, Mike Tool, Ralph Adrichem |
Client feedback
The client praised our initial sprint efforts, acknowledging our newfound self-sufficiency in understanding and independently working with the cart and the Little Endian. They particularly appreciated our success in getting the Little Endian to drive, although they were dissatisfied with the stepper motor's slow speed. To address this, they suggested looking into the power delivery or removing a capacitor from the PCB.
While satisfied with the first sprint's outcome, they advised a multi-faceted approach for the next sprint: focusing on diverse aspects of the project, such as Raspberry Pi camera integration, fiducial marker tracking, and fundamental tasks for the Little Endian, and emphasizing their seamless integration rather than solely improving the Little Endian's speed.
For the upcoming sprint, the primary focus will be one significant user story: enabling the Little Endian to autonomously drive to an empty spot. The objective is to derive as many user stories as possible from this central task.
Sprint Achievements
We successfully completed the following user stories:
- As a product owner I want there to be a Definition of Done template
- As a user I want there to be a Python application that opens a (connectable) socket so that I can connect to it with another device
- As a user I want the embedded device to be able to start a connection with a hardcoded socket so that I can receive data
- As a user I want the embedded device to be able to interpret received JSON data so that it can start driving
- As a user I want the Python application to send JSON data that connected embedded devices can interpret so that they can start moving (see the sketch after this list)
- As a product owner, I want to have a global plan for the whole project
- As a user I want the bot to drive forward so it can draw
- As a user I want to have the design file for the robot so I can laser-cut more robots.
- As a user I want multiple laser-cut boards of the robot so I can create multiple robots
- As a user I want to host a website so I can display a frontend
- As a user I want the cart to move based on the received data through the websocket connection so that I can make it go to different places
- As a user I want a design for a motor mount bracket so I can 3D print it.
- As a user I want to have 3D-printed brackets so I can mount the stepper motor securely to the baseplate
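To give future teams a feel for the socket and JSON stories above, here is a minimal sketch of the backend side: a Python application that opens a connectable socket and sends one JSON drive command for an embedded device to interpret. The port number and the JSON field names are illustrative assumptions, not the project's actual schema.

```python
import json
import socket

# Hypothetical port and JSON schema, chosen for illustration only.
HOST, PORT = "0.0.0.0", 9000

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    print(f"Waiting for an embedded device on port {PORT}...")

    conn, addr = server.accept()
    with conn:
        print(f"Device connected from {addr}")
        # Send one drive command as JSON; the embedded device parses it
        # and starts driving.
        command = {"command": "forward", "distance_mm": 100}
        conn.sendall((json.dumps(command) + "\n").encode("utf-8"))
```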
Refinements / Removals
After our discussion with the client, there's no need to remove or refine any issues from our GitHub sprint. It seems our conversation aligned well with our current tasks. Let's stay focused and keep moving forward smoothly.
Additions
Based on our talk with the client, we could add the following user stories to Sprint 2:
- As the end user, I expect the Little Endian to move swiftly to designated empty spots, ensuring efficient and timely completion of tasks.
- As the end user, I anticipate a consistent power supply to the Little Endian, resolving any issues that hinder its speed, ensuring smooth operation during navigation.
- As the end user, I want the camera to reliably recognize fiducial markers, allowing it to navigate the Little Endians precisely to designated empty spots without errors or confusion.
- As the end user, I appreciate a visible indicator (Success Lamp) on the Little Endian, signaling the successful arrival at an empty spot, providing clear updates on task completion.
- As the end user, I expect the integration of a cart ID system into the Little Endian, ensuring it identifies designated spots and avoids revisiting them unnecessarily, streamlining its tasks for user efficiency.
Sprint 2
| Date: | 22-12-2023 |
| --- | --- |
| Sprint number: | 2 |
| Client: | Human-Robot Collaborative Drawing |
| Team members present: | Bas Pijls-van Kooten, Marianne Bossema, Mario Tzouvelekis, Susant Budhatoki, Calvin Nessen, Mike Tool, Ralph Adrichem |
Client feedback
During the sprint review held on Thursday, December 21, 2023, we presented our progress from Sprint 2, showcasing our overall achievements while specifically highlighting the milestones reached in this sprint. Additionally, we outlined a few adjustments made in response to encountered challenges.
- Functional Drive Capability: We showcased the draw robot's ability to maneuver at a decent speed, emphasizing its capacity to rotate by a specified number of degrees.
- Color Detection and Visual Recognition: Using a camera, we demonstrated our ability to detect colors, identify vacant spaces within a white field, and successfully recognize ArUco markers (a minimal detection sketch follows this list).
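For reference, here is a minimal sketch of what the ArUco recognition looks like in code, assuming OpenCV 4.7+ (or opencv-contrib-python) where the ArucoDetector class is available; the marker dictionary is an assumption and may differ from the one we used.

```python
import cv2

# Open the first attached camera and detect ArUco markers in each frame.
cap = cv2.VideoCapture(0)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is not None:
        # Overlay the detected markers and their IDs on the camera feed.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("ArUco detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```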
The client expressed great enthusiasm and satisfaction with our achievements in this sprint, and they are eagerly looking forward to what we will accomplish next, reflecting their confidence in our team's capabilities.
Sprint Achievements
We successfully completed the following user stories:
- As a user I want the bot's motors to be able to drive forward in millimeters so that I can control it more precisely
- As a user I want the system to know the orientation of the Little Endians, this way I can draw with the robot.
- As a user I want to control the bot's motors so that it can move to the left
- As a user I want an initial design for the camera holder so that a proper camera holder can be printed.
- As a user I want to control the bot's motors so that it can move to the right
- As a user I want the Python backend to start an ngrok server so that it is easier for the carts to connect to it
- As a user I want the Wemos to only act on received data if it corresponds with its ID (see the sketch after this list)
- As a user I want the Python backend to connect to an MQTT server and subscribe to a topic
- As a user I want the Wemos to connect to an MQTT server and subscribe to a topic
- As a user I want a 3D design of a pen holder, so I can 3D print it
- As a user I want the website to have a button with which I can control the bots so that I can test the communication
- As a user I want the hole for the pen in the middle, so when the bot turns the pen stays in the same place
- As a user I want the system to identify the individual Little Endian carts so I will be able to tell them apart from each other.
- As a designer I want to create test cylinders to find the correct diameter for the pen holder
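The MQTT stories above boil down to subscribing to a topic and acting only on messages addressed to your own ID. Below is a minimal Python sketch of that filtering logic using the paho-mqtt 1.x API; on the actual Wemos this runs as embedded code, and the broker address, topic name, and payload fields here are illustrative assumptions.

```python
import json
import paho.mqtt.client as mqtt

MY_ID = 1  # the ID this cart responds to (illustrative)
TOPIC = "littleendian/commands"  # hypothetical topic name

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    data = json.loads(msg.payload)
    # Ignore any message that is not addressed to this cart's ID.
    if data.get("id") != MY_ID:
        return
    print(f"Cart {MY_ID} executing command: {data.get('command')}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)  # hypothetical broker address
client.loop_forever()
```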
Refinements / Removals
After our discussion with the client, there's no need to remove or refine any issues from our GitHub sprint. It seems our conversation aligned well with our current tasks. Now we have to focus on integrating everything into one project rather than small separate functionalities.
Additions
During the review, we discussed with the client our plan to focus on integrating all functionalities into a cohesive system during the initial week of Sprint 3. Our aim is to ensure seamless collaboration among these features. Additionally, as a team, we need to update the documentation, which currently lacks comprehensive coverage of all the functionalities developed during this sprint.
At the end of the first week of Sprint 3, we will schedule a follow-up meeting with the client to identify and discuss the next set of functionalities they wish to see incorporated.
Sprint 3
| Date: | 25-01-2024 |
| --- | --- |
| Sprint number: | 3 |
| Client: | Human-Robot Collaborative Drawing |
| Team members present: | Bas Pijls-van Kooten, Marianne Bossema, Mario Tzouvelekis, Susant Budhatoki, Calvin Nessen, Mike Tool, Ralph Adrichem |
Client feedback
Following our recent client meeting, the feedback on our project was positive. The client expressed genuine satisfaction with our work. They particularly appreciated the progress we made during the end phase of the project, where we demonstrated our problem-solving skills and the quality of our final prototype.
The feedback we received:
- Main hardware and software issues have been successfully addressed, laying a solid foundation for further development.
- The draw-bot prototype, controlled by a supervising camera, demonstrates the potential for collaborative drawing, offering a glimpse into future possibilities.
- User tests on individual components have been conducted, highlighting areas for usability enhancement in subsequent iterations.
- The team's collaboration and problem-solving skills were evident throughout the project, contributing to the effective resolution of challenges.
- Extensive documentation of the process ensures clarity and transparency, facilitating future development efforts.
- The documentation includes valuable insights for future enhancements, such as recommendations on motor types and hardware platforms like Raspberry Pi.
- The team's positive and open-minded attitude fostered regular communication, ensuring clear status updates and prompt issue resolution.
- Effective project and process management, led by Ralph, played a pivotal role in maintaining team motivation and productivity.
Overall, the first prototype serves as a commendable starting point for further development, reflecting the team's commitment to excellence and continuous improvement.
Sprint Achievements
We successfully completed the following user stories:
- As a user I want the system to detect empty spots on the drawing area so the system knows what areas do not have a drawing
- As a user I want the cart to move through the MQTT connection using the new code so that I can send it towards the markers
- As a user I want the system to detect the position of a colored object/line, this way the Little Endians can move to the location where I have drawn a line (color).
- As a product owner I want the camera detection software documented, this way I can continue the project.
- As a student I want to know how OpenCV works and what I need to do to work with it so I can write a script for detecting objects, colors, etc.
- As a developer I want to understand how data is being sent between the carts and the MQTT server
- As a user I want the camera to calculate the angle at which the carts need to turn to face another detected line/object so that we can send that data to the cart (see the sketch after this list)
- As a user I want the system to automatically detect the camera, so it can be plug and play.
- As a student I want to learn more about AI & Robotics for Art (Collaborative Drawing).
- As a user I want to print a holder for the fiducial markers that can be placed at the back of the marker/pen to find the position of the carts more easily
- As a user I want there to be proper documentation about the life cycle of the system
- As a user I want documentation of the empty spots detection so I know how it works
- As a user I want a guide on how the code works to rotate the cart by giving it a degree so I can implement it myself
- As a user I want to have all functionalities working at the same time so there is one script to run and all data from the detection is visible on the camera feed
- As a user I want a guide on how the code works to move the cart forward so I can implement it myself
- As a user I want the system to detect the distance between two markers so that I can see the cart moving forward.
- As a user I want the system to detect a red marker line, this way the Little Endian can navigate towards it.
- As a user I want a nice playing field for the Little Endians with a hard, smooth surface, so the setup process is easy
- As a user I want the detection software to work better with the angle calculation software so that it can work as intended
- As a user I want the backend to send data to the designated cart, based on the angle it needs to turn, the distance it needs to move, and the ID of the cart, so that the cart can start moving based on drawn objects
- As a user I want there to be proper documentation on how the angle and distance are calculated so that future developers have a better understanding of what we have done
- As a user I want the camera detection software to stop detecting hands as red objects so that the system detects things more accurately
- As a student I want to learn how I can implement a JSON file that can change parameters in a function
- As a user I want more structure in where files are placed in our project so that future developers can more easily pick up where we ended the project.
- As a user I want no magic numbers in the combinedServerAndCamera script so the code is more readable and adjustable.
- As a user I want the data that gets sent by the server to not end with a decimal value so that there's a lower chance that improper data gets received by the connected clients
- As a user I want there to be commands for installing all the libraries so that future developers can easily use our software
- As a user I want a reset-to-default button so that I can get values back to their original values
- As a user I want an organized Docker file that includes only the required images
- As a user, I want a dedicated webpage where I can effortlessly input and customize variables for the cart, such as distance and angle, without having to delve into the code.
- As a student I want to learn about Flask so I can make a web page that can run Python scripts
- As a user I want to fix existing code to work with Flask so that I can start using Flask for web hosting
- As a user I want documentation that explains how the webpage works, so that the next team can work better with the project
- As a student I want to learn about AJAX and how you can execute a Python script using JavaScript
- As a user I want a bill of materials so I can build my own cart if I want to
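Several stories above concern computing the turn angle and drive distance from camera coordinates. Here is a minimal sketch of that calculation, assuming the camera supplies the cart's position, its heading in degrees, and a target point; the function name and coordinate conventions are illustrative assumptions, not our actual code.

```python
import math

def angle_and_distance(cart_pos, cart_heading_deg, target_pos):
    """Turn angle (degrees) and straight-line distance from cart to target."""
    dx = target_pos[0] - cart_pos[0]
    dy = target_pos[1] - cart_pos[1]
    distance = math.hypot(dx, dy)  # same units as the input coordinates

    # Absolute angle of the target as seen from the cart.
    target_angle = math.degrees(math.atan2(dy, dx))

    # Relative turn, normalized to [-180, 180) so the cart takes the short way.
    turn = (target_angle - cart_heading_deg + 180) % 360 - 180
    return turn, distance

# Example: cart at (100, 100) facing along the x-axis, target at (200, 200).
turn, dist = angle_and_distance((100, 100), 0, (200, 200))
print(f"Turn {turn:.1f} degrees, then drive {dist:.1f} units")
```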
Refinements / Removals
In this sprint, we addressed some challenges by integrating a DC motor, which significantly boosted the cart's speed compared to the previous stepper motor setup. During the demo, the cart successfully navigated to the red markers drawn by users. While it generally did a good job, there were occasional hiccups: at times it took a few attempts to reach the marked point, and in some instances the cart got temporarily lost. We identified that light and reflections played a significant role in these occasional hitches. Despite these challenges, the demo gave a solid impression of the product's functionality with the integrated DC motor. We also showcased the user interface, which allows users to tweak different settings and values, highlighting the flexibility and customization options of our system.
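To illustrate why lighting and reflections caused trouble, here is a minimal sketch of the usual HSV approach to detecting red with OpenCV; the threshold values are illustrative assumptions and need retuning per setup, which is exactly what glare makes difficult.

```python
import cv2
import numpy as np

frame = cv2.imread("playing_field.jpg")  # hypothetical camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so it needs two ranges; these bounds are
# illustrative and must be retuned for each lighting setup.
mask_low = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
mask_high = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
mask = cv2.bitwise_or(mask_low, mask_high)

# Glare and reflections lower saturation, pushing red pixels outside these
# ranges: one reason the cart occasionally lost the marker.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"Found {len(contours)} candidate red regions")
```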
Additions
As we hand over the project to the next team, our primary focus has been on establishing a strong foundation for future enhancements. The integration of the DC motor, improvements in navigation, and adaptive algorithms for challenges related to lighting represent key strides toward a more capable robotic cart.

We encourage the incoming team to build upon these advances. Prioritize fine-tuning the navigation system to enhance accuracy and minimize orientation issues. Additionally, refining the adaptive algorithms to better handle varying lighting conditions will contribute to the cart's overall performance. The user interface, featuring customizable settings, provides opportunities for enhancing user interactions; delving into user feedback can guide adjustments that improve the user experience and simplify the customization process.

Looking ahead, we propose expanding the robotic cart's movements to respond more intuitively to user input. This may involve integrating additional commands or gestures, fostering a more dynamic and user-friendly interaction. Consider incorporating features like user-drawn path following, specific commands for turns, and gestures for adjusting speed. Exploring obstacle detection mechanisms would further enhance the cart's autonomy, contributing to a more versatile and engaging user experience. These suggested enhancements aim to build upon the progress made in previous phases, fostering continuous improvement.