Pick-and-Place Sorting Solution for 4 DoF Dynamixel Robotic Arm
March 2024
This was the final project for my RBE 3001 class, which focused on the math behind all things robotic arms: forward kinematics, inverse kinematics, trajectory planning, and more. Per the class requirements, we programmed in MATLAB and dual-booted our devices to Ubuntu. The goal of the final was to put everything we had learned in the course together, plus teach ourselves how to build a camera detection algorithm. Simply put, the challenge was to use a camera to detect colored balls on a checkerboard, then direct the arm to pick them up and sort them by color. My teammate Ben Cruse, who made the YouTube-style video available to watch below, focused primarily on perfecting our trajectory planning algorithm, while I turned my attention to the camera. I have always been very interested in computer vision, so I was ecstatic at the opportunity to use MATLAB's Computer Vision Toolbox to develop my own detection and sorting algorithm, especially given how much I love state machines!

Our detection ran entirely in its own function, with a couple of localization helpers. It took a snapshot image as its parameter and returned an array of each ball's location and color (coded 0-4, where 0 was "unknown").

I became pretty obsessed with optimization. During my sophomore year, I like to think that I grew quite a bit as a programmer. I started with fairly limited experience and a focus on "getting the job done," but I learned the value of documentation, structure, and doing things the "right" way, so by the time the spring semester rolled around, I was ready to take my code a step up and start thinking about optimization. An optimized CV algorithm uses as few masks as possible, since masks are computationally heavy and time-consuming. Many of my classmates (all of them, to my knowledge) took a TensorFlow-style segmentation approach, with a separate mask for each color. That was a minimum of four masks, and many used more!
Instead, I implemented a single HSV mask that grabbed all four ball colors at once. I then used the toolbox's BlobAnalysis function to filter out any object that couldn't be a ball and to grab each bounding box's coordinates. I took the center coordinates of each box, ran them through our localization function, and read the hue value at that coordinate, creating an array with a position for our inverse kinematics algorithm and a color value for our state machine so we could sort each ball into the correct bin (also CADed and 3D-printed by yours truly).

I also CADed and 3D-printed some purple blocks for the extra credit, and I wrote a separate but similar detection algorithm for them that ran if no balls were found. The primary difference is that I had to drop my "circularity" check, which verified that a bounding box was "square enough" to be around a sphere.

Together, we earned the high compliment from our professor of having the best final in the class, as well as praise from other teaching staff that we were the only team truly taking advantage of the robot's capabilities. At the end of the day, it was just so much fun! A true reminder of why I fell in love with robotics in the first place.
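For the curious, here is a rough sketch of the single-mask detection idea. The actual project was written in MATLAB with the Computer Vision Toolbox's BlobAnalysis; this illustration is in Python with NumPy, and the hue bands, thresholds, color codes, and the `connected_components` helper are all assumptions of mine, not the project's real values.

```python
from collections import deque
import numpy as np

# Color codes as described above: 0 = "unknown", one code per ball color.
# The exact hue bands below are illustrative guesses.
UNKNOWN, RED, YELLOW, GREEN, BLUE = range(5)

def classify_hue(h):
    """Map a hue in [0, 1) to a color code via coarse hue bands."""
    if h < 0.05 or h > 0.95:
        return RED
    if 0.10 <= h <= 0.20:
        return YELLOW
    if 0.25 <= h <= 0.45:
        return GREEN
    if 0.55 <= h <= 0.75:
        return BLUE
    return UNKNOWN

def connected_components(mask):
    """Yield each 4-connected blob of a boolean mask as a list of (row, col).
    (A stand-in for what BlobAnalysis provides in the MATLAB toolbox.)"""
    seen = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                blob, queue = [], deque([(r, c)])
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                yield blob

def detect(hsv, min_area=4, require_square=True):
    """Single-mask detection: one saturation/value mask grabs every colored
    object at once; each blob's bounding-box center is then classified by
    reading the hue at that pixel."""
    hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    mask = (sat > 0.4) & (val > 0.3)          # one mask, all colors
    detections = []
    for blob in connected_components(mask):
        if len(blob) < min_area:              # too small to be a ball
            continue
        ys, xs = [p[0] for p in blob], [p[1] for p in blob]
        box_h = max(ys) - min(ys) + 1
        box_w = max(xs) - min(xs) + 1
        # "Circularity" check: a sphere's bounding box should be near-square.
        # The purple-block variant of the algorithm drops this check.
        if require_square and not 0.7 <= box_w / box_h <= 1.4:
            continue
        cy = (min(ys) + max(ys)) // 2
        cx = (min(xs) + max(xs)) // 2
        detections.append((cx, cy, classify_hue(hue[cy, cx])))
    return detections
```

In the real pipeline, each `(cx, cy)` center would go through the localization helpers to produce a workspace position for the inverse kinematics, and the color code would drive the sorting state machine.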
Video created by Ben Cruse