
Vision-Guided Robotic Arm
Built a 3D-printed 6-DOF robotic arm with custom inverse kinematics firmware and an OpenCV vision pipeline for autonomous pick-and-place of colored cubes. Developed at the HACK@URI hackathon.
This project is a 3D-printed 6-DOF robotic arm (based on the BasementMaker Instructables design) controlled by an Arduino running custom inverse kinematics firmware, with the stack split between C++ (on-board firmware) and Python (host-side control). The IK solver uses a closed-form geometric approach: it computes the wrist center position, finds the base yaw via atan2, then solves the shoulder–elbow two-link problem with the law of cosines, converting Cartesian end-effector targets into joint angles in real time. The arm supports smooth cubic-interpolated servo movements and a serial command interface for positioning, homing, and gripper control.
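The geometric IK described above can be sketched in a few lines. This is a minimal illustration, not the project's actual firmware: the link lengths, base height, and function names below are assumed placeholder values, and the real arm solves for wrist pitch/roll as well.

```python
import math

# Hypothetical link lengths in mm (the real arm's dimensions differ).
L1 = 120.0  # shoulder-to-elbow link
L2 = 120.0  # elbow-to-wrist link

def solve_ik(x, y, z, base_height=70.0):
    """Closed-form geometric IK: base yaw via atan2, then the
    shoulder-elbow two-link problem via the law of cosines."""
    # Base yaw from the target's horizontal position.
    yaw = math.atan2(y, x)
    # Reduce to a planar two-link problem in the vertical plane.
    r = math.hypot(x, y)   # horizontal reach from the base axis
    h = z - base_height    # height relative to the shoulder pivot
    d = math.hypot(r, h)   # straight-line shoulder-to-wrist distance
    if d > L1 + L2:
        raise ValueError("target out of reach")
    # Law of cosines gives the interior elbow angle.
    cos_elbow = (L1**2 + L2**2 - d**2) / (2 * L1 * L2)
    elbow = math.acos(cos_elbow)
    # Shoulder = elevation angle to the target plus the triangle angle at the shoulder.
    cos_alpha = (L1**2 + d**2 - L2**2) / (2 * L1 * d)
    shoulder = math.atan2(h, r) + math.acos(cos_alpha)
    return yaw, shoulder, elbow

def cubic_ease(t):
    """Smoothstep easing (0..1) for cubic-interpolated servo moves."""
    return 3 * t**2 - 2 * t**3
```

A fully extended reach along the x-axis, for instance, yields zero yaw and shoulder with a straight (180°) elbow, which is a handy sanity check for any geometric solver.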
On the vision side, a Python + OpenCV pipeline detects colored cubes on a bounded white playing area using HSV color filtering and contour analysis. The system automatically finds the board boundaries (black tape border), identifies cubes of seven possible colors, and computes a mirror/ghost drop location by reflecting the pick position across a vertical divider. A full coordinate transformation pipeline converts camera pixels to board millimeters to robot base-frame millimeters for accurate end-effector targeting.
The pick-and-place operation runs as a 10-step state machine — approach, descend, grip, lift, traverse, descend, release, retract, return home, complete — with error handling and safe abort at every step. The system supports autonomous, manual keyboard, vision-only, and dry-run modes. Built at the HACK@URI hackathon, it was a great exercise in tying together embedded firmware, computer vision, and motion planning into a single integrated system.
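The state machine's shape is simple to outline. This sketch uses assumed names and a hypothetical `execute_step` callback; the real implementation drives servos and checks sensor/serial feedback at each step.

```python
from enum import Enum, auto

class Step(Enum):
    """The 10 pick-and-place steps, in order."""
    APPROACH = auto()
    DESCEND_PICK = auto()
    GRIP = auto()
    LIFT = auto()
    TRAVERSE = auto()
    DESCEND_PLACE = auto()
    RELEASE = auto()
    RETRACT = auto()
    HOME = auto()
    COMPLETE = auto()

SEQUENCE = list(Step)

def run_pick_and_place(execute_step):
    """Advance through the sequence; abort safely on the first failure.
    `execute_step` is a hypothetical callback returning True on success."""
    for step in SEQUENCE:
        if not execute_step(step):
            # Safe abort: stop advancing and report the failed step
            # so the caller can retract/home the arm.
            return step
    return None  # all steps completed
```

Returning the failed step (rather than raising) makes it easy for the caller to run a recovery routine, which mirrors the "error handling and safe abort at every step" behavior described above.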
Built alongside Julian Tamayo and William Penhall.