<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | Personal Website</title><link>https://lxk-221.github.io/project/</link><atom:link href="https://lxk-221.github.io/project/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sun, 01 Sep 2024 00:00:00 +0000</lastBuildDate><image><url>https://lxk-221.github.io/media/icon_hu68170e94a17a2a43d6dcb45cf0e8e589_3079_512x512_fill_lanczos_center_3.png</url><title>Projects</title><link>https://lxk-221.github.io/project/</link></image><item><title>Assembly Robot</title><link>https://lxk-221.github.io/project/010.assembly-robot/</link><pubDate>Sun, 01 Sep 2024 00:00:00 +0000</pubDate><guid>https://lxk-221.github.io/project/010.assembly-robot/</guid><description>&lt;h2 id="abstrct">Abstract&lt;/h2>
&lt;p>Movement Primitives (MP) are promising methods for modeling robot motion from human demonstrations. Using learned parameters and a phase variable typically scaled to $\boldsymbol{[0,1]}$, MP can generate a trajectory as the phase transitions from 0 to 1.
However, in assembly tasks that require high precision and online adjustments, typical MP methods do not work well, particularly when the trajectory is executed with limited controller gains, or when the task is hindered by some obstacles.
Therefore, we propose Phase-Recognizing Movement Primitive (PMP), which can make a stable estimation of the task phase online, make suitable adjustments when the assembly task is hindered by external disturbances, and finally achieve precise assembly while using a low gain compliance controller.
Specifically, given the robot state, we assume the phase is a random variable with a Gaussian distribution. Consequently, the phase velocity can be computed, enabling us to determine whether the task is hindered and to retry if the task is stuck.
We test our method on a Peg-In-Hole assembly task in simulation and a Slide-In-The-Groove assembly task on a real UR5. The experimental results show that PMP makes stable estimations of the phase and thus makes adjustments to complete the hindered assembly tasks.&lt;/p>
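&lt;p>As a hypothetical sketch of the idea (not the exact formulation in the paper), the Gaussian phase estimate and the phase-velocity stuck check could look as follows; the trajectory model, observation variance, and velocity threshold are illustrative assumptions:&lt;/p>

```python
import numpy as np

def estimate_phase(robot_state, mean_traj, phases, obs_var=1e-2):
    """Treat the phase as a random variable: each candidate phase value is
    weighted by a Gaussian likelihood of the observed robot state around the
    mean trajectory, giving a posterior mean and variance of the phase."""
    # squared distance of the observed state to the mean trajectory
    d2 = np.sum((mean_traj - robot_state) ** 2, axis=1)
    log_w = -0.5 * d2 / obs_var
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    mu = float(np.dot(w, phases))                 # posterior mean phase
    var = float(np.dot(w, (phases - mu) ** 2))    # posterior variance
    return mu, var

def is_stuck(phase_history, dt, v_min=1e-3):
    """Flag a hindered execution when the phase velocity stays near zero."""
    v = np.diff(phase_history) / dt
    return bool(np.all(np.abs(v) < v_min))
```

When `is_stuck` fires, a retry or adjustment behavior can be triggered instead of pushing on with the nominal trajectory.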
&lt;h2 id="experiment">Experiment&lt;/h2>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/Hcrcy2lksx4?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div></description></item><item><title>BIM-based Robot</title><link>https://lxk-221.github.io/project/008.bim-based-robot/</link><pubDate>Fri, 01 Sep 2023 00:00:00 +0000</pubDate><guid>https://lxk-221.github.io/project/008.bim-based-robot/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>This research uses Building Information Modeling (BIM) information in 2D Simultaneous Localization and Mapping (SLAM). In this project, the robot uses the information from BIM to generate a 2D map as prior knowledge of the building. The mobile robot then uses this global map for initial localization and path planning, which can be a coverage path for map updating or a path to a target position. Besides, the mobile robot can update this prior map based on its subsequent exploration. In conclusion, the advantage of this method is that it can help the robot reach its desired target without generating a map first. Instead, the robot can reach the desired position based on prior knowledge and update the map while moving.&lt;/p>
&lt;p>The SLAM approach used in this project is based on Cartographer, and the method is tested in simulation in Gazebo with some random obstacles. All the work was done by myself.&lt;/p>
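&lt;p>A minimal sketch of the idea, assuming walls are available as 2D line segments extracted from BIM; the rasterization resolution and update rule below are illustrative, not the project&amp;rsquo;s actual code:&lt;/p>

```python
import numpy as np

def make_prior_grid(walls, size, resolution=0.05):
    """Rasterize BIM wall segments into a 2D occupancy grid (1 = occupied)."""
    h, w = size
    grid = np.zeros((h, w), dtype=np.uint8)
    for (x0, y0, x1, y1) in walls:    # wall endpoints in metres
        n = int(max(abs(x1 - x0), abs(y1 - y0)) / resolution) + 1
        xs = np.linspace(x0, x1, n) / resolution
        ys = np.linspace(y0, y1, n) / resolution
        grid[np.rint(ys).astype(int), np.rint(xs).astype(int)] = 1
    return grid

def update_with_scan(prior, hits, p_hit=0.9):
    """Blend observed laser hits into the prior map (simple weighted update),
    so the ideal BIM map is refined by what the robot actually sees."""
    updated = prior.astype(float)
    for (r, c) in hits:               # grid cells observed as occupied
        updated[r, c] = p_hit * 1.0 + (1 - p_hit) * updated[r, c]
    return updated
```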
&lt;h2 id="abstract">Abstract&lt;/h2>
&lt;p>Construction robots do not have global information about a building unless they are allowed to build a map by SLAM in advance, which is time-consuming and prevents them from making a global plan for the task. At the same time, Building Information Modeling (BIM) is a digitalization and standardization of building information. With the existence of BIM, the building interior in construction scenes is actually semi-unknown instead of totally unknown. In this research, we proposed a pipeline to transform the BIM into a 2D ideal map. Then, we combined the 2D ideal map with SLAM for robot navigation. By using this ideal 2D map as the initial global map of the robot, the robot can obtain global information about the interior of the building, thus saving time and enhancing efficiency.&lt;/p>
&lt;h2 id="piepline">Piepline&lt;/h2>
&lt;p>This figure shows the pipeline of data transformation.
&lt;figure id="figure-data-transform-pipeline">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/bim-robot/Pipeline.png" alt="screen reader text" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Data Transform Pipeline
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;p>This figure shows the building mesh in OBJ format.
&lt;figure id="figure-mesh">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/bim-robot/mesh.png" alt="screen reader text" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Mesh
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;p>This figure shows the building represented as an octree.
&lt;figure id="figure-octree">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/bim-robot/Octree.png" alt="screen reader text" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Octree
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;h2 id="simulation">Simulation&lt;/h2>
&lt;p>This figure shows the different maps used in SLAM.&lt;/p>
&lt;p>
&lt;figure id="figure-map-information">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/bim-robot/map.png" alt="screen reader text" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Map Information
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;p>As shown in this figure, although the robot was new to the environment, it had global information about the building through the global costmap, which is generated from the 2D ideal map by the layered costmap structure.
The largest box shows the global costmap, the second-largest box shows the information from the sensor, and the smallest box shows the local costmap.&lt;/p>
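&lt;p>As an illustrative sketch (not the project&amp;rsquo;s actual implementation), the layered costmap idea can be mimicked in a few lines: each layer writes its own costs and the master map takes the per-cell maximum. The grid sizes, cost scale, and linear inflation rule below are assumptions:&lt;/p>

```python
import numpy as np

def inflate(obstacles, radius_cells, max_cost=254):
    """Crude inflation layer: cost decays linearly with Chebyshev distance
    from each occupied cell, so the robot keeps clear of obstacles."""
    h, w = obstacles.shape
    cost = np.zeros((h, w))
    rs, cs = np.nonzero(obstacles)
    for r, c in zip(rs, cs):
        for dr in range(-radius_cells, radius_cells + 1):
            for dc in range(-radius_cells, radius_cells + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    d = max(abs(dr), abs(dc))
                    cost[rr, cc] = max(cost[rr, cc],
                                       max_cost * (1 - d / (radius_cells + 1)))
    return cost

def master_costmap(layers):
    """Layered costmap: the master map takes the per-cell maximum, so the
    BIM prior, live sensor obstacles, and inflation all survive the merge."""
    return np.maximum.reduce(layers)
```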
&lt;p>
&lt;figure id="figure-right-global-path">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/bim-robot/Global.png" alt="screen reader text" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Right Global Path
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;p>This figure shows that the robot can generate a correct path to the target even though the sensor has no information around the target point.&lt;/p>
&lt;p>
&lt;figure id="figure-right-local-path">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/bim-robot/Local.png" alt="screen reader text" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Right Local Path
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;p>This figure shows that the robot can change its local path according to the local costmap, to avoid collisions with obstacles.&lt;/p>
&lt;p>
&lt;figure id="figure-arrive-any-target">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/bim-robot/Target.png" alt="screen reader text" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Arrive Any Target
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;p>This figure shows that the robot can reach any target point on this specific floor of the building.&lt;/p>
&lt;h2 id="conclusion">Conclusion&lt;/h2>
&lt;p>In this paper, we proposed a pipeline to transform the BIM into a 2D ideal map, and then used this map to navigate the robot, saving the time otherwise spent building a map by SLAM in construction scenes. In the future, we will try to use more semantic and lifecycle information from BIM to help the robot finish its tasks.&lt;/p></description></item><item><title>Painting Robot</title><link>https://lxk-221.github.io/project/009.painting-robot/</link><pubDate>Fri, 01 Sep 2023 00:00:00 +0000</pubDate><guid>https://lxk-221.github.io/project/009.painting-robot/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>This project is the topic of my Master&amp;rsquo;s thesis.&lt;/p>
&lt;p>In construction spraying operations, spray robots offer significant advantages over manual labor. However, to meet the high precision required, spray robots must accurately perceive large-scale objects. Additionally, the demands of the spraying process require that spray robots incorporate new procedural constraints into their path planning for wall coverage painting. This research mainly addresses the following two issues:&lt;/p>
&lt;ol>
&lt;li>Full-coverage spray trajectory planning considering robot reachability constraints.&lt;/li>
&lt;li>Easy-to-deploy, verifiable algorithms across systems and platforms, with integrated spraying software combined with specific hardware.&lt;/li>
&lt;/ol>
&lt;h2 id="robot-model">Robot Model&lt;/h2>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/painting-robot/PaintingRobot_SW.png" alt="SolidWorks Model" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h2 id="perception">Perception&lt;/h2>
&lt;p>Initially, we used a region-growing based method for perception, projecting the wall surface onto a 2D plane. This approach effectively transitioned the problem from 3D coverage path planning to 2D coverage path planning.&lt;/p>
&lt;p>However, we discovered that this method was inadequate for handling tasks involving corners and curved surfaces. Therefore, we are now exploring a slicing-based method to address these challenges.&lt;/p>
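&lt;p>A minimal sketch of the projection step, assuming a roughly planar wall point cloud; the PCA-based plane fit below is a common choice and not necessarily this project&amp;rsquo;s exact method:&lt;/p>

```python
import numpy as np

def project_wall_to_2d(points):
    """Fit a plane to a roughly planar wall point cloud via PCA/SVD and
    project the points onto it, reducing 3D coverage planning to 2D."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # smallest singular vector = plane normal; the other two span the plane
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:2]                 # in-plane axes
    coords2d = centered @ basis.T  # 2D coordinates within the wall plane
    return coords2d, vt[2], centroid
```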
&lt;h2 id="slicing-based-painting-point-generation">Slicing-Based Painting Point Generation&lt;/h2>
&lt;p>The slicing-based painting point generation method uses a series of parallel planes to slice the area to be painted. This approach is common in workpiece spraying. In construction spraying, however, the input changes from known mesh models to perceived point clouds, so the painting point generation method must be adapted.
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/painting-robot/painting_point_generation.png" alt="Painting Point Generation" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
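&lt;p>A simplified sketch of slicing a perceived point cloud, assuming a wall facing the +y direction; the slice height and spray standoff distance are illustrative parameters:&lt;/p>

```python
import numpy as np

def slice_painting_points(cloud, slice_height=0.1, standoff=0.3):
    """Slice a perceived wall point cloud with horizontal parallel planes
    and emit one ordered row of painting points per slice, retracted from
    the surface by the spray standoff distance along the (assumed +y) normal."""
    z = cloud[:, 2]
    rows = []
    for z0 in np.arange(z.min(), z.max() + slice_height, slice_height):
        band = cloud[(z >= z0) & (z < z0 + slice_height)]
        if band.size == 0:
            continue
        band = band[np.argsort(band[:, 0])]   # order points along the wall
        pts = band.copy()
        pts[:, 1] -= standoff                 # retract nozzle from the wall
        rows.append(pts)
    return rows
```

Sweeping the rows in a boustrophedon (back-and-forth) order then gives a coverage path.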
&lt;h2 id="painting-reachability-map-based-trajectory-optimization">Painting Reachability Map Based Trajectory Optimization&lt;/h2>
&lt;p>This paper proposes a Compact Reach Map and a Painting Reach Map, and formulates the spray trajectory generation problem as an optimization problem based on these two maps.&lt;/p>
&lt;h3 id="compact-reach-map">Compact Reach Map&lt;/h3>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/painting-robot/compact_reach_map.png" alt="Compact Reach Map" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h3 id="painting-reach-map">Painting Reach Map&lt;/h3>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/painting-robot/painting_reach_map.png" alt="Painting Reach Map" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h3 id="optimization">Optimization&lt;/h3>
&lt;p>Combining the constraints of the Painting Reach Map, the trajectory generation can be formulated as a convex optimization problem.
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/painting-robot/simulation_result.png" alt="Simulation Result (with doors and windows)" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
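&lt;p>As a toy illustration of the convex formulation (not the paper&amp;rsquo;s actual problem), suppose the Painting Reach Map gives, for each painting point, an interval of base positions from which it is reachable; a smooth base trajectory can then be found by a projected-gradient solve over those box constraints:&lt;/p>

```python
import numpy as np

def optimize_base_trajectory(lo, hi, iters=500, lr=0.1):
    """Choose base positions x_i within the reachable intervals [lo_i, hi_i]
    while minimizing sum (x_{i+1} - x_i)^2 (smooth base motion). Solved by
    projected gradient descent; a real implementation would use a QP solver."""
    x = (lo + hi) / 2.0
    for _ in range(iters):
        grad = np.zeros_like(x)
        grad[:-1] += 2 * (x[:-1] - x[1:])
        grad[1:] += 2 * (x[1:] - x[:-1])
        x = np.clip(x - lr * grad, lo, hi)   # projection onto the box
    return x
```

Because both the objective and the box constraints are convex, the iteration converges to the global optimum.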
&lt;h2 id="project-status">Project Status&lt;/h2>
&lt;p>This project is still in progress&amp;hellip;&lt;/p>
&lt;h2 id="experiments">Experiments&lt;/h2>
&lt;p>
&lt;figure id="figure-latex-paint-hanging-test">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/painting-robot/single-spray.gif" alt="Latex Paint Hanging Test" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Latex Paint Hanging Test
&lt;/figcaption>&lt;/figure>
&lt;figure id="figure-wide-range">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/painting-robot/wide-range.gif" alt="Wide Range" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Wide Range
&lt;/figcaption>&lt;/figure>
&lt;/p></description></item><item><title>Racing Robot</title><link>https://lxk-221.github.io/project/005.racing-robot/</link><pubDate>Fri, 01 Sep 2023 00:00:00 +0000</pubDate><guid>https://lxk-221.github.io/project/005.racing-robot/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>This project was built for a robot racing competition, in which the robot must finish the track and hit the target at the destination as fast as possible without entering the black area.
Teams are required to use a humanoid robot with an omnidirectional chassis of three Mecanum wheels and a webcam on its backpack. All programs run on a Raspberry Pi 4B in the backpack.&lt;/p>
&lt;p>In this project, I was responsible for the simulation, vision, and control algorithms. I built a simulation environment in the V-REP simulator, tested an edge-detection-based method for track detection, and used vision-based PD control for the chassis. Besides, we used Keras and TensorFlow to solve a classification problem and rotate the robot to the correct heading, since the robot is placed facing a random direction at the starting point.&lt;/p>
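&lt;p>A minimal sketch of the edge-detection-based PD control loop, with illustrative gains and thresholds (the real controller&amp;rsquo;s parameters differ):&lt;/p>

```python
import numpy as np

def track_offset(gray):
    """Edge-based offset of the track centerline from the image centre,
    in pixels (positive = track lies to the right). `gray` is a 2D image."""
    row = gray[int(gray.shape[0] * 0.8)].astype(float)  # scanline near bottom
    grad = np.abs(np.diff(row))                         # 1D edge strength
    edges = np.nonzero(grad > 30)[0]                    # threshold the edges
    if edges.size == 0:
        return 0.0
    centre = (edges.min() + edges.max()) / 2.0
    return centre - gray.shape[1] / 2.0

class PD:
    """PD controller turning the pixel offset into a steering command."""
    def __init__(self, kp=0.01, kd=0.05):
        self.kp, self.kd, self.prev = kp, kd, 0.0

    def step(self, error):
        cmd = self.kp * error + self.kd * (error - self.prev)
        self.prev = error
        return cmd
```

Each control cycle, the chassis yaw command is `pd.step(track_offset(frame))`, driving the track centerline back to the image centre.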
&lt;h2 id="simulation">Simulation&lt;/h2>
&lt;p>We first built a simulation environment in V-REP to test our algorithm. We used the V-REP remote Python API to read simulation data from the virtual camera and to control the robot&amp;rsquo;s motors according to our algorithm. The field of view of the real webcam was measured to guarantee that the virtual camera has similar parameters to the real one.&lt;/p>
&lt;h2 id="real-robot">Real Robot&lt;/h2>
&lt;p>The original position of the webcam was too high to capture enough of the ground in front of the robot, so we used a 3D printer to create an adjustable-angle camera mount, allowing the camera to tilt downwards and overlook the ground. To increase wheel friction, we also tried a new tire material and tested it on the real robot.&lt;/p>
&lt;h2 id="video-of-test">Video of test&lt;/h2>
&lt;p>This video shows the whole process of the competition. As we can see, the robot starts with different orientations, then classifies the view from the webcam into three commands: left, right, and forward. For the left or right command, the robot turns by a specific angle; for the forward command, it controls its chassis with edge-detection-based PD control. The vision algorithm remains robust even when faced with camouflage patterns on the track. Finally, the robot uses an arc-detection-based method to estimate the distance to the target and decelerates while adjusting its orientation.&lt;/p>
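&lt;p>A hypothetical sketch of the arc-based distance estimate: fit a circle to detected arc pixels (an algebraic Kasa least-squares fit) and convert the image radius to a distance with the pinhole model. The focal length and target radius in the test are made-up values:&lt;/p>

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (Kasa method) to arc pixels:
    solve x^2 + y^2 = 2*cx*x + 2*cy*y + c in a least-squares sense."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

def distance_from_radius(r_pixels, r_real, focal_px):
    """Pinhole camera: distance = focal * real radius / image radius."""
    return focal_px * r_real / r_pixels
```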
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/cpf3xdfruAY?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div>
</description></item><item><title>Transport Robot</title><link>https://lxk-221.github.io/project/006.transport-robot/</link><pubDate>Fri, 01 Sep 2023 00:00:00 +0000</pubDate><guid>https://lxk-221.github.io/project/006.transport-robot/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>This project aims to build a lightweight transport robot for a material-carrying competition. Robots must fetch materials from several specific positions and carry them to other positions. Handling more materials, placing them more accurately, and finishing in less time all yield a higher score. In this project, we designed a mobile robot with a lightweight arm and two 2D cameras, mounted on the end effector of the arm and on the side of the robot chassis, respectively. We use a visual servoing strategy to adjust the arm and guarantee placement accuracy.&lt;/p>
&lt;p>In this project, I was responsible for the control of the chassis, arm, and camera. I used a RANSAC-based method to detect the target and implemented closed-loop control for the placing action. I also participated in the design of the robot, including the layout, the choice of arm, and the carrying strategy.&lt;/p>
&lt;h2 id="description-of-the-task">Description of the Task&lt;/h2>
&lt;p>Several materials in three colors (red, green, and blue) are arranged in different positions on a shelf. There are as many QR codes on the shelf as materials, and each QR code gives the target position of one material. With this information, the robot should first pick the materials up and then carry them to their desired positions.&lt;/p>
&lt;p>Besides, black grids on the ground allow us to use grayscale sensors for the robot&amp;rsquo;s localization.&lt;/p>
&lt;p>For the placement, we use a RANSAC-based method to detect the center of the target position according to the camera mounted on the end effector.&lt;/p>
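&lt;p>A simplified sketch of RANSAC-based circle-center detection from edge points; the iteration count and inlier threshold are illustrative assumptions, not the project&amp;rsquo;s tuned values:&lt;/p>

```python
import numpy as np

def ransac_center(points, n_iters=200, thresh=2.0, rng=None):
    """RANSAC estimate of a circular target's centre from noisy edge points:
    sample 3 points, derive their circumscribed circle, keep the model
    with the most inliers, and return its centre."""
    rng = rng or np.random.default_rng(0)
    best, best_inliers = None, -1
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        # circumcentre via perpendicular bisectors (2x2 linear system)
        A = 2 * np.array([p2 - p1, p3 - p1])
        b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
        if abs(np.linalg.det(A)) < 1e-9:
            continue                       # degenerate (collinear) sample
        c = np.linalg.solve(A, b)
        r = np.linalg.norm(p1 - c)
        d = np.abs(np.linalg.norm(points - c, axis=1) - r)
        inliers = int((d < thresh).sum())
        if inliers > best_inliers:
            best, best_inliers = c, inliers
    return best
```

The estimated centre then serves as the reference for the visual-servoing loop that steers the end effector over the target.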
&lt;h2 id="final-version-of-our-robot">Final Version of Our Robot&lt;/h2>
&lt;p>This video shows the whole process of the competition. As we can see, the robot uses a visual servoing strategy to adjust the end effector and achieve precise placement.
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/XEpZW4rSB-8?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div>
&lt;/p>
&lt;h2 id="group-and-team">Group and Team&lt;/h2>
&lt;p>
&lt;figure id="figure-team">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/transport-robot/Team.jpg" alt="screen reader text" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Team
&lt;/figcaption>&lt;/figure>
&lt;figure id="figure-group">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://lxk-221.github.io/media/projects/transport-robot/Large_Team.jpg" alt="screen reader text" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Group
&lt;/figcaption>&lt;/figure>
&lt;/p></description></item></channel></rss>