基于机器视觉的联合收割机-运粮车装载状态识别方法研究
Alternative Title: Research on Loading Status Identification of Combine Harvester-Grain Transport Vehicle Based on Machine Vision
Author: 刘丹
Department: 数字工厂研究室
Thesis Advisor: 王卓
Keywords: combine harvester-grain transport vehicle; machine vision; contact line; 3D grain model; loading status
Pages: 65
Degree Discipline: Control Engineering
Degree Name: Professional Master's Degree
2020-05-26
Degree Grantor: 中国科学院沈阳自动化研究所 (Shenyang Institute of Automation, Chinese Academy of Sciences)
Place of Conferral: Shenyang
Abstract: As China continues to promote new industrialization, product informatization and urbanization, the fleet mode in which multiple agricultural machines operate jointly is increasingly in line with the development trend of modern agriculture, and research on cooperative operation between agricultural machines is the foundation for large-scale fleet operation. In the traditional process, grain harvesting is separated from loading: once the harvester is full it has to stop to unload, and the driver must recognize the loading state of the grain transport vehicle's grain box and control the vehicle's movement, so harvesting efficiency is low. Cooperative operation between a combine harvester and a grain transport vehicle makes it possible to unload without stopping, which speeds up the operation and improves harvester performance. In the combine harvester-grain transport vehicle system, improving the efficiency and monitoring capability of master-slave cooperation requires, above all, dynamic identification of how the grain is loaded inside the grain box of the transport vehicle. At present there is little research on grain-box loading-state identification for such cooperative systems; most existing work assumes a loading mode in which the discharge outlet is fixed directly above the transport vehicle. Traditional pile identification methods rely mainly on 3D laser scanning sensors, ultrasonic sensors and pressure sensors. These sensors can roughly measure the height, weight and other properties of the loaded grain, but they can hardly capture the spatial distribution of the grain inside the box, and they also suffer from high failure rates and poor stability. To address these problems, this thesis applies machine vision to identify the loading state of the grain in the grain box and to obtain its spatial distribution, so that the relative position between the harvester and the transport vehicle can be adjusted, which is better suited to cooperative control. The main research contents are as follows:

(1) An improved contact-line detection method is proposed for the characteristics of combine harvester-grain transport vehicle cooperation. The loading state is judged from the distances between the convex points on the two-dimensional convex hull of the grain region and the straight lines of the grain-box border, which is more intuitive than the traditional approach of recognizing the contact line from edge information alone. A Cellular Neural Network (CNN), which lends itself to hardware implementation, is used to detect the edges of the grain transport vehicle; a CNN edge-template parameter design method based on the ant lion optimizer is proposed, and the discretized CNN state equation is analyzed to simplify the iterative edge-detection process. The grain-box border lines are screened from the detected edges by features such as angle and distance, and the grain-box area is segmented as the region of interest. Exploiting the yellow color of grain, a grain-region segmentation method combining the RGB and HSV color spaces is designed; the convex points of the grain region are detected, and the distances between these convex points and the grain-box border lines are obtained.

(2) Because a two-dimensional contact line cannot represent the complete spatial distribution inside the grain box, a 3D reconstruction method for the grain region based on multiple depth cameras is proposed, with the depth cameras mounted at diagonal corners of the grain box to make the most of their effective fields of view. According to the camera installation positions, a multi-depth-camera calibration method based on a 3D model of the grain box is designed: a 3D model of the grain box is first constructed, and its corners are scanned from multiple viewpoints on a grid to form a template library; with the center of the grain box as the origin of the coordinate system, template matching is used to transform the diagonally mounted depth cameras into the grain-box coordinate system. The point clouds collected by the two depth cameras are filtered and denoised, and then fused with the iterative closest point (ICP) algorithm to obtain the 3D model and spatial distribution of the grain in the box.

(3) A simulated experimental platform is designed according to the geometric model of combine harvester-grain transport vehicle cooperation, and the image acquisition card, cameras and other equipment are selected and laid out. On this platform, the grain loading state is identified using the contact line obtained from the color camera and the 3D grain model built from the depth cameras.
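As a concrete illustration of the color-based segmentation step in item (1), the following is a minimal sketch in Python with OpenCV: it combines an HSV mask with an RGB rule to extract the yellow grain region, takes the convex hull of the largest contour, and measures the perpendicular distance from each convex point to one grain-box border line. The function name, the (a, b, c) line representation and all threshold values are assumptions made for this sketch, not parameters taken from the thesis.

```python
# Illustrative sketch of yellow-grain segmentation and convex-point distances.
import cv2
import numpy as np

def grain_convex_points(bgr, box_line):
    """Segment the yellow grain region and measure its convex-hull points
    against one grain-box border line given as (a, b, c) for ax + by + c = 0."""
    # HSV mask for yellow-ish grain (threshold values are illustrative only)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hsv_mask = cv2.inRange(hsv, (15, 60, 60), (40, 255, 255))

    # Complementary RGB rule: red and green channels high, blue channel low
    b, g, r = cv2.split(bgr)
    rgb_mask = ((r > 120) & (g > 100) & (b < 120)).astype(np.uint8) * 255

    mask = cv2.bitwise_and(hsv_mask, rgb_mask)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Largest contour is taken as the grain pile; its convex hull gives the convex points
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, []
    grain = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(grain).reshape(-1, 2)

    # Perpendicular distance from each convex point to the border line ax + by + c = 0
    a, b_line, c = box_line
    dists = np.abs(a * hull[:, 0] + b_line * hull[:, 1] + c) / np.hypot(a, b_line)
    return mask, list(zip(hull.tolist(), dists.tolist()))
```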
Other Abstract: With the continuous promotion of new industrialization, product informatization and population urbanization in China, the cluster model of using multiple agricultural machines for joint operation is more in line with the development trend of modern agriculture. Research on cooperative operation between agricultural machines is the foundation of realizing large-scale cluster operation. The traditional grain harvesting process is separated from the loading process: the harvester needs to stop to unload the grain after it is fully loaded, and the driver has to identify the loading status of the grain container and control the movement of the grain transport vehicle, so the efficiency of grain harvesting is low. Cooperative operation of the combine harvester and the grain transport vehicle can realize unloading without stopping, speed up the operation and improve the performance of the harvester. In the combine harvester-grain transport vehicle system, in order to improve the efficiency and monitoring ability of master-slave cooperative operation, the dynamic identification of the grain loading state inside the grain container must be solved. At present there is little research on identifying the loading state of the grain container in such cooperative operation systems; most existing work assumes a loading mode in which the discharge outlet is fixed directly above the transport vehicle. Traditional pile identification methods mainly use three-dimensional laser scanning sensors, ultrasonic sensors and pressure sensors. These sensors can roughly detect the height, weight and other information of the loaded grain, but the spatial distribution of the grain in the container is difficult to identify, and they also suffer from a high failure rate and poor stability. To solve the above problems, this thesis adopts machine vision technology to identify the loading state of the grain in the grain container and obtain its spatial distribution, so that the relative position between the harvester and the grain transport vehicle can be adjusted, which is better suited to collaborative control. The research contents of this thesis mainly include:

(1) An improved contact line detection method is proposed for the cooperative operation of the combine harvester and the grain transport vehicle. The distances between the convex points on the two-dimensional convex hull of the grain region and the straight lines of the grain container border are used to judge the loading state, which is more intuitive than the traditional method of identifying the contact line by edge information alone. A Cellular Neural Network (CNN) algorithm, which is easy to implement in hardware, is used to detect the edges of the grain transport vehicle; a design method for the CNN edge template parameters based on the ant lion optimizer is presented, and the iterative process of edge detection is simplified by analyzing the discretized CNN state equation. The straight lines of the grain container border are screened from the detected edges by features such as angle and distance, and the grain container area is segmented as the region of interest. Based on the yellow color feature of grain, a grain region segmentation method combining the RGB and HSV color spaces is designed; the convex points of the grain region are detected, and the distances between the grain convex points and the grain container border lines are obtained.

(2) To solve the problem that a two-dimensional contact line cannot represent the complete spatial distribution information of the grain container, a 3D reconstruction method for the grain region based on multiple depth cameras is proposed. The depth cameras are placed at diagonal positions of the grain container to make the most of their effective fields of view. According to the camera installation locations, a calibration method for the multiple depth cameras based on a 3D model of the grain container is designed. First, the 3D model of the grain container is constructed, and a template library is formed by multi-view grid scanning of its corners; taking the center of the grain container as the origin of the coordinate system, the depth cameras at the diagonal positions are transformed into the grain container coordinate system by template matching. The point clouds collected by the two depth cameras are filtered and denoised, and the iterative closest point (ICP) algorithm is used for fusion to obtain the three-dimensional model and spatial distribution of the grain in the grain container.

(3) A simulation experiment platform is designed according to the geometric model of combine harvester-grain transport vehicle cooperative operation, and the image acquisition card, cameras and other equipment are selected and arranged on it. On this platform, the grain loading state is identified using the contact line obtained by the color camera and the 3D grain model established by the depth cameras.
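Item (1) of the abstract relies on a discretized Cellular Neural Network (CNN) for edge detection, with template parameters designed by the ant lion optimizer. The sketch below shows only the generic forward-Euler iteration of the discretized CNN state equation, using a commonly published binary edge-detection template; the template values, step size and iteration count are illustrative placeholders rather than the parameters obtained in the thesis.

```python
# Minimal NumPy sketch of the discretized Cellular Neural Network state update.
import numpy as np
from scipy.ndimage import convolve

def cnn_edge_detect(gray, steps=30, h=0.1):
    """Iterate a discretized CNN with a commonly published edge-detection template.
    `gray` is a float image scaled to [-1, 1] (+1 = black, -1 = white)."""
    A = np.zeros((3, 3))                       # feedback template (zero here)
    B = np.array([[-1, -1, -1],
                  [-1,  8, -1],
                  [-1, -1, -1]], dtype=float)  # control template
    bias = -1.0                                # bias term I
    u = gray                                   # input image
    x = np.zeros_like(u)                       # initial state

    Bu = convolve(u, B, mode="nearest")        # B * u is constant over the iteration
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))                        # piecewise-linear output
        x = x + h * (-x + convolve(y, A, mode="nearest") + Bu + bias)    # forward-Euler step
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))                         # final output in [-1, 1]
```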
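Item (2) fuses the point clouds from two diagonally mounted depth cameras after filtering and denoising. Below is a minimal sketch using the Open3D library: both clouds are voxel-downsampled and statistically denoised, then registered with point-to-point ICP and merged. The initial transform `init_T` is assumed to come from the thesis's 3D-model template-matching calibration, and the voxel size and outlier parameters are illustrative only.

```python
# Illustrative fusion of two depth-camera point clouds with Open3D.
import open3d as o3d

def fuse_grain_clouds(pcd_a, pcd_b, init_T, voxel=0.01):
    """Denoise two grain-box point clouds and register cloud B onto cloud A with ICP.
    `init_T` is a rough 4x4 transform from an external calibration step."""
    def clean(pcd):
        down = pcd.voxel_down_sample(voxel_size=voxel)                       # reduce density
        down, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # drop noise
        return down

    a, b = clean(pcd_a), clean(pcd_b)
    reg = o3d.pipelines.registration.registration_icp(
        b, a,
        max_correspondence_distance=5 * voxel,
        init=init_T,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    b.transform(reg.transformation)            # apply the refined ICP transform in place
    fused = a + b                              # merge into one grain-surface cloud
    return fused.voxel_down_sample(voxel_size=voxel)
```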
Language: Chinese
Contribution Rank: 1
Document Type: 学位论文 (Thesis)
Identifier: http://ir.sia.cn/handle/173321/27144
Collection: 数字工厂研究室
Affiliation: 中国科学院沈阳自动化研究所 (Shenyang Institute of Automation, Chinese Academy of Sciences)
Recommended Citation (GB/T 7714):
刘丹. 基于机器视觉的联合收割机-运粮车装载状态识别方法研究[D]. 沈阳: 中国科学院沈阳自动化研究所, 2020.
Files in This Item:
File Name: 基于机器视觉的联合收割机-运粮车装载状态 (4991 KB)
DocType: 学位论文 (Thesis)
Access: 开放获取 (Open Access)
License: CC BY-NC-SA
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.