A Prototype Self-Driving Car Using Visual Servoing, OpenCV Image Processing, and Python Programming – Tito Valiant Muhammad

A prototype self-driving car has been created that mimics the human ability to drive along a road. Navigation is based on the visual servoing concept: a camera sensor captures images for processing, and Python programming controls the position of the vehicle relative to the road line (see the video of the experiment at the end of this story).

The prototype self-driving car advances only along a straight road marking in a one-way lane. The track is 11 meters long and 60 centimeters wide, with three black road markings and a total turn angle of 90 degrees. There are no intersections, and only obstacles directly in front of the car can be detected. Testing is carried out indoors under sufficient lighting conditions. The project methodology is shown below.

In this paper, a prototype self-driving car is built that avoids obstacles in front of the vehicle using an ultrasonic sensor and follows the road markings detected by a Raspberry Pi camera mounted directly on the robot. The Raspberry Pi camera senses the road lane markings and provides visual information through computer vision, using OpenCV features such as grayscale conversion, color thresholding, region of interest selection, Hough line transform, perspective transform, and road line centering. The robot is tested and runs in real time using Python programming on the Raspberry Pi 3 Model B+.

As a result of the image processing, the x-axis pixel center of the road marking (Titik Tengah jalan, "road center point") and the center of the camera robot (Titik Tengah robot, "robot center point") move along the road. The purpose of the control system is to maintain the robot's heading by adjusting the servo motor angle for steering control. According to the experiments, the robot can automatically travel along both painted and unpainted road markings while avoiding obstacles.

The original image captured in this paper is a 448×208 pixel RGB color image. The video processing system acquires continuous video data from the front of the vehicle as input to a decision module and extracts the data needed to control the movement of the vehicle. The road marking detection procedure is described as follows:

The step-by-step image processing configuration used while the car moves automatically in real time is described below.

First, the system calls the camera sensor and captures the original image as a three-layer RGB image at 448×208 pixels with the Raspberry Pi camera. In Python programming and OpenCV, the RGB values are a function of the object's color and the overall brightness.

As a second step, the RGB original image is converted to a grayscale image to minimize processing time. A gray image takes less processing than an RGB color image: the 24-bit, 3-channel color image is converted into an 8-bit, single-channel image with the OpenCV function cv2.cvtColor.

Third, color thresholding is applied: each pixel is compared against a threshold so that the high-contrast pixel points of the road line are detected. In the experiment, the lower threshold is 127 and the upper threshold is 255 (white).

The Hough line transform is then used to draw blue lines over the road markings, along the white pixels produced by the color thresholding step. This method uses the OpenCV function cv2.HoughLinesP(). The results of this method are shown below.

Next, a trapezoidal region bounded by four pixel points (red dots) is selected as the region of interest for the visual information. Because only one side of the road marking is detected, obtaining the parameter value for the vehicle's feedback control is simplified. A perspective transform maps this region to a 300×300 pixel RGB image that shows only the road markings in the focus area. The perspective transform video output is shown below.

After obtaining this region in the perspective transform step, the system uses the cv2.moments() function to calculate the center of the detected road marking along the x-axis of the image. The purpose of this step is to calculate the difference (error) between the x-axis pixel center of the road marking and the x-axis pixel center of the robot. The resulting error value is used to adjust the servo motor angle for steering control.

Straight, left, and right tracks are all detected with the same pipeline of color thresholding, perspective transform, and Hough line transform, and the resulting x-axis pixel error value is fed to the vehicle control system to adjust the steering.
The road is captured and displayed in the test using the Raspberry Pi camera. When there is no obstacle in front of the vehicle, the moving speed is about 6 m/s. The x-axis pixel error value of the robot is generated and used to adjust the servo motor that controls the steering angle.

[Figure legend] Black circle: x-axis center pixel of the robot camera. Blue circle: x-axis center pixel of the road marking in the perspective transform. (d): error value between the camera center and the road marking center.

As you can see above, if the error value is between 26 and 130, the image processing detects a left turn of the road line. If the error value is between -130 and -25, it detects that the road line turns right. If the error value is between -25 and 25, it detects that the road line is straight.

A video of my experiment with the self-driving car is available on my LinkedIn profile: https://www.linkedin.com/posts/titovaliantmuhammad_autonomouscar-pythonprogramming-selfdrivingcar-activity-6592061994877186048-duwl

Copyright @2021 Tito Valiant Muhammad
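The error thresholds reported in the experiment map directly to a steering decision. A minimal sketch, with `steering_decision` as a hypothetical helper name (the boundary handling at ±25 follows the ranges quoted above):

```python
def steering_decision(error):
    """Map the x-axis pixel error to a steering decision using the
    thresholds from the experiment: roughly 26..130 means a left
    turn, -130..-25 a right turn, and -25..25 a straight road."""
    if error > 25:
        return "left"
    if error < -25:
        return "right"
    return "straight"

print(steering_decision(60))   # left
print(steering_decision(-60))  # right
print(steering_decision(0))    # straight
```

In the robot, the returned decision (or the raw error value itself) would drive the servo motor angle for steering control.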
