Figure 1 - How the digital potentiometer was connected
Figure 2 - Galvanometer response from 0V to 5V
Since only one digital potentiometer was available for testing, both mirrors were set to perform the same movement.
One thing that was noticed was that all the connections required a lot of wires, which can easily get mixed up; a better solution would be to print a circuit board to connect both potentiometers to their mirror controllers.
Figure 3 - Connections with one potentiometer
All the parts remained cold even after being turned on for some hours.
There is also still the need to convert the voltage range from 0 V to 5 V into -5 V to +5 V.
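One possible way to do this conversion (an assumption, not a choice made in these tests) is an op-amp level shifter implementing Vout = 2·Vin - 5 V, which maps 0 V to -5 V and 5 V to +5 V.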
Figure 1 - Soldered laser
The app was also finished, at least the mesh-creation part.
Figure 2 - Mesh using 2D
Figure 3 - Mesh using distance between points
Figure 4 - Mesh using squares
There are still tests that need to be done on a real object to validate the results.
The Kinect's limitations were also studied: according to the link, the minimum distance for the depth sensor to work is 60 cm, but tests with the Kinect showed that it can only capture depth if the object is at least 80 cm away.
Figure 1 - Mesh with 5 mm intervals
Figure 2 - Division in squares
In figure 2, the gradient of the object on the right was calculated, and the result is the object on the left; then squares with a stipulated interval were created and the mean gradient in each square was calculated. The result is the object in the middle.
Using these squares as a guide to create the mesh, a mesh with a certain density was first created; this density corresponds to the zone with the highest gradient. Then, from those points, a subset was taken at larger intervals to create the other mesh densities.
This solution applies only to the interior points, since the exterior points were calculated the same way as in the 2D case.
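A minimal sketch of this square-based density selection, assuming MATLAB with the Image Processing Toolbox; depthMap, sq, stepHigh and stepLow are illustrative names and values, not the actual code:

```matlab
% Mean gradient per square, then variable-density point selection.
sq = 16;                                       % square size in pixels (assumed)
[gx, gy] = gradient(double(depthMap));         % depthMap: Kinect depth image
gmag = sqrt(gx.^2 + gy.^2);                    % gradient magnitude
meanG = blockproc(gmag, [sq sq], @(b) mean(b.data(:)));  % mean per square
% Dense grid everywhere, but keep only a sparser subset in low-gradient squares.
stepHigh = 4;  stepLow = 16;                   % sampling intervals (assumed)
[X, Y] = meshgrid(1:stepHigh:size(gmag,2), 1:stepHigh:size(gmag,1));
inHigh = meanG(sub2ind(size(meanG), ceil(Y/sq), ceil(X/sq))) > mean(meanG(:));
onSparse = mod(X-1, stepLow) == 0 & mod(Y-1, stepLow) == 0;
keep = inHigh | onSparse;                      % dense where the gradient is high
pts = [X(keep), Y(keep)];                      % interior mesh points
```

The exterior points from the 2D method would then be appended to pts before triangulating.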
Applying Delaunay triangulation to those points gives the result in figure 3.
Figure 3 - Mesh for the 3D case
This solution does not create a mesh as clean as the 2D one, but the end result is what was expected. Observing figure 4, it is possible to see that the number of points is higher in the zones with high gradient.
Figure 4 - Result of the process
Another image, with the object closer to the camera, allows a better understanding of the result.
Figure 5 - Result on an object closer to the kinect
The drawings for the temporary structure were also finished.
Figure 1 - First stages of the app
Since the app will need a way to connect to the Arduino, send the mesh coordinates and perform the necessary calibrations, more buttons will need to be added in the future.
A way to control the height of the camera was also found. It only works when used to set the depth camera's properties, since doing it on the RGB camera results in an error.
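A hedged sketch of how this can be done with MATLAB's Image Acquisition Toolbox, assuming the Kinect adaptor exposes the tilt as a CameraElevationAngle property of the depth source:

```matlab
depthVid = videoinput('kinect', 2);      % device 2 is the depth sensor
src = getselectedsource(depthVid);
src.CameraElevationAngle = 10;           % degrees; as noted above, the RGB
                                         % source does not accept this setting
```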
While using the app, it was noticed that the zone control was giving bad results. Figure 2 was made using the surf command, so it does not show the gradient between heights; the yellow colour represents the highest points and the blue the lowest.
Figure 2 - Zones
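For reference, a minimal sketch of this kind of visualization, where depthMap is an assumed variable name for the height data:

```matlab
surf(double(depthMap), 'EdgeColor', 'none');   % height map as a surface
view(2); colormap(parula); colorbar;           % yellow = highest, blue = lowest
```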
The problem with the gradient was that, even though the surface shape can be seen very well in figure 2, the result of the gradient function is the one shown in figure 3.
Figure 3 - Gradient function
With these results, the first idea of separating the image based on its gradient and then treating each zone as an object would work, but it would take a lot of time to compute and the result would possibly not be as expected.
The mesh processing was also changed to use exterior points based on the Canny filter, and the result was better.
Figure 4 - Mesh using Canny to find the contours
Testing it in the app showed that the contours it produced were much cleaner than those of the function used before, but it only works in 2D.
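A minimal sketch of this contour extraction, assuming the input image img is first converted to grayscale:

```matlab
edges = edge(rgb2gray(img), 'canny');   % binary edge map of the contours
[yc, xc] = find(edges);                 % exterior (contour) points for the mesh
```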
This week, the image and 3D point-cloud processing is expected to be finished. First, it was necessary to remove the noise from the Kinect's depth sensor, as in the next example.
Image 1 - Example of 3D image smoothing
To do so, a function available at the Link was tried first.
The use of this function created another problem: some of the information was deleted.
Figure 2 - Kinect depth normalization function
As can be seen in figure 2, my arm disappears in the right image, as well as most of the detail. This could create problems by not detecting the holes in a surface, or by confusing a surface with the background.
Figure 3 - Point cloud with Kinect depth normalization
As can be seen in figure 3, the point cloud is missing a lot of detail; it is almost the same as if the function had not been used.
Figure 4 - Original point cloud
The original point cloud was obtained using the tutorial available at MathWorks (as of 06/03/2017) at the Link.
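The tutorial's exact code is not reproduced here; a hedged sketch of the usual way to get a Kinect point cloud in MATLAB (Image Acquisition and Computer Vision toolboxes) would be:

```matlab
colorVid = videoinput('kinect', 1);     % RGB stream
depthVid = videoinput('kinect', 2);     % depth stream
colorImg = getsnapshot(colorVid);
depthImg = getsnapshot(depthVid);
ptCloud  = pcfromkinect(depthVid, depthImg, colorImg);  % build the point cloud
pcshow(ptCloud);                        % display it
```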
Since the function created more problems than it solved, another solution was attempted. Observing the preview of the Kinect's depth video feed, it was noticed that parts of the image would appear and disappear over time. So the solution tried was to take multiple snapshots and complete the missing information in one snapshot with the others.
Figure 5 - Comparison between the original mesh (right) and the mesh with multiple layers (left)
As can be seen, the mesh on the left is more refined than the one on the right; the only inconvenience is that it takes a couple of seconds to get all the snapshots. The method used to get the snapshots was a for loop with a pause of some milliseconds between each one; then a mask was created to find all the zero values in the previous snapshot and replace them with values from the new one, if available. The replacement was done by multiplying the mask by the new snapshot. The mask was of type uint16, the same as the snapshot.
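A minimal sketch of this snapshot-merging loop, where the number of snapshots and the pause duration are assumed values:

```matlab
nShots = 10;                              % number of snapshots (assumed)
merged = getsnapshot(depthVid);           % uint16 depth frame
for k = 2:nShots
    pause(0.05);                          % some milliseconds between captures
    snap = getsnapshot(depthVid);
    mask = uint16(merged == 0);           % 1 where depth is still missing
    merged = merged + mask .* snap;       % fill the holes with the new frame
end
```

Because merged is zero wherever the mask is one, the addition fills only the missing pixels and leaves the already-captured depth values untouched.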
Now, with the refined mesh, it is necessary to associate the depth coordinates with the 2D coordinates from the image.
The first thing done was to remove all the points that do not belong to the intended part. This was done with the same method described before, using a mask and multiplication operations. The result is in figure 6.
Figure 6 - Mesh treated
This was just to test the concept, so the mesh and the image with the black background in figure 6 did not receive any complex treatment: just a conversion from RGB to HSV, after which the H values of the power bank were chosen by trial and error.
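A minimal sketch of that HSV selection, where the hue thresholds are placeholders standing in for the trial-and-error values:

```matlab
hsv  = rgb2hsv(rgbImg);                               % rgbImg: Kinect colour frame
mask = hsv(:,:,1) > 0.55 & hsv(:,:,1) < 0.65;         % assumed hue range of the power bank
segmented = rgbImg .* uint8(repmat(mask, [1 1 3]));   % black background elsewhere
```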
To test it in a more controlled environment, a scene was created to simulate possible working conditions.
Figure 7 - Test on a "controlled environment"
In this test it is possible to see that the surface is much more refined; with the previous one it was very hard to get the same result, and that would influence the gradient calculations.
Now, with this surface, the gradient was calculated so that the surface could be divided into zones with equal gradient intervals. The result is in the following image.
Figure 8 - Gradient calculations
Figure 9 - Gradient Bars
As can be seen in both images, 3 zones can be visualized. A median filter was also used to smooth the values.
Now all that remains is to create the zones based on the different gradients calculated.
The idea was to take the maximum and minimum values from figure 9 and divide that range into a certain number of equal intervals. The first trial resulted in figure 10.
Figure 10 - Different zones
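A minimal sketch of this interval division, assuming the gradient magnitude is in gmag and three zones, as suggested by figures 8 and 9:

```matlab
gmagS  = medfilt2(gmag, [5 5]);            % median filter to smooth the values
nZones = 3;                                % number of intervals (assumed)
edges  = linspace(min(gmagS(:)), max(gmagS(:)), nZones + 1);
zones  = discretize(gmagS, edges);         % zone label for every pixel
```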
The different zones would still need treatment, but it would be interesting to first develop a user interface that could change the different parameters in order to get the expected results.
Figure 2 - 3D modelled laser pointer
Since there is still the need to find a system of mirrors that allows the laser pointer and the laser from the vibrometer to be reflected onto the galvanometer, a temporary structure was created to facilitate the use of the devices.
Figure 3 - 3D model of the temporary structure
Figure 2 - 3D representation of the galvanometer drivers
Figure 3 - 3D representation of the galvanometer power source
The first image was a rectangle with two holes.
Figure 2 - First image used
This image was converted to gray levels and binarized, resulting in a black-and-white image.
Figure 3 - Binarized image
After binarizing the image, the object with the biggest area was selected; in this case, since there is only one object, the effect is not visible. This selection was used to remove possible image noise that could be encountered in an image from the Kinect camera.
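A minimal sketch of these two steps, where img is an assumed variable name for the input image:

```matlab
gray = rgb2gray(img);        % convert to gray levels
bw   = imbinarize(gray);     % binarize to black and white
bw   = bwareafilt(bw, 1);    % keep only the object with the biggest area
```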
With only the intended object selected, a mesh was created. First, all the pixels of the image were read. Then, only the pixels with value one (the pixels that correspond to the location of the object) were selected as valid for the mesh. Finally, a for loop was used to select only a few of the valid points from the previous step.
Figure 4 - Mesh created
Figure 5 - Representation of the mesh above the surface
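A minimal sketch of that point selection; the for loop is replaced here by an equivalent grid subsampling, and the spacing is an assumed value:

```matlab
step = 10;                                % spacing in pixels (assumed)
[X, Y] = meshgrid(1:step:size(bw,2), 1:step:size(bw,1));
onObj = bw(sub2ind(size(bw), Y, X));      % grid nodes that fall on the object
meshX = X(onObj);  meshY = Y(onObj);      % the selected mesh points
```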
Another representation of the mesh was tried using Delaunay triangulation, but it was hard to control and took more time to compute. The end result is displayed in the next image.
Figure 6 - Mesh using Delaunay triangulation
The code used to create this mesh was found at the Link.
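Not the code from the Link, but a hedged sketch of the general idea, reusing the points selected above:

```matlab
tri = delaunay(meshX, meshY);      % Delaunay triangulation of the mesh points
triplot(tri, meshX, meshY);        % draw the triangulated mesh
```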
From the images tested, the conclusion was reached that they needed to be square and that the mesh density would increase or decrease with their dimensions. So the