Smart Informatix Laboratory



Smart Resilient Extraterrestrial Habitats


Deep space habitats require groundbreaking technological advances to overcome the unprecedented demands introduced by isolation and extreme environments. The Resilient ExtraTerrestrial Habitats research institute (RETHi) will harness promising next-generation technological advances to overcome the grand challenge of deep space habitation. By leveraging expertise in civil infrastructure management, autonomous robotics, hybrid simulation, machine learning, smart buildings, complex systems, and diagnostics and prognostics, RETHi will develop the transformative technologies needed to construct resilient deep space habitats that can adapt, absorb and rapidly recover from disruptions to deep space habitat systems without fundamental changes in function. As part of RETHi, funded by NASA, our research team will focus on the effective and efficient design and development of a resilient autonomous SmartHab as a complex system.




Active Perception Based on Deep Reinforcement Learning for Autonomous Robotic Inspection


In this project, an artificial intelligence framework is developed to facilitate the use of robotics for autonomous damage inspection. While considerable progress has been achieved by utilizing state-of-the-art computer vision approaches for damage detection, these approaches are still far from usable in autonomous robotic inspection systems due to uncertainties in data collection and data interpretation. To address this gap, this study proposes a framework that enables robots to select the best course of action for active damage perception and reduction of uncertainties. By doing so, the required information is collected efficiently for a better understanding of damage severity, which leads to reliable decision-making. More specifically, the active damage perception task is formulated as a Partially Observable Markov Decision Process (POMDP), and a deep reinforcement learning-based active perception agent (ADS-DRL) is proposed to learn a near-optimal policy for this task. The proposed framework is evaluated for the autonomous assessment of cracks on metallic surfaces of an underwater nuclear reactor. Compared with raster scanning, active perception improves the crack Intersection over Union (IoU) by up to 40% and reduces the overall inspection time by more than a factor of two.
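At the heart of any POMDP formulation is the belief update: the agent maintains a probability distribution over damage states and refines it after each observation. A minimal sketch in Python, with an entirely hypothetical two-state damage model (the transition and observation probabilities below are illustrative, not values from the study):

```python
import numpy as np

# Hypothetical two-state damage model: states = {minor, severe}.
T = np.array([[0.9, 0.1],   # transition model P(s' | s) for one action
              [0.0, 1.0]])  # severe damage does not heal
O = np.array([[0.8, 0.2],   # observation model P(o | s'): rows = state,
              [0.3, 0.7]])  # columns = observation {no-crack, crack}

def belief_update(b, obs):
    """One Bayes-filter step: predict through T, then weight by P(obs | s')."""
    predicted = T.T @ b
    updated = O[:, obs] * predicted
    return updated / updated.sum()

b = np.array([0.5, 0.5])      # uniform prior over damage severity
b = belief_update(b, obs=1)   # a 'crack' observation shifts mass to 'severe'
print(b)
```

A DRL agent such as ADS-DRL would act on this belief (for example, choosing the next viewpoint) rather than on the raw observation.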



Autonomous Road Condition Assessment Using Crowdsourced Data in Smart Cities


The assessment of road conditions is important for ensuring the safety and efficiency of transportation infrastructure. However, current methods of evaluation suffer from subjectivity, delayed response, and high costs. To address these limitations, this study proposes the development of an autonomous system that utilizes crowdsourced RGB-D data for a comprehensive and efficient assessment of road conditions. The establishment of this machine-assisted, human-centric system for pavement condition evaluation includes the classification, localization, and quantification of pavement data using deep learning and computer vision approaches. The system facilitates the tracking of identified defects and repair work, providing up-to-date information on pavement deterioration and maintenance. It can also be used for quality control of pavement rehabilitation, allowing road authorities to evaluate the quality of the work performed by contractors.



Unsupervised Anomaly Detection Based on Active Sensing, Autoencoders and Information Fusion


In this project, a novel anomaly detection framework is proposed based on active sensing and information theory. The framework involves exciting the structure at specific locations and collecting acceleration data. The data collected from hits at multiple excitation points and sensor locations are analyzed and fused to enhance anomaly detection. More specifically, an unsupervised anomaly detection framework using autoencoders (AEs) has been developed. Information fusion strategies are proposed to enhance the robustness of the approach to both aleatoric and epistemic uncertainties. Numerical studies based on the ASCE benchmark model and experimental studies based on the ASCE benchmark structure and a geodesic dome testbed have been carried out to study the capabilities as well as the limitations of the proposed approach. Solutions are proposed for the practical cases of limited sensors and insufficient data. The framework's ability to extract information from multiple sources allows it to identify anomalies that might be missed by traditional detection methods.
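The core of autoencoder-based anomaly detection is the reconstruction error: a model trained only on healthy data reconstructs healthy samples well and anomalous ones poorly. A minimal sketch, using a linear autoencoder fitted in closed form via SVD as a stand-in for the nonlinear AEs in the study (all data here are synthetic placeholders for real sensor features):

```python
import numpy as np

rng = np.random.default_rng(0)

# Healthy-state training data: features extracted from acceleration records
# (synthetic stand-in for the study's real measurements).
X_train = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8)) * 0.1

# Linear autoencoder via SVD: encode/decode with the top-k components.
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
W = Vt[:3]                          # k = 3 latent dimensions

def recon_error(x):
    z = (x - mu) @ W.T              # encode
    x_hat = z @ W + mu              # decode
    return np.linalg.norm(x - x_hat)

# Threshold from healthy data; flag samples that reconstruct poorly.
threshold = np.quantile([recon_error(x) for x in X_train], 0.99)
anomalous = recon_error(X_train[0] + 5.0) > threshold   # strongly shifted sample
print(bool(anomalous))
```

Fusing such anomaly scores across excitation points and sensors is what makes the full framework robust to both noise and model uncertainty.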



Fusion of Color and Hallucinated Depth Features for Enhanced Multimodal Damage Segmentation


This study investigates the feasibility of eliminating depth sensing at test time without compromising segmentation performance. An autonomous damage segmentation framework is developed, based on recent advancements in vision-based multimodal sensing such as modality hallucination (MH) and monocular depth estimation (MDE), which require depth data only during model training. At the time of deployment, depth data becomes expendable, as it can be simulated from the corresponding RGB frames. This makes it possible to reap the benefits of depth fusion without any depth perception per se. This study explored two different depth encoding techniques and three different fusion strategies in addition to a baseline RGB-based model. The proposed approach is validated on computer-generated RGB-D data of reinforced concrete buildings subjected to seismic damage. It was observed that the surrogate techniques can increase the segmentation IoU by up to 20.1% with a negligible increase in computation cost.
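The IoU figure quoted above measures the overlap between predicted and ground-truth damage masks; a minimal implementation:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

gt   = np.zeros((4, 4), int); gt[1:3, 1:3] = 1     # ground-truth damage
pred = np.zeros((4, 4), int); pred[1:3, 1:4] = 1   # slightly over-segmented
print(iou(pred, gt))   # 4 shared pixels over a 6-pixel union ≈ 0.667
```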



Deep Learning-Based RGB-D Fusion for Multimodal Condition Assessment of Civil Infrastructure


A review of available literature suggests that most of the existing vision-based inspection techniques rely only on color information, due to the immediate availability of inexpensive high-resolution cameras. Regular cameras translate a 3D scene into a 2D space, which leads to a loss of information about distance and scale. This imposes a barrier to the realization of the full potential of vision-based techniques. This study aims to fill this knowledge gap by incorporating depth fusion into an encoder-decoder-based semantic segmentation model. Several encoding techniques are explored for representing the depth data. Additionally, various schemes for the data-level, feature-level, and decision-level fusions of RGB and depth data are investigated to identify the best fusion strategy. Overall, it was observed that feature-level fusion is the most effective and can enhance the performance of deep learning-based damage segmentation algorithms by up to 25% without any appreciable increase in the computation time.
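Feature-level fusion, found to be the most effective strategy, concatenates the RGB and depth feature maps along the channel axis before further processing. A shape-level sketch with NumPy (the channel counts are illustrative, and the 1x1 convolution here is a plain random matrix standing in for learned weights):

```python
import numpy as np

# Hypothetical encoder outputs for one image: C x H x W feature maps from an
# RGB branch and a depth branch (shapes are illustrative).
rgb_feat   = np.random.rand(64, 32, 32)
depth_feat = np.random.rand(32, 32, 32)

# Feature-level fusion: concatenate along the channel axis, then a 1x1
# convolution (here a plain matrix over channels) mixes the two modalities.
fused = np.concatenate([rgb_feat, depth_feat], axis=0)   # (96, 32, 32)
w = np.random.rand(48, 96)                               # 1x1 conv weights
mixed = np.einsum('oc,chw->ohw', w, fused)               # (48, 32, 32)
print(mixed.shape)
```

Data-level fusion would instead stack depth as a fourth input channel, and decision-level fusion would merge the two branches' output probabilities.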



Physics-constrained Deep Learning for Multi-objective Inverse Design of Acoustic Wave Scattering


The control of acoustic and elastic waves via material design has long been a subject of interest, with applications such as non-destructive evaluation of structural components, biomedical devices, high-resolution imaging, radar, and remote sensing. To date, there is still a lack of powerful and efficient design methodologies, since conventional optimization-based approaches suffer from the computational burden of parameter search whenever a design query is made. In this study, a novel deep auto-encoder (DAE) based approach is proposed to design the geometry of wave scatterers that satisfy target downstream pressure fields. The proposed network consists of a geometry estimator and a DAE that provides the geometry estimator with physics constraints during the learning process. Through joint optimization, the estimation of scatterer geometry is strengthened by the latent representations of the target pressure fields learned by the DAE. Once training is finished, the design inference is quasi-instantaneous given a target 2D pressure field.



Deep Learning-based Building Attribute Estimation from Google Street View Images for Flood Risk Assessment


Floods are the most common and damaging natural disaster worldwide, both in terms of economic losses and human casualties. With the climate changes expected over the next century, including sea level rise, assessing flood risk becomes important for coastal-area residents and governments to make effective decisions about risk mitigation. Data for assessing flood risk often come from obsolete street-level surveys, because collecting comprehensive data is expensive. To close the gap between current flood risks and out-of-date data, this study proposes a deep learning-based framework that can collect comprehensive data effectively and efficiently without human-involved street surveys. The proposed framework analyzes Google Street View (GSV) images and simultaneously estimates multiple building attributes, including foundation height, foundation type, building type, and number of stories, that are necessary for assessing flood risk. Based on its effectiveness and efficiency, the proposed framework would save time and money for flood risk management.



Investigating Consistency among Frontline Employees and Bridge Inspectors Using a Simulated Virtual Reality Testbed


Effective training of technical staff in the areas of construction, maintenance, and materials is crucial for enhancing the performance as well as the service life of transportation infrastructure. While the existing training modules have been quite useful, they are typically computer-based courses where the trainees do not fully experience actual conditions in the field. On the other hand, on-site training sessions are costly and time-consuming, and traffic detouring could pose safety threats to the personnel and lead to traffic congestion. To address this issue, we propose to develop virtual reality (VR)-based modules where trainees will be engaged in realistic scenarios for rehabilitation and maintenance decision-making. Furthermore, the VR-based system will provide a testbed for carrying out a statistical assessment of the performance variability between inspectors and identify its key sources. This will potentially open the door to a reformation in the current inspection practices to enhance the consistency and reliability of the inspection process.



Design of 1D Acoustic Metamaterials Using Machine Learning and Cell Concatenation


Metamaterial systems have opened new, unexpected, and exciting paths for the design of acoustic devices that only a few years ago were considered completely out of reach. However, the development of an efficient design methodology still remains challenging due to the highly intensive design-space search required by conventional optimization-based approaches. To address this issue, this study develops two machine learning (ML) based approaches for the design of one-dimensional periodic and non-periodic metamaterial systems. For periodic metamaterials, a reinforcement learning (RL) based approach is proposed to design a metamaterial that achieves a user-defined frequency band gap. This RL-based approach surpasses conventional optimization-based methods in the reduction of computation cost. For non-periodic metamaterials, a neural network-based approach capable of learning the behavior of individual material units is presented. By assembling the neural network representations of the individual material units, a surrogate model of the whole metamaterial is employed to determine the properties of each material unit. Interestingly, the proposed approach is capable of synthesizing metamaterial assemblies under varying user demands while requiring only a one-time network training procedure.



NB-FCN: Deep Fully Convolutional Network and Parametric Data Fusion for Real-Time Crack Detection


Detecting cracks on metallic surfaces from inspection videos is challenging since the cracks are tiny and surrounded by noisy patterns in the background. While other crack detection approaches require longer processing times, this study proposes a new approach called NB-FCN that detects cracks from inspection videos in real time with high precision. An architecture design principle is introduced for the fully convolutional network (FCN) such that the FCN can be trained with image patches, without pixel-level labels. Based on naïve Bayes (NB) probability, a parametric data fusion scheme called pNB-Fusion is proposed to fuse crack score maps from multiple video frames; it outperforms other fusion schemes. The proposed NB-FCN achieves 98.6% detection average precision (AP) and requires only 0.017 seconds for a 720×540 frame and 0.1 seconds for a 1920×1080 frame. Based on its capability and efficiency, the proposed NB-FCN is a significant step toward real-time video processing for autonomous inspection.
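The naïve Bayes fusion idea can be sketched as a sum of per-frame log-likelihood ratios for one tracked location: under the independence assumption, each frame's vote contributes additively to the evidence. The probabilities below are illustrative placeholders, not the fitted pNB-Fusion parameters from the study:

```python
import numpy as np

def nb_fuse(scores, p_fa=0.1, p_hit=0.9):
    """Naive-Bayes fusion of per-frame crack scores for one tracked location.

    Each frame votes 'crack' when its score exceeds 0.5; under the naive
    independence assumption the votes combine as a sum of log-likelihood
    ratios (p_hit = detection probability, p_fa = false-alarm probability).
    """
    llr = 0.0
    for s in scores:
        if s > 0.5:
            llr += np.log(p_hit / p_fa)               # evidence for a real crack
        else:
            llr += np.log((1 - p_hit) / (1 - p_fa))   # evidence against
    return llr > 0.0                                  # accept if evidence is positive

print(nb_fuse([0.8, 0.7, 0.4, 0.9]))   # 3 positive votes out of 4 -> True
```

Fusing across frames is what suppresses the spurious single-frame detections that plague per-image crack detectors.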




Wheat Spike Blast Image Classification Using Deep Convolutional Neural Networks


Wheat blast is a threat to global wheat production, and limited blast-resistant wheat cultivars are available. Current estimations of wheat spike blast (WsB) severity rely on human assessments, but this technique can be time-consuming, labor-intensive, and subjective. Reliable visual estimations paired with images of WsB can be used to train deep convolutional neural networks (DCNN) for disease severity classification. The objective of this study was to develop an accurate and reliable DCNN model to classify WsB severity under controlled conditions. RGB images acquired from cultivars with various levels of resistance were classified into three WsB severity categories by a pathologist. The WsB severity classification model can facilitate the development of systems that substantially improve the efficiency of WsB phenotyping.



Damage Chronology for Spatiotemporal Condition Assessment of Civil Infrastructure Using Unmanned Aerial Vehicle


This study presents a computer vision-based approach for representing the time evolution of structural damage by leveraging a database of inspection images. Spatially incoherent but temporally sorted archival images captured by robotic cameras are exploited to represent the damage evolution over a long period of time. Access to a sequence of time-stamped inspection data recording the damage growth dynamics is assumed. Identification of a structural defect in the most recent inspection data set triggers an exhaustive search into the images collected during previous inspections, looking for correspondences based on spatial proximity. This is followed by a view synthesis from multiple candidate images, resulting in a single reconstruction for each inspection round. Cracks on a concrete surface are used as a case study to demonstrate the feasibility of this approach. Once the chronology is established, the damage severity is quantified at various time scales, documenting its progression through time. The proposed scheme enables the prediction of damage severity at a future point in time, providing scope for preemptive measures against imminent structural failure. On the whole, it is believed that this study will immensely benefit structural inspectors by introducing the time dimension into the autonomous condition assessment pipeline.
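The correspondence search can be sketched as a spatial-proximity query over time-stamped archival records: every past observation near the newly detected defect joins its chronology. A toy example with synthetic coordinates and dates (the radius and all values are hypothetical):

```python
import numpy as np

# Archival inspection records: (timestamp, position) of previously imaged
# defects. Coordinates and dates are synthetic placeholders.
archive = [
    ("2019-05", np.array([2.0, 3.1])),
    ("2020-06", np.array([2.1, 3.0])),
    ("2020-06", np.array([9.5, 1.2])),   # a different, distant defect
    ("2021-07", np.array([1.9, 3.2])),
]

def chronology(query, radius=0.5):
    """Collect time-sorted archival views within `radius` of the new defect."""
    hits = [(t, p) for t, p in archive
            if np.linalg.norm(p - query) < radius]
    return sorted(t for t, _ in hits)

print(chronology(np.array([2.0, 3.0])))   # ['2019-05', '2020-06', '2021-07']
```

Each matched inspection round would then contribute a synthesized view for quantifying growth between dates.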




Enhanced Community Resilience through Deep Learning-based Damage Detection for Autonomous Post-disaster Reconnaissance


Timely assessment of damage induced to buildings by an earthquake is critical for ensuring life safety, mitigating financial losses, and expediting the rehabilitation process, as well as enhancing structural resilience, where resilience is measured by an infrastructure's capacity to restore full functionality after extreme events. Since manual inspection is expensive, time-consuming, and risky, robots such as low-cost unmanned aerial vehicles (UAVs) can be leveraged as a viable alternative for rapid reconnaissance in future smart cities. Visual data captured by the sensors mounted on the robots can be analyzed, and the damage can be detected and classified autonomously. To this end, a region-based convolutional neural network (Faster RCNN) is exploited to detect four different damage types, namely, surface cracks, spalling (which includes facade spalling and concrete spalling), severe damage with exposed rebars, and severely buckled rebars. The performance of the proposed approach is evaluated on manually annotated image data collected from reinforced concrete buildings damaged in several past earthquakes, such as Nepal (2015), Taiwan (2016), Ecuador (2016), Erzincan (1992), Duzce (1999), Bingol (2003), Peru (2007), Wenchuan (2008), and Haiti (2010). Several experiments are conducted to evaluate the capabilities, as well as the limitations, of the proposed approach for earthquake reconnaissance. This work is a stepping stone toward enhancing the resilience of smart cities.




Pruning Deep Neural Networks for Efficient Edge Computing in IoT


Health monitoring of civil infrastructure is a key application of the Internet of Things (IoT), and edge computing is an important component of IoT. In this context, swarms of autonomous inspection robots that can replace current manual inspections are examples of edge devices. Incorporation of pre-trained deep learning algorithms into these robots for autonomous damage detection is a challenging problem since these devices are typically limited in computing and memory resources. This study introduces a solution based on network pruning using Taylor expansion to utilize pre-trained deep convolutional neural networks for efficient edge computing and incorporation into inspection robots. Results from comprehensive experiments on pre-trained networks and two types of prevalent surface defects (i.e., crack and corrosion) are presented and discussed in detail with respect to performance, memory demands, and the inference time for damage detection. It is shown that the proposed approach significantly enhances resource efficiency without decreasing damage detection performance.
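The Taylor-expansion criterion ranks filters by the first-order estimate of the loss change that would result from zeroing a filter's output, i.e. the magnitude of activation times gradient averaged over the batch and spatial positions. A sketch with random tensors standing in for a real forward/backward pass (shapes and the pruning count are illustrative):

```python
import numpy as np

def taylor_importance(activations, gradients):
    """First-order Taylor criterion for filter pruning.

    For each filter, importance ~ |mean over batch and spatial positions of
    activation * gradient|: the estimated change in loss if that filter's
    output were removed.
    """
    # activations, gradients: (batch, filters, H, W)
    return np.abs((activations * gradients).mean(axis=(0, 2, 3)))

rng = np.random.default_rng(1)
act  = rng.normal(size=(8, 16, 4, 4))   # stand-in feature maps
grad = rng.normal(size=(8, 16, 4, 4))   # stand-in loss gradients
scores = taylor_importance(act, grad)
prune = np.argsort(scores)[:4]          # drop the 4 least important filters
print(sorted(prune.tolist()))
```

Pruning then removes the lowest-ranked filters and fine-tunes the smaller network, trading a little retraining for a large cut in memory and inference time.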




Structural Dynamic Response Estimation and System Identification Using Deep Convolutional Neural Networks


This study presents a deep convolutional neural network (CNN) based approach to estimate the dynamic response of a linear single degree of freedom (SDOF) system, a nonlinear SDOF system, and a full-scale three-story multi-degree of freedom (MDOF) steel frame. In the MDOF system, roof acceleration is estimated from the input ground motion. Various cases of noise-contaminated signals are considered in this study, and the multilayer perceptron (MLP) algorithm serves as a reference for the proposed CNN approach. According to both the results from numerical simulations and experimental data, the proposed CNN approach is able to predict the structural responses accurately, and it is more robust against noisy data compared to the MLP algorithm. Moreover, the physical interpretation of the CNN model is discussed in the context of structural dynamics. It is demonstrated that in some special cases the convolution kernel has the capability of approximating the numerical integration operator, and that the convolution layers attempt to extract the dominant frequency signature observed in the ideal target signal while eliminating irrelevant information during the training process.
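The observation that a convolution kernel can approximate a numerical integration operator has a classical analogue: the response of a linear SDOF system is exactly the convolution of the forcing with the unit-impulse response (Duhamel's integral). A sketch with illustrative system parameters, checked against the steady-state amplitude of a damped oscillator:

```python
import numpy as np

m, c, k = 1.0, 1.0, 40.0            # illustrative mass, damping, stiffness
wn = np.sqrt(k / m)                 # natural frequency
zeta = c / (2 * m * wn)             # damping ratio
wd = wn * np.sqrt(1 - zeta ** 2)    # damped natural frequency

dt = 0.002
t = np.arange(0.0, 10.0, dt)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)   # unit-impulse response

w = 2.0                                   # forcing frequency (rad/s)
force = np.sin(w * t)
u = np.convolve(force, h)[:len(t)] * dt   # Duhamel's integral as a convolution

# Steady-state amplitude of the damped oscillator, for comparison.
amp = 1.0 / np.sqrt((k - m * w ** 2) ** 2 + (c * w) ** 2)
print(round(u[-2000:].max(), 4), round(amp, 4))
```

A trained convolution layer can learn a kernel playing the same role as `h`, which is one way to read the physical interpretation offered in the study.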




Automated Defect Classification in Sewer Closed Circuit Television Inspections Using Deep Convolutional Neural Networks


Automated interpretation of sewer closed-circuit television (CCTV) inspection videos could improve the speed, accuracy, and consistency of sewer defect reporting. Previous research has attempted to use computer vision, namely feature extraction methods, for automated classification of defects in sewer CCTV images. However, feature extraction methods use pre-engineered features for classifying images, leading to poor generalization capabilities. Due to large variations in sewer images arising from differing pipe diameters, in-situ conditions (e.g., fog and grease), etc., previous automated methods suffer from poor classification performance when applied to sewer CCTV videos. This paper presents a framework that uses an ensemble of binary deep convolutional neural networks (CNNs) to classify multiple defects in sewer CCTV images. A prototype system was developed to classify root intrusions, deposits, and cracks. The CNNs were trained and tested using 12,000 images collected from over 200 pipelines. The average testing accuracy, precision, and recall were 86.2%, 87.7%, and 90.6%, respectively, demonstrating the viability of this approach in the automated interpretation of sewer CCTV videos.
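The ensemble-of-binary-networks design lets a single image carry several defect labels at once: each binary classifier votes independently, and every defect whose score clears a threshold is reported. A toy sketch with illustrative scores standing in for the CNN outputs:

```python
# Ensemble decision: one binary classifier per defect type; an image can
# carry multiple labels. Scores below are illustrative classifier outputs.
defects = ["root intrusion", "deposit", "crack"]
scores = {"root intrusion": 0.92, "deposit": 0.15, "crack": 0.71}

def classify(scores, threshold=0.5):
    """Each binary net votes independently; report all defects above threshold."""
    return [d for d in defects if scores[d] >= threshold]

print(classify(scores))   # ['root intrusion', 'crack']
```

Compared with one multi-class network, this decomposition also allows each binary net to be retrained separately as new labeled data for its defect type arrives.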




3D Dynamic Displacement-Field Measurement For Structural Health Monitoring Using Inexpensive RGB-D Based Sensor


This study presents a comprehensive experimental and computational study to evaluate the performance envelope of a representative RGB-D sensor (the first-generation Kinect sensor), with the aim of assessing its suitability for the class of problems encountered in the structural dynamics field, where reasonably accurate information is needed about evolving displacement fields (as opposed to a few discrete locations) that undergo simultaneous dynamic planar translational motion with significant rotational (torsional) components. This study investigated the influence of key system parameters of concern in selecting an appropriate sensor for such structural dynamic applications, such as amplitude range, spectral content of the dynamic displacements, location and orientation of sensors relative to the target structure, fusing of measurements from multiple sensors, sensor noise effects, and rolling-shutter effects. The calibration results show that if the observed displacement field generates discrete (pixel) sensor measurements with sufficient resolution (observed displacements of more than 10 mm) beyond the sensor noise floor, then the subject sensors can typically provide reasonable accuracy for translational motion (about 5%) when the frequency range of the evolving field is within about 10 Hz. However, the expected error for torsional measurements is around 6% for static rotation and 10% for dynamic rotation, for measurements greater than 5°.




NB-CNN: Convolutional Neural Network and Naïve Bayes Data Fusion for Deep Learning-based Crack Detection


Detecting cracks on metallic surfaces is a challenging task since the cracks are tiny and noisy patterns exist on the components' surfaces. This study proposes a deep learning framework, called NB-CNN, based on a convolutional neural network (CNN) and a naïve Bayes data fusion scheme, to analyze individual video frames for crack detection, while a novel data fusion scheme aggregates the information extracted from each video frame to enhance the overall performance and robustness of the system. To this end, a CNN is proposed to detect crack patches in each video frame, while the proposed data fusion scheme maintains the spatiotemporal coherence of cracks in videos and the naïve Bayes decision making discards false positives effectively. The proposed framework achieves a 98.3% hit rate at 0.1 false positives per frame, which is significantly higher than state-of-the-art approaches.




Evaluation of Convolutional Neural Networks for Deep Learning-based Corrosion Detection


Corrosion is a major defect in structural systems that has a significant economic impact and can pose safety risks if left untended. Currently, an inspector visually assesses the condition of a structure to identify corrosion. This approach is time-consuming, tedious, and subjective. Robotic systems, such as unmanned aerial vehicles, paired with computer vision algorithms have the potential to perform autonomous damage detection that can significantly decrease inspection time and lead to more frequent and objective inspections. This study evaluates the use of convolutional neural networks (CNNs) for corrosion detection. A CNN learns the appropriate classification features that in traditional algorithms were hand-engineered. Eliminating the dependence on prior knowledge and human effort in designing features is a major advantage of CNNs. This study presents different CNN-based approaches for corrosion assessment on metallic surfaces. The effects of different color spaces, sliding window sizes, and CNN architectures are discussed. To this end, the performance of two pretrained state-of-the-art CNN architectures as well as two proposed CNN architectures is evaluated, and it is shown that CNNs outperform state-of-the-art vision-based corrosion detection approaches that are developed based on texture and color analysis using a simple multilayered perceptron network (MLP). Furthermore, it is shown that one of the proposed CNNs significantly improves the computational time in contrast with state-of-the-art pretrained CNNs while maintaining comparable performance for corrosion detection.




Review of Reconfigurable Swarm Robots for Structural Health Monitoring


Recent advancements in robotic systems have led to the development of reconfigurable swarm robots (RSR) that can change their shape and functionality dynamically, without any external intervention. RSR have the advantages of being modular, on-site reconfigurable, multifunctional, incrementally assemble-able, reusable, fault-tolerant, and even repairable in orbit. Newly developed reconfigurable robots are expected to bring a radical change in the prevailing structural health monitoring techniques, thus augmenting the efficiency, accuracy, and affordability of inspection operations. This study presents a holistic review of the previous studies and state-of-the-art technologies in the field of RSR, and argues that RSR offer great potential advantages from the perspective of monitoring and assessment of civil and mechanical systems. A roadmap for future research has also been outlined based on the limitations of the current methods and anticipated needs of future inspection systems.




Robust 3D Scene Reconstruction in the Presence of Misassociated Features


Georeferencing through aerial imagery has several applications, including remote sensing, real-time situational mission awareness, environmental monitoring, rescue and relief, map generation, and autonomous hazard avoidance, landing, and navigation of Unmanned Aerial Vehicles (UAV). In aerial imagery, Structure from Motion (SfM) is often used for 3D point reconstruction (i.e., ground locations) and for camera pose estimation (i.e., airborne position and orientation) from a set of geometrically matched features between 2D images. We introduce an adaptive resection-intersection bundle adjustment approach that refines the 3D points and camera poses separately after the "gross" misassociations are removed by an outlier rejection algorithm. In each iteration, the proposed approach identifies the potential misassociated features independently in the resection as well as the intersection stage, and these potential outliers, contrary to previous studies, are reexamined at later iterations. In this way, the maximum number of inlier matched features is retained.




Color and Depth Data Fusion for Dynamic Displacement-Field Measurement


While there are several sensors for direct displacement measurements at a specific point in a uniaxial direction or of multi-component deformations, there are only very limited, and relatively expensive, methodologies for obtaining the three-dimensional components of a displacement of a dynamically evolving (i.e., not pseudo-static) deformation field. This study reports the results of a comprehensive experimental study to assess the accuracy and performance of a class of inexpensive vision-based sensors (i.e., RGB-D sensors) for acquiring dynamic measurements of the displacement field of a test structure. It is shown that the class of sensors under discussion, when operated within the performance envelope discussed in this paper, can provide, with acceptable accuracy, a very convenient and simple means of quantifying three-dimensional displacement fields that are dynamically changing at the relatively low-frequency rates typically encountered in the structural dynamics field.




Texture-based Video Processing Using Bayesian Data Fusion for Crack Detection


Regular inspection of the components of nuclear power plants is important to improve their resilience. Prevalent automatic crack detection algorithms may fail to detect cracks on metallic surfaces because such cracks are typically very small and have low contrast. Moreover, the existence of scratches, welds, and grind marks leads to a large number of false positives when state-of-the-art vision-based crack detection algorithms are used. In this study, a novel crack detection approach is proposed based on local binary patterns (LBP), support vector machines (SVM), and Bayesian decision theory. The proposed method aggregates the information obtained from different video frames to enhance the robustness and reliability of detection.
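Local binary patterns encode texture by comparing each pixel with its eight neighbours; the resulting 8-bit codes are the features fed to the SVM. A minimal NumPy implementation for interior pixels (the neighbour ordering is one common convention, not necessarily the one used in the study):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern for each interior pixel.

    Neighbours brighter than (or equal to) the centre contribute a 1-bit;
    the 8 bits form a texture code in [0, 255].
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

img = np.array([[9, 9, 9],
                [0, 5, 9],
                [0, 0, 0]])
print(lbp_codes(img))   # a single code for the centre pixel
```

Histograms of these codes over image patches form the texture descriptor that the SVM classifies, with the Bayesian stage fusing decisions across frames.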




Multimodal Sensor Fusion for Autonomous Data Acquisition of Road Surfaces


In this study, the development, evaluation, calibration, and field application of a novel, relatively inexpensive, vision-based sensor system, built from commercially available off-the-shelf devices, are performed to enable the autonomous data acquisition of road surface conditions. It is shown that the proposed multi-sensor system, by capitalizing on powerful data-fusion approaches of the type developed in this study, can provide a robust, cost-effective road surface monitoring system with sufficient accuracy to satisfy typical maintenance needs with regard to the detection, localization, and quantification of potholes and similar deterioration features, where the measurements are acquired via a vehicle moving at normal speeds on typical city streets. The proposed system is ideally suited for crowdsourcing, where several vehicles would be equipped with this cost-effective system for more frequent data collection of road surfaces.




Microcrack Assessment on Reactor Internal Components of Nuclear Power Plants


Ageing power facilities are increasingly susceptible to the onset of damage related to long exposure to stress, radiation, elevated temperatures, and environmental conditions. One failure mechanism of particular concern is the onset of stress corrosion cracking. Currently, a technician manually measures the crack thickness at a few points along a microcrack in a microscopic image, and the results are quantified by the Root Mean Square (RMS) of these measurements. In this study, a vision-based methodology is proposed for accurate quantification of microcracks that provides a thickness measurement for each pixel along the crack centreline, offering more comprehensive insight into the condition of a microcrack. A region-growing method is used for segmenting microcracks from complex backgrounds. The fast marching method is used to accurately estimate the centreline of microcracks, and the microcrack thicknesses are then automatically computed along the lines orthogonal to the crack centreline.




Enhanced Crack Width Measurement


In this study, a new contactless crack quantification methodology, based on computer vision and image processing concepts, is introduced. In the proposed approach, a segmented crack is correlated with a set of proposed strip kernels. For each centerline pixel, the minimum computed correlation value is divided by the width of the corresponding strip kernel to compute the effective crack thickness. To obtain more accurate results, an algorithm is proposed to compensate for perspective errors.
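The underlying measurement task is to read off a thickness at every centerline pixel along cuts orthogonal to the crack. A deliberately simplified stand-in for the strip-kernel approach, using an axis-aligned synthetic crack where the orthogonal cut is simply a row of the mask, and a hypothetical pixel-to-millimetre calibration in place of the paper's perspective correction:

```python
import numpy as np

# A synthetic binary crack mask: a vertical crack 3 pixels wide. For this
# axis-aligned example the direction orthogonal to the centerline is just
# the image row.
mask = np.zeros((10, 12), dtype=bool)
mask[:, 5:8] = True

def thickness_profile(mask):
    """Per-centerline-pixel thickness: crack pixels along each orthogonal cut."""
    return mask.sum(axis=1)

px_per_mm = 2.0   # hypothetical calibration from a perspective correction
print(thickness_profile(mask) / px_per_mm)   # 1.5 mm at every centerline pixel
```

For arbitrarily oriented cracks, the strip kernels serve exactly this role: each kernel probes a candidate width and orientation, and the best-matching kernel yields the local thickness.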




Unsupervised Defect Detection for Autonomous Pavement Condition Assessment


Current pavement condition assessment procedures are extremely time-consuming and laborious; in addition, they pose safety threats to the personnel involved. In this study, an RGB-D sensor is used to detect and quantify defects in pavements. The sensor consists of an RGB color camera together with an infrared projector and camera that act as a depth sensor. An approach that requires no training is proposed to interpret the data sensed by this inexpensive device, giving the system the potential to serve as an autonomous, cost-effective condition assessment tool for road surfaces. Various road conditions, including patching, cracks, and potholes, are autonomously detected and, most importantly, quantified using the proposed approach. Several field experiments were carried out to evaluate the capabilities, as well as the limitations, of the proposed system. GPS information is incorporated to localize the detected defects.
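One common training-free way to exploit the depth channel is to fit a reference road plane and flag pixels that deviate from it. The sketch below shows that idea on a synthetic depth map; the plane-fit rule, tolerance, and data are our assumptions, not the study's exact pipeline, which also quantifies and GPS-tags each detection.

```python
import numpy as np

def detect_defects(depth, tol=0.01):
    """Fit a reference plane z = a*x + b*y + c to the depth map by least
    squares, then flag pixels whose residual exceeds `tol` metres:
    positive residuals (surface farther than the plane) indicate
    pothole-like defects. Minimal unsupervised sketch."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeff, *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)
    residual = depth - (A @ coeff).reshape(h, w)
    return residual > tol

depth = np.full((6, 6), 1.50)        # flat road, 1.5 m from the sensor
depth[2:4, 2:4] = 1.55               # 5 cm deep pothole
mask = detect_defects(depth)
print(int(mask.sum()))               # 4 pothole pixels
```

Negative residuals would flag raised features such as patches; thresholding both signs separates the two defect families without any training data.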




Color and Texture Analysis for Corrosion Detection


Corrosion is a critical defect in structural systems that can have catastrophic effects if neglected. In this study, we evaluated several parameters that affect the performance of color wavelet-based texture analysis algorithms for detecting corrosion. Furthermore, an approach is proposed that utilizes depth perception for corrosion detection, improving the reliability of the detection algorithm. The integration of depth perception with pattern classification algorithms, which had not previously been reported in published studies, is part of the contribution of this study. Several quantitative evaluations are performed to scrutinize the performance of the investigated approaches.
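The core texture descriptor behind wavelet-based analysis is the energy of the detail sub-bands: rough, corrosion-like surfaces concentrate far more energy there than smooth ones. The sketch below computes a one-level Haar detail energy for a single channel; in a full system such features would be extracted per color channel and fed to a classifier. The Haar choice and synthetic patches are our illustrative assumptions.

```python
import numpy as np

def haar_energy(channel):
    """One-level 2-D Haar decomposition of one image channel; returns
    the total energy of the horizontal, vertical, and diagonal detail
    sub-bands (the approximation band is discarded). Image side
    lengths are assumed even."""
    c = np.asarray(channel, dtype=float)
    tl, tr = c[0::2, 0::2], c[0::2, 1::2]
    bl, br = c[1::2, 0::2], c[1::2, 1::2]
    h = (tl + tr - bl - br) / 4          # horizontal detail
    v = (tl - tr + bl - br) / 4          # vertical detail
    d = (tl - tr - bl + br) / 4          # diagonal detail
    return float((h**2 + v**2 + d**2).sum())

# Rough (corrosion-like) patch vs. perfectly smooth patch:
rng = np.random.default_rng(0)
rough = rng.uniform(0, 255, (16, 16))
smooth = np.full((16, 16), 128.0)
print(haar_energy(rough) > haar_energy(smooth))   # True
```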




Contactless Crack Quantification


We developed vision-based crack quantification approaches that utilize depth perception to quantify crack thickness and, unlike most previous studies, require no scale attachment in the region under inspection. This makes the approaches ideal for incorporation into autonomous or semi-autonomous mobile inspection systems, including unmanned aerial vehicles. Guidelines are presented for optimizing the acquisition and processing of images, enhancing the quality and reliability of the damage detection approach and allowing even the slightest cracks to be captured (e.g., detection of 0.1 mm cracks from a distance of 20 m) in realistic field applications where the camera-object distance is not controllable.
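The reason depth perception can replace a physical scale attachment is the pinhole-camera relation: one pixel on the sensor subtends `(pixel_pitch / focal_length) * depth` on the object plane. The sketch below applies that relation; the parameter values are illustrative assumptions, not figures from the study.

```python
def crack_width_mm(width_px, depth_mm, focal_mm, pixel_pitch_mm):
    """Convert a crack width measured in pixels to millimetres via the
    pinhole-camera model. All parameter names and example values are
    illustrative assumptions."""
    mm_per_px = pixel_pitch_mm * depth_mm / focal_mm   # object-plane pixel size
    return width_px * mm_per_px

# Hypothetical setup: 2-px-wide crack, 2 m away, 16 mm lens, 2 µm pixel pitch.
print(crack_width_mm(2, 2000, 16, 0.002))   # 0.5 (mm)
```

The same relation explains why resolving a 0.1 mm crack at 20 m demands a long focal length or subpixel measurement: at fixed optics, the object-plane pixel size grows linearly with distance.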




Crack Detection through Incorporation of Depth Perception


Inspired by human vision, where depth perception allows a person to estimate an object's size from its distance, we introduced the integration of depth perception into image-based crack detection algorithms, improving the performance of crack detection and quantification systems. Whereas previously proposed crack detection techniques used fixed parameters, this system adjusts the crack segmentation parameters automatically based on the sensed depth. This makes the approach practical for field applications where the camera-object distance cannot be controlled, such as when unmanned aerial vehicles are used for data collection. Depth perception is obtained using 3D scene reconstruction.
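A minimal sketch of depth-adaptive parameters: pick the pixel size of a segmentation operator so that it always covers the same physical size on the surface, whatever the camera-object distance. The specific rule, function name, and camera parameters below are our assumptions for illustration.

```python
def kernel_size_px(target_mm, depth_mm, focal_mm, pixel_pitch_mm):
    """Depth-adaptive parameter rule (hypothetical): choose a kernel
    size in pixels that spans `target_mm` on the inspected surface,
    using the pinhole relation for the object-plane pixel size."""
    mm_per_px = pixel_pitch_mm * depth_mm / focal_mm
    return max(1, round(target_mm / mm_per_px))

# A 5 mm operator with a 16 mm lens and 2 µm pixel pitch:
print(kernel_size_px(5, 1000, 16, 0.002))   # 40 px at 1 m
print(kernel_size_px(5, 4000, 16, 0.002))   # 10 px at 4 m
```

The kernel shrinks as the camera moves away, which is exactly the adjustment a fixed-parameter detector cannot make.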




Multi-Image Stitching for Evaluating Change Evolution and Inspection


This study presents and evaluates the underlying technical elements for an integrated inspection software tool based on inexpensive digital cameras. Digital cameras are mounted on a structure (e.g., a bridge) and can zoom and rotate in three directions (similar to traffic cameras). They are remotely controlled by an inspector, allowing the structure's condition to be assessed visually from the captured images. The proposed system gives an inspector the ability to compare the current visual condition of a structure with its former condition: if the inspector notices a defect in the current view, he or she can request a reconstruction of the same view from images previously captured and automatically stored in a database. Furthermore, by building databases of periodically captured images of a structure, the system allows an inspector to evaluate the evolution of changes by simultaneously comparing the structure's condition at different time periods.
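Reconstructing "the same view at an earlier date" can be framed as retrieving the archived image whose camera pose is closest to the current pan/tilt/zoom. The record schema, field names, and distance metric below are our assumptions, sketching one way such a database lookup might work.

```python
import math

def nearest_view(database, pan, tilt, zoom):
    """Return the archived record whose pan/tilt/zoom pose is closest
    (Euclidean distance) to the current camera pose, so the live view
    can be compared with the same view at an earlier date.
    Illustrative schema; a real system would also match zoom scale
    and register the images."""
    return min(database,
               key=lambda rec: math.dist((rec["pan"], rec["tilt"], rec["zoom"]),
                                         (pan, tilt, zoom)))

db = [
    {"date": "2023-05-01", "pan": 10, "tilt": 5, "zoom": 2, "file": "a.jpg"},
    {"date": "2024-05-01", "pan": 85, "tilt": 0, "zoom": 1, "file": "b.jpg"},
]
print(nearest_view(db, 12, 4, 2)["file"])   # a.jpg
```

Repeating the lookup across several archive periods yields the image series needed to view the evolution of a defect side by side.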



Copyright © 2014-2024 Smart Informatix Laboratory, Purdue University. All rights reserved.