Abstract:The urban built environment is the manufactured environment in which human beings live. Urban built environment stocks refer to the quantity of materials (e.g., concrete, steel, and copper) accumulated in buildings and infrastructure. Revealing the spatial distribution of these stocks has emerged as a new direction for digital city construction: it helps in understanding urban development patterns and in managing urban resources and waste, both of which are essential for developing an urban circular economy and realizing sustainable urban development. It is therefore necessary to review and organize the current spatial calculation methods for built environment stocks. This study introduces the theoretical basis and development status of three methods for the spatial calculation of urban built environment stocks: the top-down method, the bottom-up method, and the remote sensing calculation method. The advantages and limitations of these models are elaborated together with their applications and data availability. The top-down approach has a complete set of theoretical foundations and algorithmic models and performs well in large-scale material flow analysis; however, because it cannot achieve high spatial resolution, it is not suitable for analyzing development within cities. By contrast, the bottom-up method permits fine-grained stock estimation by gathering cadastral-level physical measurements of buildings and infrastructure together with the associated material composition indicators. However, it is labour-intensive, and its scope is often restricted to city-level or smaller geographical regions. As for remote sensing calculation, previous studies established a linear regression relationship between nighttime light radiation intensity and the built environment stocks of the study areas.
However, background noise and the radiation saturation effect in nighttime light remote sensing data degrade the reliability of quantitative analysis, so stock data with high spatial resolution are difficult to acquire. These three traditional methods thus struggle to balance large scale against high spatial resolution. In the era of big geographic data, however, new data sources have opened new research directions for stock calculation. Geo big data and Earth observation data are essential to the development of earth science, environmental science, remote sensing science, and geographic information science. Combining these wide-coverage, high-precision, and rapidly updated data with machine learning methods has already been widely applied in poverty surveys and energy consumption estimation. Against this background, this paper proposes a framework that combines big geographic data and machine learning for stock calculation. We envision an end-to-end method that estimates gridded stocks directly from publicly available information with minimal manual involvement. However, geospatial heterogeneity and the black-box nature of deep learning may affect how well the model transfers across regions. Despite these drawbacks, such a transferable model has the potential for large-scale, high-resolution stock calculation in future work.
Keywords:urban built environment stock;urban mining;urban metabolism;sustainable development;machine learning
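As an illustration of the nighttime-light approach summarized in the abstract above, the following is a minimal sketch of the linear regression between nighttime light radiance and built environment stock per grid cell. All numbers are synthetic and the units are illustrative only, not taken from any cited study.

```python
import numpy as np

def fit_stock_regression(ntl_radiance, stock_tonnes):
    """Least-squares fit of stock = a * radiance + b (illustrative model)."""
    a, b = np.polyfit(ntl_radiance, stock_tonnes, 1)
    return a, b

def predict_stock(ntl_radiance, a, b):
    """Apply the fitted linear model to new radiance values."""
    return a * np.asarray(ntl_radiance) + b

# Synthetic example: radiance (nW/cm^2/sr) vs. material stock (t) per grid cell
ntl = np.array([5.0, 12.0, 20.0, 33.0, 41.0])
stock = np.array([110.0, 260.0, 420.0, 680.0, 840.0])
a, b = fit_stock_regression(ntl, stock)
```

The abstract notes that saturation and background noise limit this relationship at high spatial resolution; in practice the fit would be done per region on calibrated, noise-filtered radiance composites.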
Abstract:Earthquakes are sudden natural disasters that cause great loss of human life and property. Starting an emergency response and rapidly assessing the disaster situation in the first moments after an earthquake can effectively reduce the damage caused. Space Earth observation technology plays an important role in earthquake emergencies because of its macro perspective, speed, and wide coverage. With the continuous development of space Earth observation and data processing technologies, scholars worldwide have conducted in-depth research on extracting earthquake damage information during remote sensing emergency response. In actual emergency operations, however, no universal workflow exists. To deepen remote sensing earthquake emergency work and improve its real operational efficiency, this paper analyzes the problems in remote sensing emergency work and outlines future development trends, so that remote sensing technology can be applied efficiently to earthquake prevention and disaster reduction. The timeliness of the remote sensing response affects, to a certain extent, the efficiency and accuracy of decision-making, which is vital in emergency assistance. At different stages of a disaster, the requirements on remote sensing images, such as spatial and temporal resolution, vary. Focusing on the application of remote sensing technology to earthquake emergency investigation, we analyze its role in the earthquake emergency process by means of a literature review.
Classifying the literature by optical remote sensing images and SAR images, we analyze and introduce the methods of earthquake damage investigation that use different types of remote sensing images, such as supervised classification, change detection, and deep learning. In recognizing earthquake damage information, high-resolution optical remote sensing images are usually exploited only through their spectral features, while spatial distribution and geometric features are ignored. Traditional pixel-based classification methods are prone to salt-and-pepper noise in the classification results, so object-based, multi-feature, knowledge-driven classification techniques need to be developed on this basis. The combination of object-oriented analysis and knowledge integration is therefore likely to become the mainstream method for earthquake disaster extraction and evaluation. For SAR images, seismic damage evaluation from single-temporal images has mainly focused on block-averaged damage assessment, with little research on the imaging mechanism, feature behavior, and information extraction of individual buildings. After an earthquake, damage targets are complex and diverse, so fully automatic extraction of earthquake damage targets from remote sensing images still requires continued improvement and development of algorithms. Traditional change detection methods are mainly designed for the same type of sensor; the effective combination and interactive use of remote sensing image data from different sources is another future development trend. The characteristic information of damage targets with different degrees of damage also differs across remote sensing images.
The feature-based, object-oriented change detection method can make full use of the feature information of the image and effectively improve the accuracy of seismic damage identification. The focus of multi-temporal image change detection is therefore the effective application of spatial features at the object level. We then summarize the existing problems and discuss the main difficulties faced by remote sensing earthquake emergency response, together with solutions, from three aspects. Combining real-time orbit data with practical work, the paper analyzes the future development of remote sensing technology in earthquake prevention and disaster reduction in terms of multi-technology intelligent disaster identification, multi-source data collaborative analysis, and the development of agile satellites. In this way, we aim to enable remote sensing monitoring to provide dynamic, real-time, and continuous spatial information emergency services, and to accelerate the response speed, refinement, and operational application of earthquake emergency work. This research can serve as a reference for scientific research on, and operational application of, multi-source remote sensing technology in earthquake emergency investigation, and can help remote sensing technology play a more effective role in earthquake prevention and disaster reduction.
Keywords:remote sensing;earthquake;emergency investigation;emergency special products;earthquake damage information extraction
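The change-detection step described in the abstract above can be sketched very simply. This is a generic pre-/post-event image differencing with a statistical threshold, using fully synthetic images; it stands in for the far more elaborate object-oriented methods the abstract discusses.

```python
import numpy as np

def difference_change_map(pre, post, k=2.0):
    """Pixel-wise change detection by image differencing (illustrative).

    Flags pixels whose absolute difference exceeds mean + k*std of the
    difference image -- a simplified stand-in for the change-detection
    step described in the abstract.
    """
    diff = np.abs(post.astype(float) - pre.astype(float))
    threshold = diff.mean() + k * diff.std()
    return diff > threshold

# Synthetic pre-/post-event images: a 3x3 patch of strong radiometric change
pre = np.full((20, 20), 100.0)
post = pre.copy()
post[5:8, 5:8] = 180.0   # e.g., collapsed structures altering reflectance
change = difference_change_map(pre, post, k=2.0)
```

Pixel-wise differencing like this is exactly what produces the salt-and-pepper noise the abstract criticizes; object-level methods group pixels into segments before comparison.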
Abstract:Measuring temperature and water vapor profiles with high accuracy and vertical resolution in the Upper Troposphere and Lower Stratosphere (UTLS, 5—35 km in height) is significant for improved monitoring of changes in the free atmosphere, and simultaneous measurement with the laser occultation method is an important supplement to existing sounding techniques. Here we introduce an algorithm to retrieve temperature and water vapor profiles from laser occultation data, describing in detail the temperature retrieval from dual oxygen absorption wavelength laser occultation data. This paper studies the cooperative laser occultation measurement and retrieval of temperature and water vapor molecular number density, with emphasis on deriving temperature from the detected data. The method selects two characteristic absorption peaks corresponding to different transition energy levels from the near-infrared oxygen absorption spectrum, chooses one absorption line near each peak, and performs occultation measurement with the dual absorption wavelength lasers corresponding to the two absorption lines and with a laser at a reference wavelength. The cooperative retrieval of temperature and water vapor proceeds in two steps: first, the temperature profile is retrieved from the occultation data and the pressure profile is derived from the temperature profile together with the a priori pressure at the 5 km reference height; second, the water vapor molecular number density is retrieved from the single water vapor absorption wavelength and reference wavelength laser occultation data. We furthermore simulate the occultation process with the selected oxygen and water vapor absorption wavelength laser signals and perform the retrieval from the simulated transmittance data.
The error terms and their propagation in the retrieval process are analyzed, and the influence of each error term is presented through the simulation results. These results show that the temperature retrieval errors are generally smaller than 1.05 K and the water vapor molecular number density retrieval errors are generally smaller than 4%, which demonstrates the feasibility of the collaborative temperature and water vapor retrieval method.
Keywords:laser occultation;inverse Abel integral transform;atmospheric temperature and pressure;double absorption wavelength;atmospheric water vapor
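The core idea of using two absorption lines with different lower-state energies, as in the abstract above, is that their strength ratio depends only on temperature through the Boltzmann factor. A minimal two-line sketch follows; the lower-state energies are illustrative placeholders, not the lines actually selected in the paper, and the partition-function and stimulated-emission corrections are deliberately omitted for clarity.

```python
import numpy as np

C2 = 1.4387769  # second radiation constant, cm*K

def line_ratio(T, E1, E2, T0=296.0, R0=1.0):
    """Ratio of two absorption-line strengths at temperature T
    (Boltzmann factor only; partition-function terms omitted)."""
    return R0 * np.exp(-C2 * (E1 - E2) * (1.0 / T - 1.0 / T0))

def retrieve_temperature(R, E1, E2, T0=296.0, R0=1.0):
    """Invert the measured two-line ratio for temperature."""
    return 1.0 / (1.0 / T0 - np.log(R / R0) / (C2 * (E1 - E2)))

# Illustrative lower-state energies (cm^-1) of two hypothetical O2 lines
E1, E2 = 1085.0, 79.6
R = line_ratio(250.0, E1, E2)   # "measured" ratio simulated at 250 K
T_ret = retrieve_temperature(R, E1, E2)
```

In the actual retrieval the ratio would come from absorbances obtained after inverse Abel transformation of the transmittance profiles, which is where the error propagation analyzed in the abstract enters.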
Abstract:Alongside the sustained and rapid development of China’s economy in recent years, the continuous growth of the national economy, and the trend of urbanization, China’s air quality has been declining. The resulting frequent haze pollution has seriously affected people’s health. As the main pollutant in haze, PM2.5 has become a major public concern. Studies have shown that people exposed to PM2.5 for a long time are likely to suffer from respiratory and cardiovascular diseases, and may even die prematurely. Effective and accurate monitoring and evaluation of PM2.5 are therefore of particular importance. According to national development requirements, Sichuan Province and Chongqing City (hereinafter, the Sichuan-Chongqing Area) will jointly build the Chengdu-Chongqing twin-city economic circle, so studying and evaluating the region’s atmospheric environmental quality is urgent. However, due to the unique topography and meteorological conditions of the Sichuan-Chongqing Area, estimating near-surface PM2.5 concentration by satellite remote sensing is difficult. Considering the complicated topography, an elevation-based vertical correction of the regional Aerosol Optical Depth (AOD) was introduced. Moreover, considering the large differences in humidity and pollutant emission across months, a hygroscopic correction factor grid was constructed for humidity correction by fitting the hygroscopic growth factor month by month and site by site. First, the AOD retrieval accuracy of the Japanese geostationary Himawari-8 satellite was verified in this study. Second, the vertical and humidity correction methods were used to estimate the hourly near-surface PM2.5 concentration from 09:00 to 16:00 in 2017—2018 over the Sichuan-Chongqing Area, and the accuracy was verified.
The areal PM2.5 concentration supports detailed hour-by-hour analysis of pollutant generation, migration, and dissipation, providing effective data support for regional pollution prevention and control. We arrive at the following conclusions: (1) Owing to the large differences in the hygroscopic growth characteristics of aerosol particles across regions and time ranges, four stations, namely, Ziyuan Station in Chengdu City, Rongxian Administrative Center Station in Zigong City, Shangqing Temple Station in Chongqing City, and Jingtan 2nd Road Station in Chongqing City, were selected to analyze the temporal and spatial differences in aerosol hygroscopic growth. Results show that the aerosol hygroscopic growth capacity differs across sites within the same month and across months at the same site, and that relative humidity varies greatly across sites and times. Therefore, the hygroscopic correction factor grid constructed by fitting the hygroscopic growth factor month by month and site by site must be used in the humidity correction when estimating PM2.5. (2) Compared with the correlation between PM2.5 and the uncorrected AOD, the correlation coefficient between PM2.5 and the corrected AOD improved significantly after vertical and humidity correction, increasing from 0.12—0.45 to 0.32—0.69. The correlation coefficient between the satellite-estimated and ground-based observed values is relatively high (r=0.82), with an RMSE of 18.64 μg/m3, a good estimation result. A comparison across hourly and monthly scales shows that the afternoon estimates are better than the morning ones, and the autumn and winter estimates are better than those of spring and summer.
(3) According to the annual average spatial distribution of PM2.5 in 2017 and 2018, the PM2.5 concentration in the Sichuan-Chongqing Area is highest in winter and lowest in summer, with spring and autumn in between. The vertical and humidity correction methods estimate the PM2.5 mass concentration in the Sichuan-Chongqing Area in 2017 and 2018 with high accuracy and good results. However, topography and aerosol components are not fully considered in the estimation process, so the experimental results can still be improved.
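The two-step AOD correction described in the abstract above can be sketched schematically. Everything below is an assumption-laden illustration: the hygroscopic growth form f(RH) = (1-RH)^(-g) is one common parameterization, and the exponential vertical scaling, the scale height, the growth exponent, and the empirical coefficient k are all placeholder values, not the study's fitted parameters.

```python
import numpy as np

def hygroscopic_factor(rh, g=0.6):
    """Hygroscopic growth f(RH) = (1 - RH)^(-g); in the study, g is fitted
    month by month and site by site (the value here is illustrative)."""
    return (1.0 - rh) ** (-g)

def estimate_pm25(aod, elevation_m, rh, scale_height_m=2000.0, k=120.0):
    """Illustrative two-step correction: an elevation-based vertical
    correction scales AOD toward near-surface extinction assuming an
    exponential profile, then the humidity correction converts ambient
    extinction to dry extinction before applying an empirical coefficient."""
    aod_vertical = aod * np.exp(elevation_m / scale_height_m)  # assumption
    aod_dry = aod_vertical / hygroscopic_factor(rh)
    return k * aod_dry  # k: empirical AOD-to-PM2.5 coefficient (illustrative)
```

The sketch only shows the direction of each correction: higher elevation increases the corrected value, and higher relative humidity decreases the inferred dry-particle concentration.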
Abstract:Forest height is an important indicator of the quality and quantity of forest resources. Polarimetric SAR interferometry (PolInSAR) has been demonstrated and validated in recent years as a promising technique for forest height inversion and mapping, and airborne and spaceborne PolInSAR data have been applied in a variety of temperate, boreal, and tropical forests. However, because forest scattering mechanisms differ across microwave wavelengths and each estimation algorithm rests on a different theoretical basis, uncertainties usually arise when estimating and mapping forest height from PolInSAR data. To clarify the uncertainties resulting from the choice of microwave wavelength and inversion algorithm, this study examined the effects of four selected inversion algorithms and four typical microwave wavelengths using a simulated forest scene. The four inversion algorithms are the Polarimetric Phase Center height estimation method (PPC), the Complex Coherence Phase Center Differencing algorithm (CCPCD), the Coherence Amplitude Inversion method (CAI), and a hybrid inversion method that uses both phase and coherence information; the microwave bands are P, L, C, and X. The results demonstrate that both the wavelength and the estimation algorithm clearly affect the performance of forest height estimation from PolInSAR data. First, at a given wavelength, the selected estimation algorithm directly affects the accuracy of the forest height estimates. The CAI estimates agree well with the average forest height of the simulated scene and show the best performance at all four bands, but their degree of dispersion and ratio of uncertainty are also the highest among the four inversion algorithms.
Second, the microwave wavelength has obvious effects on the performance of the four inversion algorithms. It has no obvious effect on the CAI method but strongly affects the hybrid inversion method, whose estimates perform better at long wavelengths (P- and L-band) and worse at short wavelengths (C- and X-band). Moreover, the results reveal a large underestimation by the CCPCD method, which typically uses the HV-channel phase as the canopy scattering phase center and the phase difference between the HH and VV channels as the surface scattering phase center to retrieve forest height. The uncertainties of the estimates depend on the wavelength and algorithm selections: short wavelengths with the CCPCD method and long wavelengths with the PPC method show better performance and the lowest uncertainties in forest height estimation, whereas the CAI method shows the highest uncertainties at the P, L, C, and X bands.
Keywords:PolInSAR;forest height;uncertainty;inversion;simulated forest scene
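The phase-center differencing idea behind the PPC/CCPCD families in the abstract above reduces, in its simplest DEM-differencing form, to dividing the phase separation of the canopy and surface phase centers by the vertical wavenumber. A minimal sketch with illustrative values:

```python
import numpy as np

def ppc_height(phase_canopy, phase_ground, kz):
    """Phase-center differencing: forest height as the phase difference
    between a canopy-dominated phase center (e.g., the HV channel) and a
    surface-dominated phase center, divided by the vertical wavenumber kz.
    This is the simplified DEM-differencing form, not the full algorithm."""
    dphi = np.angle(np.exp(1j * (phase_canopy - phase_ground)))  # wrap to (-pi, pi]
    return dphi / kz

kz = 0.1  # rad/m, illustrative vertical wavenumber of the interferometric pair
h = ppc_height(phase_canopy=1.8, phase_ground=0.3, kz=kz)
```

Because the HV phase center sits below the canopy top, this form systematically underestimates true height, which is consistent with the CCPCD underestimation reported in the abstract.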
Abstract:As the largest land cover, forests play an important role in the human living environment, biological habitats, and the global carbon cycle, and forest health is directly related to global ecological security and the sustainable development of human society. In recent years, urban construction, disasters, forest management, deforestation, and other factors have disturbed forests to varying degrees. Determining the exact time point and spatial extent of forest burned areas is important for forest damage assessment, management, carbon accounting, and post-fire restoration. Owing to the spatial continuity of forest burned areas, most existing extraction methods adopt a two-step strategy of classification followed by post-processing to suppress false-alarm pixels. In this paper, a spatiotemporal detection method, Stacked ConvLSTM, is proposed for detecting forest fire traces in time series. The method avoids subjective post-processing while maintaining good spatial continuity in the results, and achieves end-to-end extraction of forest burned areas, improving extraction accuracy. Exploiting the capability of ConvLSTM to extract temporal and spatial characteristics from long historical series, the method predicts the vegetation trend over a future period and accurately determines the time point and spatial extent of forest disturbance. ConvLSTM is an LSTM variant in which the fully connected transitions from input to hidden state and from hidden state to hidden state are replaced by convolutional connections, allowing full use of spatial information.
Compared with single-pixel methods, ConvLSTM can extract the spatiotemporal structure of time series images simultaneously, which suits spatiotemporal analysis better. In this paper, Stacked ConvLSTM is used to detect the temporal and spatial distribution of forest burned areas: the model predicts the vegetation trend over a future period and determines the presence of burned areas by comparison with the newest time series images. Using MODIS long time series over Yinanhe Forest Farm of Zhanhe Forestry Bureau in Heilongjiang Province (2001—2008) and Beidahe Forest Farm of Bilahe Forestry Bureau in Inner Mongolia (2001—2016), the burned area extraction results were compared with those of Stacked LSTM and the bfast algorithm, and all three methods were further compared against the Fire_CCI 5.1 burned area product released by ESA. Results show that, first, in terms of visual effect, Stacked ConvLSTM produces fewer false detections than Stacked LSTM and bfast in study area Ⅰ and maintains high continuity in spatial distribution, while in study area Ⅱ it detects a more complete burned area. Second, in study area Ⅰ, the accuracy of Stacked ConvLSTM was 0.120 and 0.405 higher than that of Stacked LSTM and bfast, respectively, and its recall, accuracy, and F1-score also exceeded those of Fire_CCI 5.1. In study area Ⅱ, Stacked ConvLSTM reached an accuracy of 0.924, with higher recall, accuracy, and F1-score than Stacked LSTM, bfast, and Fire_CCI 5.1. The spatial detection accuracy of the ConvLSTM model is higher than that of the other two methods, and the spatial continuity of its detection results is better.
In the temporal dimension, the detection performance of the ConvLSTM model is equivalent to that of Stacked LSTM, and both are closer to the real fire time point than the bfast algorithm. The results show that Stacked ConvLSTM has advantages in capturing the long-term historical trend of forests for spatiotemporal prediction, and improves the detection accuracy of forest fires to a certain extent.
Keywords:Stacked ConvLSTM;time series;spatiotemporal prediction;forest burned area
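The detection logic in the abstract above — predict the vegetation trend from history, then flag pixels where the newest observation deviates strongly — can be sketched without the network itself. Here a per-pixel historical mean stands in for the Stacked ConvLSTM forecast, and the NDVI stack is synthetic.

```python
import numpy as np

def detect_disturbance(history, observed, n_std=3.0):
    """Flag forest disturbance where the newest observation falls far below
    the level predicted from the historical time series. The seasonal-mean
    predictor is a simple stand-in for the Stacked ConvLSTM forecast."""
    predicted = history.mean(axis=0)           # per-pixel expected NDVI
    sigma = history.std(axis=0) + 1e-6         # per-pixel historical variability
    return (predicted - observed) > n_std * sigma

# Synthetic NDVI stack: 8 years of history, one post-fire observation
rng = np.random.default_rng(0)
history = 0.8 + 0.01 * rng.standard_normal((8, 10, 10))
observed = history.mean(axis=0).copy()
observed[2:5, 2:5] = 0.2                       # burned patch: NDVI collapse
burned = detect_disturbance(history, observed)
```

A convolutional recurrent model replaces the per-pixel mean with a forecast that also uses neighboring pixels, which is what gives the spatially continuous burned-area maps described in the abstract.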
Abstract:Winter wheat is the main grain crop in China, and its planting area varies greatly from year to year. Remote Sensing (RS) is the most effective way to obtain winter wheat planting area data over wide areas. The Normalized Difference Vegetation Index (NDVI) derived from multi-temporal RS images of medium and low resolution, such as MODIS imagery, is frequently used for this purpose, but the accuracy of the obtained planting area is usually unsatisfactory because of the limited spatial resolution of the images. Sentinel-2A and Sentinel-2B, launched in 2015 and 2017 respectively, are currently the only spaceborne sensors that acquire images in three red-edge spectral bands at meter-level spatial resolution. Red-edge bands have great potential for discriminating vegetation types and evaluating vegetation health, because they are sensitive to the chlorophyll and nitrogen content of vegetation. However, few studies have used the red-edge bands of Sentinel-2 data for wide-area winter wheat extraction. In this paper, a wide-area winter wheat extraction method based on the Red-Edge Position Index (REPI) and NDVI derived from multi-temporal Sentinel-2 images is proposed and applied to extract the winter wheat planting area of the Beijing-Tianjin-Hebei region for 2020. First, temporal REPI and NDVI are extracted from the Sentinel-2 data. Second, the key temporal phases for discriminating winter wheat are identified, and decision rules based on temporal REPI and NDVI features are constructed. Finally, the winter wheat planting area of the Beijing-Tianjin-Hebei region in 2020 is extracted using the constructed decision rules and validated against two kinds of reference data.
The extracted area shows an error of -2.57% compared with the winter wheat planting area published by the National Bureau of Statistics. In addition, its overall accuracy is 94.24% and its Kappa coefficient is 0.88 compared with visual interpretation of Google Earth high-resolution images. The conclusions are: (1) REPI changes with chlorophyll concentration during the growth period of winter wheat. It gradually increases as seedlings grow after sowing, reaching a first peak (about 722 nm) at the emergence stage (November), falls below 715 nm during the overwintering period, reaches a second peak (greater than 730 nm) at the heading stage (April-May), and drops below 720 nm at the mature stage. Notably, REPI at the heading stage provides excellent separability between winter wheat and woodland, whereas NDVI suffers serious commission errors in distinguishing the two. (2) Neither REPI nor NDVI from a single date can extract winter wheat satisfactorily over wide areas. However, a decision tree classifier built on the combination of REPI and NDVI derived from multi-temporal Sentinel-2 data can extract winter wheat with very high accuracy, because other land cover types are well excluded by the specific REPI or NDVI features of the key temporal phases; the effect of “different cover types with similar spectra” associated with wide-area classification is thus largely mitigated. This research demonstrates the significance of the Sentinel-2 red-edge bands for wide-area winter wheat discrimination.
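The REPI phenology in conclusion (1) above translates naturally into decision rules. The sketch below uses the nanometer thresholds stated in the abstract; the NDVI threshold and the exact rule structure are assumptions, since the abstract does not give the full decision tree.

```python
def is_winter_wheat(repi_emergence, repi_overwinter, repi_heading, ndvi_heading):
    """Illustrative decision rules following the REPI phenology described
    in the abstract (nm thresholds from the abstract; the NDVI threshold
    and rule combination are assumptions)."""
    return (repi_emergence > 715.0 and   # first REPI peak (~722 nm) at emergence
            repi_overwinter < 715.0 and  # REPI drop during overwintering
            repi_heading > 730.0 and     # second peak (>730 nm) at heading
            ndvi_heading > 0.5)          # assumed vigorous-canopy NDVI

wheat = is_winter_wheat(722.0, 710.0, 733.0, 0.78)
forest = is_winter_wheat(720.0, 718.0, 735.0, 0.80)  # no overwinter REPI drop
```

The overwintering test is what separates winter wheat from evergreen or woody vegetation whose red-edge position stays high through winter, which is the separability the abstract highlights.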
Abstract:In recent years, forest fires have occurred frequently around the world, severely damaging the structure and function of forest ecosystems. An initial assessment of burn severity provides a quantitative basis for rapidly implementing post-fire restoration measures. Over the last decades, remote sensing-based models have become an appropriate choice for assessing burn severity, but they generally require a certain amount of field survey data. This requirement can rarely be satisfied in the first moments after a fire, since field survey work costs substantial time and labor, and the absence of field survey data largely limits the efficient application of remote sensing technologies to the initial assessment of burn severity. In this study, a transfer learning algorithm, Semi-Supervised Transfer Component Analysis (SSTCA), was employed to build an initial burn severity assessment model that improves the time-efficiency of traditional remote sensing-based models. First, SSTCA was applied to project new features from the original spectral features of the remotely sensed data. Based on these projected features, a Support Vector Regression (SVR) model was trained using historical field survey data from source areas (the Bear fire of June 27, 2002 and the Mule fire of July 11, 2002). The SSTCA-SVR model was then transferred to the initial burn severity assessment of a target area (the Lushan fire of March 30, 2020). Finally, the performance of the proposed model was quantitatively compared with those of traditional models (dNDVI-, dLST-, dNBR-, and SVR-based models). Results showed that the original spectral features of the remote sensing images over the source and target areas were quite different, whereas after the SSTCA projection, the projected features of the source and target samples followed a similar distribution in the new feature space.
Meanwhile, in the initial assessment of burn severity, the dNDVI- and dNBR-based models overestimated burn severity levels with low accuracies (overall accuracies of 20.80% to 24.80% and Kappa values between 0.01 and 0.06). The dLST-based model performed better, with an overall accuracy of 34.80% and a Kappa value of 0.19. Although the SVR-based model showed promising performance, with an overall accuracy of 58.00% and a Kappa value of 0.48, it overestimated burn severity in some parts of the burned areas. The SSTCA-SVR model performed best, with an overall accuracy of 71.20% and a Kappa value of 0.64. We conclude that applying a transfer learning algorithm helps build a burn severity assessment model with good transferability. In this way, more accurate results can be obtained in the initial assessment of burn severity, and post-fire management responses may be accelerated after forest fires.
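The dNBR baseline compared in the abstract above is a standard spectral index: the Normalized Burn Ratio computed before and after the fire, then differenced. A minimal sketch with illustrative reflectance values:

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR surface reflectance."""
    return (nir - swir) / (nir + swir)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR used by the dNBR baseline: pre-fire minus post-fire,
    so severe burns produce large positive values."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Healthy vegetation (high NIR, low SWIR) burned to char (low NIR, high SWIR);
# reflectance values are illustrative
severity_index = dnbr(0.45, 0.15, 0.15, 0.35)
```

Such index-based models need locally calibrated severity thresholds, which is exactly the field-data dependence that the SSTCA-SVR transfer approach in the abstract is designed to avoid.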
Abstract:Convolutional neural networks are currently a research hot spot in hyperspectral image classification, and advanced modules such as dilated convolution and deformable convolution have been developed successfully. However, existing deformable convolution modules only shift in the spatial domain and ignore spectral difference information. This paper therefore proposes a novel deformable convolution that extends spatial deformability to the spectral domain, together with a Spectral Deformable Convolutional Neural Network (SDCNN) for hyperspectral image classification. In view of the problems of spatial deformable convolution in HSI classification, we extend deformable convolution to the spectral dimension. Given that different land cover categories may be separable in different bands, more appropriate bands can be selected for each class through the learnt spectral shifts, so spectral feature extraction can focus on the more effective bands and yield more discriminative features. Because only one direction, the spectral dimension, needs to be learned, the computational complexity is reduced to roughly half the time of spatial deformable convolution. First, a fully connected layer learns the offsets of the spectral deformable convolution, and linear interpolation performs the feature correction in the spectral domain. Second, a multilayer 1×1 convolution aggregates spectral features. Finally, a 3D convolution layer extracts spectral–spatial features. Experiments were conducted on three widely used datasets: Indian Pines, University of Pavia, and University of Houston. The experimental results demonstrate that SDCNN is superior to other deep learning methods.
SDCNN yields the highest classification accuracy, with overall accuracies of 98.86% (Indian Pines, 10% of samples per class), 99.81% (University of Pavia, 5% per class), and 97.41% (University of Houston, 50 samples per class), which verifies the effectiveness of the proposed model. First, in comprehensive experiments, SDCNN achieves the highest accuracy among the compared models on the three datasets, demonstrating the validity of the method. Second, the effectiveness of the spectral deformable convolution module is shown by comparison with the traditional dilated convolution module and the spatial deformable convolution module. SDCNN also generalizes better with limited training samples, especially when the number of labeled samples is small, demonstrating the stability of its classification performance. Finally, in the experiments with separated training and test samples, SDCNN achieves the best classification results, with a clear accuracy advantage over spatial deformable convolution, which indicates that SDCNN depends less on spatial features and has stronger feature extraction ability.
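The core resampling step of the spectral deformable convolution described above — shift each band position by a learned offset and read the value by linear interpolation — can be sketched in NumPy. In the actual network the offsets come from a fully connected layer per pixel; here they are fixed illustrative values.

```python
import numpy as np

def spectral_deform_sample(spectrum, offsets):
    """Sample a pixel's spectrum at fractionally shifted band positions
    using linear interpolation: the resampling core of the spectral
    deformable convolution (offsets would be learned, not fixed)."""
    bands = np.arange(len(spectrum), dtype=float)
    shifted = np.clip(bands + offsets, 0, len(spectrum) - 1)
    lo = np.floor(shifted).astype(int)          # lower neighboring band
    hi = np.minimum(lo + 1, len(spectrum) - 1)  # upper neighboring band
    w = shifted - lo                            # interpolation weight
    return (1.0 - w) * spectrum[lo] + w * spectrum[hi]

spectrum = np.array([0.0, 1.0, 2.0, 3.0])       # toy 4-band spectrum
resampled = spectral_deform_sample(spectrum, offsets=np.full(4, 0.5))
```

Because the interpolation is piecewise linear in the offsets, gradients flow back to the offset-predicting layer, which is what makes the shifts learnable end to end.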
Abstract:Few-shot classification is a hot topic in deep learning, which aims at identifying novel concepts with little supervisory information. In remote sensing (RS) scene few-shot classification, existing methods often fail to achieve satisfactory accuracy because the relationships among samples are ignored. To improve RS few-shot classification accuracy, a multi-graph convolutional network (Multi-GCN) is proposed in this paper. In the proposed method, a graph convolutional network is introduced into the metric network to smooth the samples' features: it models the relationships among samples, makes images from the same class obtain more similar feature representations, and improves classification accuracy. The proposed Multi-GCN is mainly composed of three parts: (1) a feature extraction network, composed of 4-layer convolutional neural networks, which extracts image features; (2) a graph convolutional network, which models the relationships among samples in the feature space and updates node features by multi-graph convolution; (3) a metric prediction part, which calculates the prototype of each class and predicts the labels of unlabeled samples according to the distances between samples and prototypes. Based on spectral-domain analysis, the proposed multi-graph convolution can effectively suppress the high-frequency components of the graph signals and significantly enhance the clustering coefficients of features from the same class. In addition, fine-tuning is introduced in the training process: labeled samples in the new classes are used to train the last layer of the graph convolutional network for a small number of steps, which further improves classification accuracy and enhances the transferability of the model. 
To validate the effectiveness of the proposed method, Multi-GCN is compared in the experiments with ProtoNet, GNN-FSL, and two single-graph methods, ProtoGCN and ProtoIGCN, on two different tasks: same-dataset few-shot classification and cross-dataset few-shot classification. The experimental results show that, in the same-dataset task, the proposed method is significantly superior to ProtoNet: when the number of labeled samples per class is 1, its accuracy exceeds ProtoNet's by more than 5%, and compared with GNN-FSL its accuracy is about 10% higher on average. In the cross-dataset task, the classification accuracy of the proposed method is 10%—15% higher than that of GNN-FSL; compared with ProtoNet, it is about 10% higher in the 1-shot case and about 2%—5% higher in the 5-shot case. In most cases, Multi-GCN also outperforms the single-graph methods ProtoGCN and ProtoIGCN, and classification accuracy can be further improved by fine-tuning. From these results, we conclude that the proposed method achieves higher classification accuracy than ProtoNet, GNN-FSL, and methods based on a single graph convolutional network. The conclusions on multi-graph convolution and the spectral analysis in this work can also be extended to other graph-based semi-supervised learning tasks.
Keywords:few-shot learning;remote sensing scene classification;metric learning;multi-graph convolution;graph spectral analysis
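The feature-smoothing effect of a graph convolution step, which the abstract credits for pulling same-class features together, can be illustrated with the parameter-free propagation X' = D⁻¹(A + I)X. This toy NumPy sketch is our own simplification: it omits the learned weights, the multi-graph combination, and the prototype metric.

```python
import numpy as np

def graph_smooth(X, A):
    """One parameter-free graph convolution step: X' = D^{-1}(A + I) X.

    Adding self-loops (the identity I) and row-normalizing by the
    degree matrix D averages each node's feature with its neighbors',
    suppressing high-frequency components of the graph signal so that
    connected (likely same-class) nodes end up with similar features.
    """
    A_hat = A + np.eye(A.shape[0])
    d_inv = 1.0 / A_hat.sum(axis=1)
    return (A_hat * d_inv[:, None]) @ X
```

For two connected nodes with features 0 and 2, one smoothing step moves both to the common average 1, i.e. the intra-pair feature distance shrinks to zero.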
Abstract:The terahertz band has a number of potential advantages that complement existing visible and infrared techniques in ice cloud sounding, but treating ice particles of various phases (mainly ice and graupel) as a single particle type is a major limitation of current terahertz ice cloud retrieval algorithms. In this paper, a pre-classified neural network algorithm based on the terahertz radiation characteristics of ice clouds is proposed, which can retrieve the physical parameters of ice and graupel particles separately. The algorithm first uses a pre-classified neural network to retrieve the density profiles of graupel particles from the 183 GHz band brightness temperature data. The retrieved graupel profiles are then used as an a priori constraint to calculate the brightness temperature difference in the higher frequency bands due to ice particles only. Finally, another pre-classified neural network retrieves the density profiles of ice particles from the resulting terahertz brightness temperature difference data. The proposed algorithm is evaluated through end-to-end simulation experiments. First, a hybrid ice cloud dataset including ice and graupel particle parameters is built based on a numerical weather prediction (NWP) model and actual observation data. Then, synthetic ice cloud brightness temperature data from 183—874 GHz (i.e., 183 GHz, 243 GHz, 325 GHz, 448 GHz, 664 GHz, and 874 GHz) are generated through the Discrete-Ordinate Tangent Linear Radiative Transfer (DOTLRT) model with the hybrid ice cloud dataset. Finally, the parameters of ice and graupel are retrieved by the proposed algorithm from the simulated brightness temperature data and compared with the input parameters to assess the retrieval accuracy. 
The simulation experiments show that the average Root Mean Square Errors (RMSE) of the retrieved IWP and GWP are 8.97 g/m² and 10.90 g/m², respectively; the average RMSEs of the retrieved I_Dme and G_Dme are 7.54 μm and 25.38 μm, respectively; and the average RMSEs of the retrieved I_Zme and G_Zme are 309.21 m and 513.62 m, respectively. The retrieved density profiles of ice and graupel particles also have high accuracy. The results indicate that the proposed algorithm can retrieve the total path amount, equivalent particle size, equivalent cloud height, and density profiles of ice and graupel particles separately with high accuracy, which is more consistent with the real condition of ice clouds than current retrieval algorithms.
Keywords:Terahertz;ice cloud sounding;neural network;ice and graupel particles;retrieval of ice cloud parameters
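The two-stage logic of the abstract (retrieve graupel first from the low-frequency channels, subtract its forward-modeled contribution, then retrieve ice from the remaining brightness temperature difference) can be sketched with a toy linear forward model. The matrices and the exact least-squares solve below replace the pre-classified neural networks and the DOTLRT radiative transfer model, and are purely illustrative.

```python
import numpy as np

# Toy linear forward model: low channels respond only to graupel,
# high channels respond to both graupel and ice (stand-ins for the
# 183 GHz band and the 243-874 GHz bands, respectively).
A_g = np.array([[2.0, 0.5], [0.3, 1.5]])   # low channels  <- graupel
B_g = np.array([[1.0, 0.2], [0.4, 0.8]])   # high channels <- graupel
B_i = np.array([[0.9, 0.1], [0.2, 1.1]])   # high channels <- ice

g_true = np.array([1.2, 0.7])              # toy graupel profile
i_true = np.array([0.5, 1.4])              # toy ice profile
tb_low = A_g @ g_true                      # simulated low-band TBs
tb_high = B_g @ g_true + B_i @ i_true      # simulated high-band TBs

# Stage 1: retrieve graupel from the low-frequency channels alone.
g_hat = np.linalg.solve(A_g, tb_low)

# Stage 2: use g_hat as a prior to isolate the ice-only signal, then
# retrieve ice from the brightness temperature difference.
delta_tb = tb_high - B_g @ g_hat
i_hat = np.linalg.solve(B_i, delta_tb)
```

In this noiseless toy case both stages recover the input profiles exactly; in the paper each linear solve is replaced by a trained neural network and the recovery is only approximate.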
Abstract:Snow and ice scatter solar radiation in a strongly anisotropic fashion, especially in the shortwave region, which in turn significantly affects studies of the global energy balance and water cycle. The remote sensing community has developed a series of reflectance models for various applications on snow surfaces. Comprehensive comparison and evaluation of these models help in choosing an algorithm for producing satellite multi-angle remote sensing products. In this paper, we use the Polarization and Directionality of Earth Reflectances (POLDER) multi-angle snow data to compare and evaluate the performance of three widely used models in characterizing snow scattering: the kernel-driven linear Ross Thick-Li Sparse Reciprocal (RTLSR) model used in the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF/Albedo operational algorithm, the Asymptotic Radiative Theory (ART) model, and the recently developed RTLSR-Snow (RTLSRS) model. First, the POLDER data are divided into pure snow data and impure snow data using the homogeneity index provided by the POLDER database. We then use the three BRDF models to fit (1) a single pure snow BRDF dataset; (2) the entire archive of the pure snow BRDF data; (3) a single impure snow BRDF dataset; and (4) the entire archive of the impure snow BRDF data. We analyze the results on the basis of R², RMSE, and bias. As the volumetric scattering kernel and geometric-optical kernel contribute little to pure snow reflectances, we further simplify the RTLSRS model by keeping only the isotropic scattering and snow scattering kernels within the kernel-driven framework (i.e., the isotropic and snow-kernel model, ISM). The performance of the ISM model is further evaluated using the POLDER pure snow data. The results are as follows: (1) RTLSRS is the most accurate of all models considered. 
For a single pure snow BRDF dataset, the RMSE of the RTLSRS model is 45.45% lower than that of the ART model and only 18.46% of that of the RTLSR model. For a single impure snow BRDF dataset, the BRDF curve of the RTLSRS model is generally similar to that of the RTLSR model, but its RMSE is 67.5% lower; the RMSE of the ART model is the largest in this case, reaching 0.136. (2) The accuracy of the RTLSRS model in simulating the pure snow data (R²=0.969, RMSE=0.012) is higher than that for the impure snow data (R²=0.926, RMSE=0.013). (3) The simplified ISM model can characterize the pure snow BRDF data well: R² and RMSE reach 0.949 and 0.034 for the entire POLDER pure snow archive, which is even better than the ART model. Overall, RTLSRS has the highest accuracy in fitting the various POLDER BRDF snow data. Although the ISM is less accurate than the full RTLSRS model, it is more accurate than the ART model in fitting the POLDER pure snow data. The results also show that the "homogeneity" index provided with the POLDER snow database does not necessarily suffice to identify pure snow pixels. Therefore, a new method must be developed to refine the POLDER snow data and provide more details that can improve potential users' understanding of snow optical scattering.
Keywords:snow;ART;RTLSR;RTLSRS;POLDER;kernel-driven BRDF model
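The kernel-driven models compared above share the linear form R = f_iso + Σ_k f_k·K_k, so fitting them to multi-angle observations is an ordinary least-squares problem. The sketch below assumes the kernel values K_k have already been evaluated at the observation geometries; computing the actual Ross Thick, Li Sparse, or snow kernels is omitted, and the synthetic data in the usage example are our own.

```python
import numpy as np

def fit_kernel_model(refl, kernels):
    """Least-squares fit of a kernel-driven BRDF model.

    refl:    (n,) observed reflectances at n sun/view geometries
    kernels: list of (n,) arrays, each a kernel evaluated at those
             geometries (e.g. volumetric, geometric-optical, snow)
    Returns the coefficients [f_iso, f_1, ..., f_K].
    """
    # Design matrix: a column of ones (isotropic term) plus one
    # column per kernel; solve G f = refl in the least-squares sense.
    G = np.column_stack([np.ones(len(refl)), *kernels])
    f, *_ = np.linalg.lstsq(G, refl, rcond=None)
    return f
```

With noise-free reflectances synthesized from known coefficients, the fit recovers them exactly, which is the basis of the R²/RMSE comparisons reported in the abstract.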
Abstract:To address the lack of reliable positioning sources in complex environments such as indoor and underground spaces, a high-precision position and attitude estimation method based on a low-frequency time-varying magnetic field is proposed in this paper. Traditional time-varying magnetic field positioning methods require the magnetic beacon coordinate system to be consistent with that of the target, so they cannot solve for the relative attitude angle of the target and their accuracy is poor. The proposed method, realized with a fingerprint matching algorithm, overcomes these shortcomings and is penetrating, robust, and accurate. According to the Biot–Savart law, the magnetic field intensity decays with the distance between the target and the magnetic source, and the direction of the measured magnetic field is related to the orientation from the source to the target. Based on this principle, an improved fingerprint algorithm is introduced. First, the RSSI fitting line of the magnetic beacon is calculated from the magnetic field measured in space, and the position is estimated by the fingerprint matching algorithm. The attitude is then obtained from the estimated position and the magnetic field direction vector model. Furthermore, the disturbing factors of the magnetic beacon positioning system are analyzed, and an optimization method is proposed to improve system performance. The effective distance of a single magnetic beacon, the positioning accuracy, the stability of the proposed approach, and the influence of the number of magnetic beacons are verified by experiment. The effective distance of a single magnetic beacon is 14 m. The results show a positioning error expectation of 0.069 m and an attitude error expectation of 2.3°. 
The error does not accumulate over time, which gives the method obvious advantages over traditional magnetic beacon navigation solutions and high engineering application value.
Keywords:position and attitude estimation;model of magnetic beacon;underground and indoor navigation;fingerprint matching algorithm
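The fingerprint idea in the abstract (field magnitude decays with distance per the Biot–Savart law, so a grid of predicted magnitudes can be matched against a measurement) can be sketched as nearest-neighbor search over a precomputed database. The 1/r³ dipole far-field decay, the coordinates, and the function names below are our illustrative assumptions, not the paper's calibrated RSSI model.

```python
import numpy as np

def field_magnitudes(beacons, p, moment=1.0):
    """Magnitude of each beacon's field at point p, using the dipole
    far-field 1/r^3 decay as a stand-in for the fitted RSSI model."""
    r = np.linalg.norm(beacons - p, axis=1)
    return moment / r ** 3

def fingerprint_locate(beacons, grid, measured):
    """Return the grid point whose predicted fingerprint (vector of
    per-beacon field magnitudes) best matches the measurement."""
    db = np.array([field_magnitudes(beacons, g) for g in grid])
    best = np.argmin(np.linalg.norm(db - measured, axis=1))
    return grid[best]
```

With three non-collinear beacons, the vector of magnitudes fixes the distances to all three sources, so the match is unique on a sufficiently fine grid; the paper additionally recovers attitude from the field direction vectors, which this sketch omits.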
Abstract:Water conservancy facilities play an important role in water scheduling, ecological protection and restoration of natural wetlands, resource utilization, and development. Traditional methods for compiling the locations and counts of water conservancy facilities rely on statistical data, which are time-consuming to produce, slow to update, and lack precise geographic locations. Remote sensing provides new possibilities for large-scale detection of water conservancy facilities. Aiming at detecting water conservancy facilities in remote sensing images, this study proposes a large-scale image detection algorithm. Based on the YOLO v3 network and the characteristics of water conservancy facilities, the study comprises two main aspects: (1) We improved the YOLO algorithm into the E-YOLO algorithm, proposing a PPA feature fusion method and a four-feature-map cross-prediction method with proportional prediction boxes to address the small-sample problem. We also improved the loss function by emphasizing the confidence loss, and used transfer learning to load part of the feature extraction parameters from a pre-trained model. (2) With the improved E-YOLO algorithm as the core, we obtained a large-area water conservancy facility detection algorithm constrained by a water body index. To handle large images containing small-scale targets, we used the water body index to constrain the sliding step, reducing both the missed detection rate and the false detection rate. We then combined the network output with a contour merging method to optimize the detection results. We used GF-2 data for this study. The experimental results show that the E-YOLO algorithm significantly improves the detection of water conservancy facilities. 
Compared with YOLO v3, the average F2 score of E-YOLO is increased by 1.25%, and E-YOLO shows better stability. The large-area detection method constrained by the water body index improves detection accuracy while ensuring efficiency: compared with the large-step and small-step methods, its F2 score is increased by 3.72% and 2.70%, respectively. Our method provides a good solution for the detection of water conservancy facilities.
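The water-index constraint on the sliding window can be sketched as follows: compute NDWI = (Green − NIR) / (Green + NIR) and only pass tiles whose water fraction exceeds a threshold to the detector, since water conservancy facilities sit on or near water. The tile size, thresholds, and function names are illustrative choices of ours, not the paper's parameters.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index; positive values suggest water.
    A small epsilon avoids division by zero over dark pixels."""
    return (green - nir) / (green + nir + 1e-9)

def water_tiles(green, nir, tile=64, min_frac=0.05, thresh=0.0):
    """Return (row, col) offsets of tiles worth running the detector on:
    those where the fraction of NDWI-positive pixels exceeds min_frac."""
    water = ndwi(green, nir) > thresh
    h, w = green.shape
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            if water[y:y + tile, x:x + tile].mean() >= min_frac:
                tiles.append((y, x))
    return tiles
```

Tiles with no water signal are skipped entirely, which is how the constraint cuts both runtime and false detections on large GF-2 scenes.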
Abstract:Glacier mass balance is a significant indicator of glacier accumulation and ablation, and it also reflects the relationship between glaciers and climate forcing, which has great impact on evaluating glacier dynamics. Due to the intensifying greenhouse effect, a large number of mountain glaciers on the Qinghai-Tibet Plateau in China have been depleting continuously since 1970, especially in the East Kunlun Mountains and the inner regions of the plateau. Large-scale, long-term glacier mass balance is usually estimated from the elevation change of the glacier surface using remote sensing techniques such as synthetic aperture radar interferometry, lidar altimetry, and photogrammetry with optical stereo images. In this work, we choose Malan Mountain, located in the north of the Tibet Plateau and one of the most rapidly ablating glacier regions in the East Kunlun Mountains, as our study area. To assess glacier mass change in Malan Mountain over the recent two decades, we utilize SRTM DEM, TerraSAR-X/TanDEM-X, and ICESat-2 data to estimate its glacier mass balance during 2000—2012, 2012—2020, and 2000—2020. To obtain the true long-term change of glacier surface elevation, we take the following steps. First, the three kinds of elevation data were co-registered to eliminate spatial errors; then, the radar penetration depth into ice in the East Kunlun Mountains was estimated statistically from the difference between the SRTM-X DEM and the SRTM-C DEM. Finally, the accurate ice elevation change was obtained by seasonal correction according to the seasonal variation of glaciers. The results show that: (1) During 2000—2020, 41 glaciers in Malan Mountain display remarkably negative mass change (-0.24 ± 0.06 m w.e./a) and their overall elevation change is -5.64 ± 0.96 m. 
Besides, comparing glacier mass change in Malan Mountain between the two subperiods, we find that the ice mass loss rate is more pronounced during 2000—2012 (-0.30 ± 0.04 m w.e./a) than during 2012—2020 (-0.22 ± 0.11 m w.e./a). (2) Based on the GPCC (Global Precipitation Climatology Center) and GHCN_CAMS (Global Historical Climatology Network) reanalysis datasets, we find that the markedly negative mass change in Malan Mountain during 2000—2020 is mainly attributable to increasing summer temperature. Although annual precipitation, which contributes to glacier ice mass accumulation, has increased slightly over the recent two decades, it cannot compensate for the ice mass loss caused by increasing summer temperature. Additionally, the decreasing ice mass loss rate during 2012—2020 is predominantly ascribed to decreasing summer temperature in that period. (3) According to Landsat-7 images from 2007—2012, we discover a surging glacier on the southern slope of Malan Mountain, whose terminus advanced approximately 251 m during this period.
Keywords:Mass balance;radar interferometry;TanDEM-X;ICESat-2;climate change;glacier surging;glacier in Malan Mountain
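The conversion from geodetic elevation change to specific mass balance follows the standard volume-to-mass approach, b = (Δh/Δt)·(ρ_conv/ρ_water). The density values below are the commonly used 850 kg/m³ conversion density and the density of water; they are our assumption, as the abstract does not state which densities were used.

```python
def geodetic_mass_balance(dh_m, years, rho_conv=850.0, rho_water=999.7):
    """Specific mass balance in m w.e. per year from a mean glacier
    surface elevation change dh_m (metres) over `years` years, using a
    volume-to-mass conversion density rho_conv (kg/m^3)."""
    return dh_m / years * rho_conv / rho_water
```

With the abstract's overall elevation change of −5.64 m over 20 years (2000—2020), this gives about −0.24 m w.e./a, consistent with the rate reported there, which suggests a conversion density near 850 kg/m³ was indeed applied.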
Abstract:Remote sensing observation of ecosystems and the environment can continuously provide globally significant scientific insights into the sustainability of ecosystems and the status of the human environment. It is also a concrete practice for advancing the capability of global comprehensive Earth observation and addressing ecological and environmental challenges, in line with the goals and visions of advancing ecological civilization and building a shared future for all life on Earth. The National Remote Sensing Center of China (NRSCC) initiated the Global Ecosystems and Environment Observation: Annual Report from China (GEOARC) and has released 29 reports and more than 100 open-access datasets over the ten years from 2012 to 2021, focusing on the priorities of sustainable development, climate change, disaster risk reduction, and urban resilience. These outcomes have been accomplished through the collaboration of governmental agencies, research institutions, international organizations, and the public. The primary achievements are: (1) A batch of innovative algorithm models and data products with independent intellectual property rights has been obtained, based on domestic high-resolution satellite and multi-source remote sensing images, for fine-scaled remote sensing monitoring of ecosystem conditions and human footprints. (2) The reports have significantly improved public scientific understanding of key topics such as food security, climate change, urban expansion, land degradation, and natural disaster risk. (3) Fine-scaled observation and assessment were carried out for key regions such as the Belt and Road, Antarctica, Africa, ASEAN, and ecologically fragile areas. This is also China's substantial contribution to the international community as a co-chair of GEO (Group on Earth Observations), and it provides an important information reference for decision making by relevant agencies and departments. 
On the occasion of the 10th anniversary, this paper systematically elaborates and summarizes the contents and highlights of the annual reports, and presents prospects for GEOARC.
Keywords:ecosystems and environment;remote sensing observation;GEOARC;domestic satellites