How to install CloudStack on Ubuntu 18.04 Bionic Beaver

Installing CloudStack on Ubuntu 18.04 at this time (one month after its release) is a disaster: all the dependencies run into incompatibility issues, making a direct install impossible. However, in my situation I have to do it right now. I have read all the posts on the web, including Rohit's blog, which did not help. Thanks to Rohit anyway, as I finally found someone facing the same problem.

But I told myself that this is 2018 and no technical issue is unsolvable. The answer is actually simple: Docker. Install Docker on Ubuntu 18.04 and install CloudStack inside a Docker container. Use the container as either a CloudStack management server or a CloudStack agent, as you choose.
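A minimal sketch of that route, assuming you host the management server in a CentOS 7 container (the container name and port mapping here are illustrative, not an official CloudStack image):

```shell
# Install Docker from the stock Ubuntu 18.04 repositories
sudo apt-get update && sudo apt-get install -y docker.io
sudo systemctl enable --now docker

# Start a CentOS 7 container to host the CloudStack management server
sudo docker run -d --name cs-mgmt -p 8080:8080 centos:7 sleep infinity
sudo docker exec -it cs-mgmt bash
# ...inside the container, follow the regular CloudStack-on-CentOS 7
# installation guide (yum repository, cloudstack-management, MySQL, etc.)
```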

Dell Inspiron 15 7000 Gaming laptop automatically shuts down after a while

I bought a new Inspiron (honestly, an open-box unit at a very good price) and wanted to run deep learning programs on it. I started a program before taking off and left it running for the whole night. When I came back in the morning, I was shocked to find it had mysteriously shut down. What the hell? I instantly googled everything about this automatic-shutdown issue and tried all the solutions, but the problem persisted after another night. I almost returned the laptop at that point, but it was a very good deal and looks really stunning, and I would have felt bad since I had already spent two days on it setting up the software environment. So I kept looking for solutions.

Finally. Long story short: go to Control Panel -> Power Options -> Edit plan settings -> Change advanced power settings, and set "Turn off hard disk after" to Never. Problem gone. That's it.
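The same setting can be changed from an elevated Command Prompt with the built-in powercfg tool (0 minutes means "never"; disk-timeout-ac and disk-timeout-dc are the standard aliases for the AC and battery disk timeouts):

```shell
REM Set "turn off hard disk after" to Never, on both AC and battery power
powercfg /change disk-timeout-ac 0
powercfg /change disk-timeout-dc 0
```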

Deeplearning4j CPU MKL DLL problem

Recently, I have been exercising the Deeplearning4j library, which is developing rapidly with a growing user community. I use it because Java is my "mother" language.

I cloned dl4j-examples and ran it without problems for nearly a year. In the last two months, I left it behind and focused on something else (and installed some software for the new tasks). Yesterday, when I reran dl4j-examples, the program exited with the message "Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll". I searched all over the web and tried recompiling through ND4J, but nothing helped.

After reading this page, I understood that DL4J can use either MKL or OpenBLAS. I never installed MKL and have always used OpenBLAS, which (as I understand it) is a built-in module of ND4J. Why did ND4J suddenly choose MKL as its BLAS vendor instead of OpenBLAS?

Eventually, my solution was to check the PATH environment variable. My case is on Windows: go to the environment variable settings page and examine the PATH variable carefully. Remove the paths that may trick DL4J into believing your machine has MKL installed. After the change, make sure to reboot your machine so the change takes effect; the JVM won't pick up the new value until it is restarted.

In my case, I removed the paths that were apparently appended to the PATH variable when I installed the new software. Problem solved.
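To spot the culprits before editing, you can list every PATH component that mentions MKL or Intel. A quick check (run from Git Bash or WSL; on plain cmd.exe a rough equivalent is `echo %PATH% | findstr /i "mkl intel"`):

```shell
# Split PATH on both ':' and ';' and list entries mentioning MKL/Intel;
# these are the suspects to remove from the environment variable
echo "$PATH" | tr ':;' '\n' | grep -i -e mkl -e intel || echo "no suspicious entries"
```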

Here is the log of OpenBLAS:

 15:41:16.342 [main] INFO org.reflections.Reflections - Reflections took 890 ms to scan 13 urls, producing 31 keys and 227 values
 15:41:26.898 [main] INFO org.nd4j.nativeblas.Nd4jBlas - Number of threads used for BLAS: 2
 15:41:26.995 [main] INFO o.n.l.a.o.e.DefaultOpExecutioner - Backend used: [CPU]; OS: [Windows 7]
 15:41:26.995 [main] INFO o.n.l.a.o.e.DefaultOpExecutioner - Cores: [4]; Memory: [1.8GB];
 15:41:26.996 [main] INFO o.n.l.a.o.e.DefaultOpExecutioner - Blas vendor: [OPENBLAS]
 15:41:28.658 [main] DEBUG org.reflections.Reflections - going to scan these urls:

Using Long Short-Term Memory Recurrent Neural Network in Land Cover Classification on Landsat and Cropland Data Layer time series

Abstract: Land cover maps are significant in agricultural analysis. However, the existing workflow for producing the maps takes too long. This work builds a long short-term memory (LSTM) recurrent neural network (RNN) model to improve the update frequency. An end-to-end framework is proposed to train the model. Landsat scenes are used as Earth observations, and field-measured data and the CDL (Cropland Data Layer) are used as ground truth. The network is trained using state-of-the-art techniques. Finally, we tested the network on multiple Landsat images to produce five-class land cover maps with timestamps. The results are visualized and compared with CDL and the ground truth. The experiment shows a satisfactory overall accuracy (>97%) and proves the feasibility of the model. This study paves a path toward efficiently using LSTM RNN in remote sensing image classification.

  1. Introduction

The remote sensing (RS) community has gradually recognized that conventional schemes are heading toward a dead end [1-5]. Typically, in the 2012 ImageNet competition, neural-network-based solutions outperformed most conventional methods. A recent trend shows researchers moving to deep learning (DL). Quite a number of studies apply neural networks to classify RS images and have already achieved satisfactory results.

Meanwhile, RS images have accumulated to a massive scale [4]. NASA alone has archived petabytes of data [6-8]. Conventional image interpretation techniques have exposed only a tiny part of the information in this rich mine [9,10]. The pace of mining the data falls far behind the speed of acquiring it; slowness and heavy manual processing are the major obstacles to full exploitation. Thus DL, which is more automatic and promises faster interpretation, is becoming more and more popular in RS image analysis.

However, the success of DL requires the availability of more data and more powerful computational engines such as Graphics Processing Units (GPUs). It is not fair to flatly say that DL is a better algorithm than conventional algorithms like SVM or decision trees. A successful neural network also requires careful engineering and considerable domain expertise to design the network configuration.

Feedforward neural networks (FNN) and recurrent neural networks (RNN) are the two most widely used networks. The former feeds information straight through the network; the latter cycles information through a loop. RNNs are often considered to have better memory capability than FNNs and to be more suitable for time series.

1.1. Problem Statement

CDL is a land cover product by USDA NASS for the continental U.S. It has very high accuracy thanks to the private ground truth data from NASS field offices. But CDL has only one layer per year, while land cover usually changes with the seasons. Landsat satellites have observed millions of scenes at 30-meter resolution over the past forty years. Using the knowledge in the available CDL to classify Landsat scenes across seasons is thus a potentially urgent need.

1.2. Contributions

This paper creates an LSTM RNN that utilizes CDL time series to infer the land cover of Landsat pixels. We build an RNN with three hidden LSTM (long short-term memory) layers. We preprocessed Landsat and CDL time series and used them to prepare training and testing datasets. We trained the network many times and recorded the accuracy of each training phase. The results are plotted and compared. We obtained a fairly satisfactory accuracy when applying the trained network to several Landsat scenes.

1.3. Related work

ANNs (artificial neural networks), especially deep neural networks (DNN), already have plenty of applications in image recognition [11]. A thorough investigation was made of the current research on ANN in RS. Audebert et al. reveal the general benefits that DL could bring to remote sensing [12]: they tested various deep network architectures in classification and semantic mapping of aerial images and achieved better performances. Cooner et al. evaluated the effectiveness of multilayer feedforward neural networks, radial basis neural networks and Random Forests in detecting earthquake damage from the 2010 Port-au-Prince, Haiti 7.0 moment magnitude event [13]. Duro et al. compared pixel-based and object-based image analysis approaches for classifying broad land cover classes over agricultural landscapes using three supervised learning algorithms: decision tree (DT), random forest (RF), and support vector machine (SVM) [14]. Zhao et al. used a multi-scale convolutional auto-encoder to extract features, trained a logistic regression classifier for classification, and got better results than traditional methods [15]. Kussul et al. designed a multilevel DL architecture to classify land cover and crop type from multi-temporal multisource satellite imagery [16]. Maggiori et al. trained CNNs to produce classification maps from images [17]. Das et al. proposed Deep-STEP for spatiotemporal prediction of satellite remote sensing data [18]; they derived NDVI data from thousands to millions of pixels of satellite remote sensing imagery using DL. Marmanis et al. used a CNN pretrained on the ImageNet challenge to extract an initial set of representations which are later transferred into a supervised CNN classifier [19]. Ienco et al. evaluated LSTM RNN on land cover classification considering multi-temporal spatial data from a time series of satellite images [20]. Their experiments were made under both pixel-based and object-based schemes, and the results show that the LSTM RNN is very competitive with state-of-the-art classifiers and even outperforms classic approaches on under-represented and/or highly mixed classes. Li et al. used DL to detect and count oil palm trees in high-resolution remote sensing images [21]. These successful cases have validated the great potential of DL in RS image recognition. This study adds a case using LSTM RNN on satellite imagery time series.

  2. Materials and Methods

2.1. Study Area and Materials

We choose eastern North Dakota, which has a sound archive of historical Landsat and CDL images, as the study area. North Dakota is a state in the northern U.S., as shown in Fig. 1, and agriculture is its number one industry and economic base [22]. The products of North Dakota take a significant part of the overall yield of U.S. agriculture according to the U.S. National Agricultural Statistics Service (NASS), especially spring wheat and durum wheat[1].

Landsat satellites have observed the Earth for more than four decades and obtained more than six million scenes [23,24]. Landsat 5 delivered images from space starting in 1984. Landsat 7 operated perfectly after its 1999 launch, but since May 2003 it has produced gaps in all captured images due to the malfunction of its Scan Line Corrector. In 2013, as Landsat 5 was scheduled for decommissioning, a new satellite, Landsat 8, was launched into orbit to continue the mission [25]. Each Landsat satellite images any point on the Earth's surface about every two weeks at 30-meter resolution.

CDL is an annual land cover product for the continental U.S. made by NASS. It is very popular and widely accepted as a fairly accurate reflection of the ground truth. Its resolution is also 30 meters, with one layer per year. Landsat images are among its source datasets. Meanwhile, CDL fuses the ground truth data collected by NASS field offices, which results in much better accuracy than the other existing land cover products; the claimed accuracy is 85% to 95% for major crop types [26]. The coverage periods of Landsat and CDL in North Dakota are shown in Fig. 2. Another reason to choose North Dakota is that it is the only state with CDL from the very beginning; the CDL program has covered the entire continental U.S. only since 2008. Given that 1997 is the first year in which Landsat and CDL coexist, we select the years from 1997 onward as our study interval.


Figure 1. Study area

Figure 2. The availability of data since 1997 in North Dakota

2.2. Recurrent Neural Network and Long Short-Term Memory

Traditional RNN is simple. Different from FNN, an RNN has feedback connections: the outputs of previous time steps are taken into account at the current time step, so historical states have a long-term influence on future judgments, which is what "memory" is all about. Given $x = (x_1, \ldots, x_n)$ the sequence of input vectors, $h = (h_1, \ldots, h_n)$ the hidden vector sequence and $y = (y_1, \ldots, y_n)$ the sequence of output vectors, where $n$ represents the overall number of time steps, an example RNN cell is displayed in Fig. 3 (the left one). The equations computing the output vectors from the input vectors are as follows:

$$h_t = \sigma(W_{xh} x_t + W_{hh} h_{t-1} + b_h)$$
$$y_t = W_{hy} h_t + b_y$$

where the $W$ terms are the weights on the connections and $\sigma$ is the neuron activation (mostly Tanh in RNN). The self-connection weight $W_{hh}$ is usually simply set to 1. The subsequent back propagation adjusts all the weights over the entire input sequence.

Figure 3. RNN and LSTM introduction

LSTM RNN is more complex. We take the definition from Graves as the baseline [27]. As shown in Fig. 3 (the right one), a cell of LSTM RNN has three extra "gates" which control the involvement of the context information: the input gate scales the input to the cell, the output gate scales the output from the cell, and the forget gate scales the old cell value. The equations for computing the gate outputs are:

$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)$$
$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$$
$$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$$
$$g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t$$
$$h_t = o_t \odot \tanh(c_t)$$

where $\sigma$ is an activation function (e.g., the logistic sigmoid or Tanh), and $i$, $f$, $o$ and $g$ are the output vectors of the input gate, forget gate, output gate and the cell itself, respectively. $W$ denotes the weights of the connections; for example, $W_{xi}$ is the weight of the connection between the input $x_t$ and the input gate, and $W_{hi}$ is the connection weight between the previous hidden state $h_{t-1}$ and the input gate. $b$ represents the bias input.

The simple RNN cannot look far back into the past; LSTM solved that problem [28]. Thanks to its extraordinary performance, LSTM RNN has become a popular choice for modeling inherently dynamic processes like voice and handwriting [27] and is massively used by tech giants such as Apple, Google, Microsoft and Amazon in their products. This work reuses the architecture and examines its performance in RS image classification. As image spatial/temporal series share many characteristics with speech or handwriting signals, the same level of performance is expected in RS as well.
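A one-line sketch of why the LSTM cell can look far back, using the standard Graves-style cell-state update (the same $c$, $f$, $i$ and $g$ as in the gate equations):

```latex
c_t = f_t \odot c_{t-1} + i_t \odot g_t
\quad\Longrightarrow\quad
\frac{\partial c_t}{\partial c_{t-1}} = \operatorname{diag}(f_t)
```

As long as the forget gate stays close to 1, the gradient of the cell state passes back through many time steps almost unchanged; in a simple RNN it is instead multiplied by the recurrent weight matrix and an activation derivative at every step, so it tends to vanish or explode.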

GeoFairy.Map wins in NGA Disparate Data Competition


GeoFairy.Map, derived from GeoFairy, is one of the final winners of Stage 1 of the NGA Disparate Data Competition.

GeoFairy.Map is a system created as an easy entry point to the widely disparate datasets faced by both governments and corporations today. The system hubs many kinds of data sources, especially governmental datasets from NGA, NOAA, NASA, USGS, EPA, etc., and makes it much easier for people to discover, access, view and retrieve them in as few clicks as possible. With GeoFairy.Map, users no longer need to jump around the Internet and exhaust themselves to reach the information they are interested in; the time users spend on data access is significantly reduced. This is very helpful in emergency situations such as military actions and disaster response, as well as for quick daily information inquiries.

The datasets in GeoFairy.Map are very heterogeneous in format, including WMS, WMTS, ArcGIS REST, GeoJSON, GeoPackage, ESRI Shapefile, KML, NITF, GeoTIFF, CSV, XLS, XML and GML. GeoFairy.Map manages to manipulate them through a uniform, intuitive and secure interface, and has both web-based and mobile clients. Every time a user submits a request, GeoFairy.Map performs security checks on the client, the network and the server; the operation is processed only when clearance is granted for the current communication. Among the checks, client identity and network anti-hijacking are the two major items that must be validated. We designed GeoFairy.Map to take advantage of the Apple iOS fingerprint system to identify users. On the back end, as many data sources restrict data access, we perform server-to-server communication over secure channels and policies such as HTTPS, HTTP Auth, tokens and WS-Security policies.

The datasets already integrated in Geofairy.Map include (both NGA and other public datasets):

  • Hurricane_Hermine
  • Disaster Response – NEPAL
  • Wildlife Trafficking
  • Geonames – foreign
  • Navy BlueMarble – low res (WMS)
  • Navy BlueMarble – low res (WMTS)
  • Sample NITF Data
  • Sample Landsat Data
  • FAA Airports
  • Global Ground Control Point Dataset
  • Asia Maritime Transparency (Public service)
  • Bathymetric, Geophysical Maps (NOAA)
  • West Africa Ebola Outbreak
  • Data for World Food Program
  • U.S. Fire Data
  • NOAA Historical Hazard
  • Global Weather Map (from OpenWeatherMap)
  • USDA Land Cover Product (Crop Data Layer)
  • USDA National Hardiness Zone Map
  • USGS DEM 90 Meters
  • NASA AIRS Product
  • NASA AIRS NRT Product
  • NASA TRMM Product
  • NASA OMI Product
  • Vegetation Monitoring Product from VegScape

A demo video of the web page version of Geofairy.Map is here:

A demo video of the mobile App version of Geofairy.Map is here:

Great thanks to the authors of GeoFairy.Map (ordered by contribution from high to low):

Ziheng Sun

Liping Di

Gil Heo

Chen Zhang

Ziao Liu

Hui Fang

Liping Guo

© All rights reserved by the authors.

How to configure a MySQL database for GeoNetwork 3.0.0+ on CentOS 6.6

1. Install Java and Tomcat on CentOS.

2. Download the GeoNetwork 3.0.2 or 3.0.3 war package.

3. Use the Tomcat Web Application Manager to upload the GeoNetwork war and deploy it.

4. After the deployment, edit $tomcat_home/webapps/geonetwork/WEB-INF/config-node/srv.xml and choose MySQL as the database.

5. Then update WEB-INF/config-db/ with the MySQL connection information.

6. Then, the most important part: make MySQL table names case-insensitive and create an empty database named geonetwork in MySQL before restarting Tomcat.

Make MySQL table names case-insensitive:

Add the following line to the mysqld config in /etc/my.cnf:
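The post leaves the actual line out; a hedged guess at what belongs there: the standard MySQL server option for case-insensitive table names is lower_case_table_names (check the MySQL manual for the value appropriate to your version and platform):

```ini
# /etc/my.cnf -- goes under the [mysqld] section
# lower_case_table_names=1 stores table names in lowercase and makes
# name comparisons case-insensitive (a standard MySQL server option)
[mysqld]
lower_case_table_names=1
```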

Create an empty database in MySQL:

mysql> create database geonetwork;

Then restart Tomcat and open the GeoNetwork link to test whether GeoNetwork is successfully configured.
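The restart-and-check step can also be scripted; a minimal sketch, assuming the Tomcat service name is "tomcat" and it listens on the default port 8080 (both may differ on your machine):

```shell
# Restart Tomcat and print the HTTP status code GeoNetwork returns
# (200 or a redirect means the webapp deployed and the DB config loaded)
sudo service tomcat restart
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/geonetwork
```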


Good luck!