Deeplearning4j CPU MKL DLL problem

Recently, I have been exercising the DeepLearning4j library, which is developing rapidly and has a growing user community. I use it because Java is my “mother” language.

I cloned the dl4j-examples and ran them without problems for nearly a year. In the last two months, I left them behind and focused on something else (and installed some software for the new tasks). Yesterday, when I reran the dl4j-examples, the program exited with the message “Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll”. I searched all over the web and tried recompiling through ND4J. But nothing helped.

After reading this page, I understood that DL4J can use either MKL or OpenBLAS. I never installed MKL and always used OpenBLAS, which (as I understand it) is a built-in module of ND4J. Why did ND4J suddenly choose MKL as its BLAS vendor instead of OpenBLAS?

Eventually, my solution was to check the PATH environment variable. My case is on Windows. Go to the environment variable settings page and examine the PATH variable carefully. Remove the paths that may trick DL4J into believing your machine has MKL installed. After the change, make sure to reboot your machine so the change takes effect; the JVM won’t pick up the new PATH until it is restarted.

In my case, I removed the paths that were apparently appended to the PATH variable after I installed the new software. The problem was solved.
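To spot the offending entries quickly, here is a minimal Java sketch (the class name and the “contains mkl” heuristic are mine, not anything from ND4J) that lists every PATH entry mentioning MKL so you can inspect it before deleting anything:

```java
import java.util.ArrayList;
import java.util.List;

public class PathChecker {
    // Return the PATH entries that mention MKL and could mislead
    // ND4J's BLAS backend detection. Heuristic only: review matches by hand.
    static List<String> suspiciousEntries(String path) {
        List<String> hits = new ArrayList<>();
        for (String entry : path.split(java.io.File.pathSeparator)) {
            if (entry.toLowerCase().contains("mkl")) {
                hits.add(entry);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        String path = System.getenv("PATH");
        for (String e : suspiciousEntries(path == null ? "" : path)) {
            System.out.println("Suspicious PATH entry: " + e);
        }
    }
}
```

Run it with the same JVM you use for DL4J, since that is the environment the library actually sees.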

Here is the log of OpenBLAS:

 15:41:16.342 [main] INFO org.reflections.Reflections - Reflections took 890 ms to scan 13 urls, producing 31 keys and 227 values
 15:41:26.898 [main] INFO org.nd4j.nativeblas.Nd4jBlas - Number of threads used for BLAS: 2
 15:41:26.995 [main] INFO o.n.l.a.o.e.DefaultOpExecutioner - Backend used: [CPU]; OS: [Windows 7]
 15:41:26.995 [main] INFO o.n.l.a.o.e.DefaultOpExecutioner - Cores: [4]; Memory: [1.8GB];
 15:41:26.996 [main] INFO o.n.l.a.o.e.DefaultOpExecutioner - Blas vendor: [OPENBLAS]
 15:41:28.658 [main] DEBUG org.reflections.Reflections - going to scan these urls:

Using Long Short-Term Memory Recurrent Neural Network in Land Cover Classification on Landsat and Cropland Data Layer time series

Land cover maps are significant in agricultural analysis. However, the existing workflow for producing such maps takes too long. This work builds a long short-term memory (LSTM) recurrent neural network (RNN) model to improve the update frequency. An end-to-end framework is proposed to train the model. Landsat scenes are used as Earth observations, while field-measured data and the CDL (Cropland Data Layer) are used as ground truth. The network is trained using state-of-the-art techniques. Finally, we tested the network on multiple Landsat images to produce five-class land cover maps with timestamps. The results are visualized and compared with the CDL and ground truth. The experiment shows a satisfactory overall accuracy (>97%) and proves the feasibility of the model. This study paves a path toward efficiently using LSTM RNNs in remote sensing image classification.


GeoFairy.Map wins in NGA Disparate Data Competition


GeoFairy.Map, derived from GeoFairy, is one of the final winners of Stage 1 of the NGA Disparate Data Competition.

GeoFairy.Map is a system created as an easy entry point to the widely disparate datasets faced by both governments and corporations today. The system hubs many kinds of data sources, especially governmental datasets from NGA, NOAA, NASA, USGS, EPA, etc., and makes it much easier for people to discover, access, view, and retrieve them in as few clicks as possible. With GeoFairy.Map, users no longer need to jump around the Internet and exhaust themselves to reach the information they are interested in; the time users spend on data access is significantly reduced. This is very helpful in emergency situations such as military actions and disaster response, as well as for quick daily information inquiries.

The datasets in GeoFairy.Map are very heterogeneous in format, including WMS, WMTS, ArcGIS REST, GeoJSON, GeoPackage, ESRI Shapefile, KML, NITF, GeoTIFF, CSV, XLS, XML, and GML. GeoFairy.Map manages to handle them all through a uniform, intuitive, and secure interface, with both web-based and mobile clients.

Every time a user submits a request, GeoFairy.Map performs security checks on the client, the network, and the server; the operation proceeds only when clearance is granted for the current communication. Among these checks, client identity and network anti-hijacking are the two major items that must be validated. We designed, and are building, GeoFairy.Map to take advantage of the Apple iOS fingerprint system to identify users. On the back end, since many data sources restrict access, server-to-server communication follows secure channels and policies such as HTTPS, HTTP Auth, tokens, and WS-Security.

The datasets already integrated into GeoFairy.Map include both NGA and other public datasets:

  • Hurricane_Hermine
  • Disaster Response – NEPAL
  • Wildlife Trafficking
  • Geonames – foreign
  • Navy BlueMarble – low res (WMS)
  • Navy BlueMarble – low res (WMTS)
  • Sample NITF Data
  • Sample Landsat Data
  • FAA Airports
  • Global Ground Control Point Dataset
  • Asia Maritime Transparency (public service)
  • Bathymetric and Geophysical Maps (NOAA)
  • West Africa Ebola Outbreak
  • Data for World Food Program
  • S. Fire Data
  • NOAA Historical Hazard
  • Global Weather Map (from OpenWeatherMap)
  • USDA Land Cover Product (Crop Data Layer)
  • USDA Plant Hardiness Zone Map
  • USGS DEM 90 Meters
  • NASA AIRS Product
  • NASA AIRS NRT Product
  • NASA TRMM Product
  • NASA OMI Product
  • Vegetation Monitoring Product from VegScape

A demo video of the web page version of Geofairy.Map is here:

A demo video of the mobile App version of Geofairy.Map is here:

Great thanks to the authors of GeoFairy.Map (ordered by contribution from high to low):

Ziheng Sun

Liping Di

Gil Heo

Chen Zhang

Ziao Liu

Hui Fang

Liping Guo

©All rights are reserved by authors.

How to configure a MySQL database for GeoNetwork 3.0.0+ on CentOS 6.6

1 Install Java and Tomcat on CentOS.

2 Download the GeoNetwork 3.0.2 or 3.0.3 WAR package.

3 Use the Tomcat Web Application Manager to upload the GeoNetwork WAR and deploy it.

4 After the deployment, open $tomcat_home/webapps/geonetwork/WEB-INF/config-node/srv.xml and choose MySQL as the database.

5 Then update the WEB-INF/configure-db/ with the MySQL connection information.

6 Then the most important part: make MySQL table names case insensitive and create an empty database named geonetwork in MySQL before restarting Tomcat.

Make MySQL table names case insensitive:

Add the following line to the mysqld
config in /etc/my.cnf
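The original post does not show the line itself; in MySQL, table-name case handling is controlled by the lower_case_table_names system variable, so the line in the [mysqld] section is presumably:

```
[mysqld]
lower_case_table_names=1
```

With this setting, MySQL stores table names in lowercase and compares them case-insensitively.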

Create an empty database in MySQL:

mysql>create database geonetwork;

Then restart Tomcat and open the GeoNetwork URL to test whether GeoNetwork is successfully configured.


Good luck!

If a Word docx or doc document cannot be opened because “the name in the end tag of the element must match the element type in the start tag”

This is a sad story with a happy ending. I believe a lot of people have met the same problem as I have, but came to a sad end eventually. Thus, I decided to spend half an hour writing about how I solved this annoying problem, to help those who are desperate to recover their valuable documents.

Sometimes, a very common operation in the famous Microsoft Office Word software may cause a serious crash, just like the following picture shows. Forgive me that some of the information is displayed in Chinese; you do not have to understand it. It tells only one thing: you are in big trouble now.


Fig. 1

People usually search for the problem on Google and find solutions like downloading a Microsoft fixing tool called MicrosoftFixit.wordopenclosetag.Run.exe, then installing and running it to fix the document. Maybe the tool can fix some issues, but not mine. I realized that after numerous useless tries (that is why I hate Microsoft’s many tools and patches). So if you try it once and it does not work, drop it instantly and do not waste more time on the tool. Try the method introduced in this article.

OK, no more nonsense. Let’s begin.

First, back up your crashed document. Even though it cannot be opened, that doesn’t mean the content in it is gone. It is still there. So please be very cautious: don’t lose your temper and delete it. Calm down, make a copy, and put it aside. Then rename the suffix of the document from .docx to .rar.


Fig. 2

Then, unzip the .rar to a folder. You will see the actual structure of the mysterious Word file. It includes three folders (_rels, docProps, and word) and a file ([Content_Types].xml).
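Incidentally, a .docx is just a ZIP archive, so the renaming trick works with any ZIP tool; for the programmatically inclined, here is a minimal Java sketch (class and method names are mine; it assumes Java 9+ for InputStream.transferTo) that extracts it directly:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class DocxExtractor {
    // Extract every entry of a .docx (a plain ZIP archive) into targetDir,
    // recreating the _rels / docProps / word folder structure.
    static void extract(File docx, File targetDir) throws Exception {
        try (ZipFile zip = new ZipFile(docx)) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                File out = new File(targetDir, entry.getName());
                if (entry.isDirectory()) {
                    out.mkdirs();
                    continue;
                }
                out.getParentFile().mkdirs();
                try (InputStream in = zip.getInputStream(entry);
                     FileOutputStream fos = new FileOutputStream(out)) {
                    in.transferTo(fos);
                }
            }
        }
    }
}
```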


Fig. 3

Enter into the word folder.


Fig. 4

Open the file document.xml in a text editor. I suggest Notepad++. You can use your favorite editor, but it must be able to render XML with syntax colors and highlight the XML structure.


Fig. 5

The content of document.xml is very large but has only two lines. In order to check for problems in the XML, we need to reformat it first. The tool I used for reformatting is the Eclipse Indigo IDE, which is open source and free to download. Create a Java project in Eclipse and create a new file named test.xml. Copy all the content of document.xml into test.xml.


Fig. 6

In Eclipse, right-click in test.xml, choose “Source”, and click “Format”. The XML will be formatted into a very friendly style. Copy the formatted content back into document.xml and save it.

The next step is to locate where the error is. Zip the three folders and the file back into a .zip file. An important notice here: the folders and file must be on the first level of the zip file, so the correct way to create the zip file is shown in the following figures.


Fig. 7

Select the three folders and the file. Right-click on them and click “Add to archive”. A window like Fig. 8 shows up.


Fig. 8

Select ZIP and click OK. Rename the suffix of the zip file from .zip back to .docx. Open it with Word. An error box (Fig. 9) pops up. Click “details” and you will see the location of the error.


Fig. 9

See, it says the error is at row 37507, column 8. So go back to document.xml in Notepad++, locate that position, and observe. Here are some tricks to quickly find the error: fold the XML tags around the error location by clicking the plus symbols in the left margin of Notepad++. Other editors may have similar symbols; if not, download Notepad++, which is also free to use.
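If you would rather not round-trip through Word just to learn the position, a small Java sketch (the class name is mine; any strict XML parser would do) can parse document.xml and print the line and column of the first well-formedness error:

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.SAXParseException;

public class XmlErrorLocator {
    // Parse an XML file strictly and report where the first
    // well-formedness error (e.g. a missing end tag) occurs.
    static String locateError(File xml) {
        try {
            DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(xml);
            return "well-formed";
        } catch (SAXParseException e) {
            return "error at line " + e.getLineNumber()
                    + ", column " + e.getColumnNumber()
                    + ": " + e.getMessage();
        } catch (Exception e) {
            return "error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(locateError(new File(args[0])));
    }
}
```

Run it on the reformatted document.xml so the reported line numbers line up with what you see in the editor.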


Fig. 10


Fig. 11

After folding several tags, I found the problem: the tag <mc:Fallback> on line 37307 and the tag <m:r> on line 36335 have no end tags (Fig. 10). So I needed to close the two tags. I can use the other, complete tags as examples and follow the same pattern to fill in the missing end tags. In Fig. 11, the complete content closing the tag <mc:Fallback> is:

<w:pict />

However, in Fig. 10 only the first part appears on line 37307:

<w:pict />

So, we add the missing second part to make it complete.



Fig. 12

It seems good. Let’s see if it works. Save the modified document.xml and repeat the zipping step to pack the folders and file back into a Word docx.
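The repacking can also be scripted. Here is a hedged Java sketch (class and method names are mine) that zips the extracted folder back into a docx while keeping every entry at the archive’s top level, which is the constraint stressed above:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class DocxRepacker {
    // Zip the extracted folders/files back into a docx. Entry names are
    // made relative to sourceDir, so _rels, docProps, word and
    // [Content_Types].xml all sit at the top level of the archive.
    static void repack(Path sourceDir, Path docx) throws IOException {
        List<Path> regularFiles;
        try (Stream<Path> walk = Files.walk(sourceDir)) {
            regularFiles = walk.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(docx))) {
            for (Path p : regularFiles) {
                // ZIP entries use '/' separators regardless of platform.
                String name = sourceDir.relativize(p).toString().replace('\\', '/');
                zos.putNextEntry(new ZipEntry(name));
                Files.copy(p, zos);
                zos.closeEntry();
            }
        }
    }
}
```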


Bingo! It is back! My work is back!!!

Well, let’s conclude the whole process. The most difficult part is finding the missing end tags. Don’t lose faith; stay patient and you will find the error eventually. It is even easier than a crossword puzzle. Remember that the missing tags are usually close together; it is almost impossible for more than one place to be corrupted at the same time. Try to find that one place: if it is fixed, all is fixed. Good luck!

P.S. Although this approach depends on nobody and costs no money, it does take your time, maybe an hour or two. To avoid this, remember to always back up your important documents with version numbers.

How to export JavaDoc from an Eclipse project in English language

If you create a Java API document with javadoc.exe and your system language is not English, in Eclipse you may get the resulting Java API documentation in a language other than English. Encoding problems may also leave the navigation bar full of garbled, unrecognizable characters like this.


In order to render the navigation bar in correct English, you just need to make a small change. Go to the last page of the Javadoc export wizard in Eclipse and add the following string to the “extra javadoc options” text area.

-locale en


Click the Finish button, and the generated Java API web pages will be in English. Another problem solved. Bingo!
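The same option works outside Eclipse, too: if you invoke the javadoc tool from the command line, pass -locale directly (the output directory, source path, and package below are placeholders; adjust them to your project):

```
javadoc -locale en -d doc -sourcepath src -subpackages com.example
```

Note that javadoc expects -locale to appear before most other options, which is also why Eclipse puts it in the “extra javadoc options” field.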