Merge pull request #5 from developmentseed/image

Merging major changes
This commit is contained in:
Scisco
2014-08-28 18:48:04 -04:00
22 changed files with 828 additions and 345 deletions

.gitignore vendored

@@ -13,4 +13,5 @@ MANIFEST
*flymake.py
.settings
metadata.csv
txt.txt

Formula/landsat-util.rb Normal file

@@ -0,0 +1,22 @@
require "formula"
class LandsatUtil < Formula
homepage "http://www.developmentseed.org"
url "https://github.com/developmentseed/landsat-util/archive/v0.1.0.tar.gz"
sha1 "522e460c7cb8c229c3bf4e3d965b0c9f5924c1e9"
head "https://github.com/developmentseed/landsat-util.git", :branch => 'image'
depends_on "gdal"
depends_on "libtiff"
depends_on "imagemagick" => "with-libtiff"
depends_on "https://raw.githubusercontent.com/OSGeo/homebrew-osgeo4mac/master/Formula/orfeo-40.rb"
def install
minor = `python -c 'import sys; print(sys.version_info[1])'`.chomp
ENV.prepend_create_path "PYTHONPATH", libexec/"lib/python2.#{minor}/site-packages"
system "python", "setup.py", "install",
"--prefix=#{libexec}",
"--install-scripts=#{bin}"
bin.env_script_all_files(libexec+"bin", :PYTHONPATH => ENV["PYTHONPATH"])
end
end


@@ -3,4 +3,4 @@ include .gitignore
include LICENSE
recursive-include landsat/assests *.prj *.sbn *.sbx
recursive-include landsat/assests *.shp *.xml *.shx *.html *.txt *.dbf
recursive-include docs *.html
recursive-include doc *.html


@@ -1,4 +1,4 @@
Landsat Utility
Landsat-util
===============
A utility to search, download and process Landsat 8 satellite imagery.
@@ -14,32 +14,44 @@ Installation
**On Mac**
You need to install gdal before using this tool. You can try brew:
Use brew to install landsat-util:
.. code-block:: console
$: brew update
$: brew install gdal
$: brew install https://raw.githubusercontent.com/developmentseed/landsat-util/image/Formula/landsat-util.rb
**On Ubuntu (Tested on Ubuntu 14.04)**
For the dev version try:
Install PIP and some other dependencies for a successful install of requirements.txt
.. code-block:: console
$: brew install https://raw.githubusercontent.com/developmentseed/landsat-util/image/Formula/landsat-util.rb --HEAD
**On Ubuntu**
Use pip to install landsat-util:
.. code-block:: console
$: sudo apt-add-repository ppa:ubuntugis/ubuntugis-unstable
$: sudo apt-get update
$: sudo apt-get install python-pip build-essential libssl-dev libffi-dev python-dev python-gdal -y
$: sudo apt-get install git python-pip build-essential libssl-dev libffi-dev python-dev python-gdal libgdal1-dev gdal-bin -y
$: sudo pip install -U git+git://github.com/developmentseed/landsat-util.git
**Installing Landsat8 Utility**
**On Other systems**
Either use pip or easy_install to install the utility:
Make sure you have these dependencies:
- GDAL
- ImageMagick
- Orfeo-40
Then Run:
.. code-block:: console
$: pip install git+git://github.com/developmentseed/landsat-util.git
$: pip install -U git+git://github.com/developmentseed/landsat-util.git
or download the repository and run:
Alternatively, you can also download the package and run:
.. code-block:: console
@@ -52,89 +64,76 @@ Usage
$: landsat -h
.. code-block:: console
**Search**
Usage:
Landsat-util helps you search Landsat8 metadata and download the
images that match your search criteria.
With landsat-util you can also find rows and paths of an area by searching
a country name or using a custom shapefile and use the result to further
narrow your search.
Syntax:
landsat.py [OPTIONS]
Example uses:
To search and download images of row 003 and path 003 within a date range
with cloud coverage of maximum 3.0%:
landsat.py -s 01/01/2014 -e 06/01/2014 -l 100 -c 3 -i "003,003"
Options:
-h, --help show this help message and exit
Search:
To search Landsat's Metadata use these options:
-i "path,row,path,row, ... ", --rows_paths="path,row,path,row, ... "
Include a search array in this
format:"path,row,path,row, ... "
-s 01/27/2014, --start=01/27/2014
Start Date - Format: MM/DD/YYYY
-e 02/27/2014, --end=02/27/2014
End Date - Format: MM/DD/YYYY
-c 1.00, --cloud=1.00
Maximum cloud percentage
-l 100, --limit=100
Limit results. Max is 100
-d, --direct Only search scene_files and don't use the API
Clipper:
To find rows and paths of a shapefile or a country use these options:
-f /path/to/my_shapefile.shp, --shapefile=/path/to/my_shapefile.shp
Path to a shapefile for generating the rows and paths.
-o Italy, --country=Italy
Enter country NAME or CODE that will designate imagery
area, for a list of country syntax visit:
http://goo.gl/8H9wuq
Metadata Updater:
Use this option to update Landsat API if you have a local copy running
-u, --update-metadata
Update ElasticSearch Metadata. Requires accessto an
Elastic Search instance
**Example**
To Search rows and paths with date and cloud coverage limit and download images
.. code-block:: console
$: landsat -m --rows_paths="013,044" --cloud=5 --start=04/01/2014
$: landsat search --cloud 6 --start "july 01 2014" --end "august 1 2014" pr 165 100
Make sure to use right format for rows and paths. For example instead of using ``3`` use ``003``.
**Output folder structure**
The output is saved in the home directory of the user
To only search the rows and paths but not to download
.. code-block:: console
|-- Home Folder
| |-- output
| | |-- imagery
| | | |-- file_scene
| | | |-- zip
| | | | |-- LC80030032014174LGN00.tar.bz
| | | |-- unzip
| | | | |-- LC80030032014174LGN00
| | | | |-- LC80030032014174LGN00_B1.TIF
| | | | |-- LC80030032014174LGN00_B2.TIF
| | | | |-- LC80030032014174LGN00_B3.TIF
| | | | |-- LC80030032014174LGN00_B4.TIF
| | | | |-- LC80030032014174LGN00_MTL.txt
$: landsat search --onlysearch --cloud 6 --start "july 01 2014" --end "august 1 2014" pr 165 100
To find rows and paths in a shapefile and download with dates and cloud coverage
.. code-block:: console
$: landsat search --cloud 6 --start "july 01 2014" --end "august 1 2014" shapefile path/to/shapefile.shp
To find rows and paths in a shapefile and download and process images all together
.. code-block:: console
$: landsat search --imageprocess --cloud 6 --start "july 01 2014" --end "august 1 2014" shapefile path/to/shapefile.shp
To find rows and paths of a country and download images (The full list is http://goo.gl/8H9wuq)
.. code-block:: console
$: landsat search --cloud 6 --start "july 01 2014" --end "august 1 2014" country Singapore
**Download**
To download scene images directly
.. code-block:: console
$: landsat download LC80030032014142LGN00 LC80030032014158LGN00
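A Landsat 8 scene ID encodes the path and row at fixed character offsets, which is how the downloader knows where to look on Google Storage. A minimal sketch of that slicing (an illustrative function; the utility has its own helper for this):

```python
def extract_path_row(scene_id):
    """Slice path and row out of a scene ID such as LC80030032014142LGN00.

    Characters 3-5 hold the path, characters 6-8 hold the row.
    """
    return scene_id[3:6], scene_id[6:9]

print(extract_path_row('LC80030032014142LGN00'))  # ('003', '003')
```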
**Image Process**
To process images that are already downloaded. Remember, the system only accepts compressed scene files (e.g. ``.tar.bz``)
.. code-block:: console
$: landsat process path/to/LC80030032014158LGN00.tar.bz
To pan sharpen the image
.. code-block:: console
$: landsat process --pansharpen path/to/LC80030032014158LGN00.tar.bz
Important Notes
===============
- All downloaded and processed images are stored in the landsat folder in your home directory: ``~/landsat``
- If you are not sure what images you are looking for, use the ``--onlysearch`` flag to view the results first. Run the search again if you need to narrow down your results, and then start downloading images. Each image is usually more than 700mb and it might take a very long time if there are too many images to download
- Image processing is a very heavy and resource-consuming task. Each process takes about 20-30 mins. We recommend that you run the processes in smaller batches
To Do List
++++++++++
- Add longitude latitude search
- Add Sphinx Documentation
- Improve console output
- Add more color options such as false color, true color, etc.
- Connect search to Google Address API


@@ -0,0 +1,5 @@
import settings
import sys
if not settings.DEBUG:
sys.tracebacklimit = 0
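As a quick sketch of the effect (not part of landsat-util): setting ``sys.tracebacklimit = 0`` suppresses the stack frames, so in non-debug mode users see only the final exception line instead of a full traceback.

```python
import sys
import traceback

def noisy():
    raise ValueError("boom")

# With tracebacklimit set to 0, the traceback machinery renders no
# stack frames, so only the final "ValueError: boom" line survives.
sys.tracebacklimit = 0
try:
    noisy()
except ValueError:
    lines = traceback.format_exc()
print(lines)
```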


@@ -64,7 +64,7 @@ class Clipper(object):
rps = self.__generate_path_row('landsat-tiles.shp', 'landsat-tiles')
self._cleanup()
return rps
except OgrError:
except ogr2ogr.OgrError:
return False
def _cleanup(self):


@@ -34,17 +34,23 @@ def create_paired_list(i):
i - the format must be 003,003,004,004 (commas with no space)
Returns:
[('003','003'), ('004', '004')]
[['003','003'], ['004', '004']]
"""
if type(i) is str:
if isinstance(i, str):
array = i.split(',')
elif type(i) is list:
array = i
elif isinstance(i, list):
# Make sure it is not already paired
if isinstance(i[0], list) or isinstance(i[0], tuple):
return i
else:
array = i
else:
return i
# Make sure the elements in the list are even and pairable
if len(array) % 2 == 0:
new_array = [tuple(array[i:i + 2])
new_array = [list(array[i:i + 2])
for i in range(0, len(array), 2)]
return new_array
else:

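The pairing behavior described in the hunk above can be seen end to end in a simplified standalone re-implementation (illustrative only, not the module's code):

```python
def create_paired_list(value):
    """Pair '003,003,004,004' or ['003','003','004','004'] into
    [['003', '003'], ['004', '004']]; already-paired input passes through."""
    if isinstance(value, str):
        array = value.split(',')
    elif isinstance(value, list):
        if value and isinstance(value[0], (list, tuple)):
            return value  # already paired
        array = value
    else:
        return value
    if len(array) % 2 != 0:
        raise ValueError('The input must have an even number of elements')
    return [list(array[i:i + 2]) for i in range(0, len(array), 2)]

print(create_paired_list('003,003,004,004'))  # [['003', '003'], ['004', '004']]
```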

@@ -104,6 +104,9 @@ class GsHelper(object):
except subprocess.CalledProcessError:
return False
def extract_row_path(self, scene_name):
return [scene_name[3:6], scene_name[6:9]]
def unzip(self):
"""
Unzip all files stored at settings.ZIP_DIR and save them in

landsat/image_helper.py Normal file

@@ -0,0 +1,438 @@
# USGS Landsat Imagery Util
#
#
# Author: developmentseed
# Contributor: scisco, KAPPS-
#
# License: CC0 1.0 Universal
import os
import subprocess
import errno
import shutil
import tarfile
from tempfile import mkdtemp
import numpy
from osgeo import gdal
try:
import cv2
except ImportError:
pass
import settings
from general_helper import check_create_folder, get_file
def gdalwarp(src, dst, t_srs=None):
""" A subprocess wrapper for gdalwarp """
argv = ['gdalwarp']
if t_srs:
argv.append('-t_srs')
argv.append(t_srs)
argv.append('-overwrite')
argv.append(src)
argv.append(dst)
return subprocess.check_call(argv)
def gdal_translate(src, dst, **kwargs):
""" A subprocess wrapper for gdal_translate """
argv = ['gdal_translate']
for key, value in kwargs.iteritems():
argv.append('-%s' % key)
if isinstance(value, list):
for item in value:
argv.append(str(item))
else:
argv.append(str(value))
argv.append(src)
argv.append(dst)
return subprocess.check_call(argv)
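The kwargs-to-flags expansion in the wrapper above can be exercised in isolation by building the argument list without invoking GDAL (the function name here is hypothetical, chosen just for the sketch):

```python
def build_translate_argv(src, dst, **kwargs):
    # Mirrors the wrapper above: each keyword becomes a "-flag" followed
    # by its value; list values expand into multiple positional arguments.
    argv = ['gdal_translate']
    for key, value in kwargs.items():
        argv.append('-%s' % key)
        if isinstance(value, list):
            argv.extend(str(item) for item in value)
        else:
            argv.append(str(value))
    argv.extend([src, dst])
    return argv

print(build_translate_argv('in.TIF', 'out.TIF', ot='byte', scale=[0, 255, 1, 255]))
```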
class Process(object):
""" Full image processing class
Steps needed for a full process
1) _wrap()
2) _scale_pan()
3) _combine()
4) _image_correction()
5) _final_conversions()
"""
def __init__(self, zip_image, bands=[4, 3, 2], path=None):
""" Initiating the Process class
Arguments:
zip_image - path to the compressed scene file, e.g. LC80030032014158LGN00.tar.bz
bands - a list of desired bands. Default is for True color
path - Path to where the image folder is located
"""
self.image = get_file(zip_image).split('.')[0]
self.destination = settings.PROCESSED_IMAGE
self.bands = bands
self.btm_prct = 2
self.top_prct = 2
if path:
self.path = path
self.temp = mkdtemp()
self.src_image_path = self.temp + '/' + self.image
self.warp_path = self.temp + '/' + self.image + '/warp'
self.scaled_path = self.temp + '/' + self.image + '/scaled'
self.final_path = self.temp + '/' + self.image + '/final'
self.delivery_path = self.destination + '/' + self.image
check_create_folder(self.src_image_path)
check_create_folder(self.warp_path)
check_create_folder(self.scaled_path)
check_create_folder(self.final_path)
check_create_folder(self.delivery_path)
self._unzip(zip_image, self.src_image_path)
def full(self):
""" Conducts the full image processing """
self._warp()
self._scale_pan()
self._combine()
self._image_correction()
self._final_conversions()
final_image = self._create_mask()
shutil.copy(final_image, self.delivery_path)
self._cleanup()
return
def full_with_pansharpening(self):
self._warp()
self._scale_pan()
self._combine()
self._image_correction()
self._final_conversions()
final_image = self._create_mask()
shutil.copy(final_image, self.delivery_path)
shutil.copy(self._pansharpen(), self.delivery_path)
self._cleanup()
return
def _cleanup(self):
""" Remove temp folder """
try:
shutil.rmtree(self.temp)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
def _pansharpen(self):
shutil.copy('%s/%s_B4.tfw' % (self.warp_path, self.image),
'%s/comp.tfw' % self.final_path)
argv = ['gdal_edit.py', '-a_srs', 'EPSG:3857',
'%s/comp.TIF' % self.final_path]
subprocess.check_call(argv)
argv = ['otbcli_BundleToPerfectSensor',
# '-ram', '6500',
'-inp', '%s/%s_B8.TIF' % (self.warp_path, self.image),
'-inxs', '%s/comp.TIF' % self.final_path,
'-out', '%s/pan.TIF' % self.final_path,
'uint16']
subprocess.check_call(argv)
for i in range(1, 4):
gdal_translate('%s/pan.TIF' % self.final_path,
'%s/pan-%s.TIF' % (self.final_path, i),
b=i)
argv = ['convert', '-combine']
for i in range(1, 4):
argv.append('%s/pan-%s.TIF' % (self.final_path, i))
argv.append('%s/pan.TIF' % self.final_path)
subprocess.check_call(argv)
argv = ['convert', '-depth', '8',
'%s/pan.TIF' % self.final_path,
'%s/final-pan.TIF' % self.final_path]
subprocess.check_call(argv)
argv = ['listgeo', '-tfw',
'%s/%s_B8.TIF' % (self.warp_path, self.image)]
subprocess.check_call(argv)
shutil.copy('%s/%s_B8.tfw' % (self.warp_path, self.image),
'%s/final-pan.tfw' % self.final_path)
argv = ['gdal_edit.py', '-a_srs', 'EPSG:3857',
'%s/final-pan.TIF' % self.final_path]
subprocess.check_call(argv)
return '%s/final-pan.TIF' % self.final_path
def _create_mask(self):
argv = ['gdal_calc.py',
'-A', '%s/%s_B2.TIF' % (self.warp_path, self.image),
'--outfile=%s/band-mask.TIF' % self.final_path,
'--calc=1*(A>0)',
'--type=UInt16']
subprocess.check_call(argv)
for i in range(1, 4):
gdal_translate('%s/final-color.TIF' % self.final_path,
'%s/band-%s.TIF' % (self.final_path, i),
b=i)
for i in range(1, 4):
argv = ['gdal_calc.py',
'-A', '%s/band-%s.TIF' % (self.final_path, i),
'-B', '%s/band-mask.TIF' % (self.final_path),
'--outfile=%s/masked-final-%s.TIF' % (self.final_path, i),
'--calc=A*B',
'--type=UInt16']
subprocess.check_call(argv)
argv = ['convert', '-combine']
for i in range(1, 4):
argv.append('%s/masked-final-%s.TIF' % (self.final_path, i))
argv.append('%s/comp.TIF' % self.final_path)
subprocess.check_call(argv)
argv = ['convert', '-depth', '8',
'%s/comp.TIF' % self.final_path,
'%s/final.TIF' % self.final_path]
subprocess.check_call(argv)
argv = ['listgeo', '-tfw',
'%s/%s_B4.TIF' % (self.warp_path, self.image)]
subprocess.check_call(argv)
shutil.copy('%s/%s_B4.tfw' % (self.warp_path, self.image),
'%s/final.tfw' % self.final_path)
argv = ['gdal_edit.py', '-a_srs', 'EPSG:3857',
'%s/final.TIF' % self.final_path]
subprocess.check_call(argv)
return '%s/final.TIF' % self.final_path
def _final_conversions(self):
""" Final color conversions. Return final image temp path """
print 'Convert: applying image tweaks'
# First conversion
argv = ['convert',
'-channel', 'B', '-gamma', '0.97',
'-channel', 'R', '-gamma', '1.04',
'-channel', 'RGB', '-sigmoidal-contrast', '40x15%',
'%s/rgb-null.TIF' % self.final_path,
'%s/rgb-sig.TIF' % self.final_path]
subprocess.check_call(argv)
# Second conversion
argv = ['convert',
'-channel', 'B', '-gamma', '0.97',
'-channel', 'R', '-gamma', '1.04',
'%s/rgb-scaled.TIF' % self.final_path,
'%s/rgb-scaled-cc.TIF' % self.final_path]
subprocess.check_call(argv)
print 'Convert: averaging'
# Fourth conversion
argv = ['convert',
'%s/rgb-sig.TIF' % self.final_path,
'%s/rgb-scaled-cc.TIF' % self.final_path,
'-evaluate-sequence', 'mean',
'%s/final-color.TIF' % self.final_path]
subprocess.check_call(argv)
def _image_correction(self):
try:
corrected_list = []
band_correction = [[2, 0.97], [4, 1.04]]
for band in self.bands:
print 'Starting the image processing'
file_path = ('%s/%s_B%s.TIF' % (self.warp_path,
self.image, band))
img = cv2.imread(file_path, 0) #-1 if the next package is released and includes (https://github.com/Itseez/opencv/pull/3033)
# Gamma Correction
for c in band_correction:
if c[0] == band:
img = img ** c[1]
# adding color corrected band back to list
corrected_list.append(img.astype(numpy.uint8))
# combining bands in list into a bgr img (opencv format for true color)
b, g, r = corrected_list[2], corrected_list[1], corrected_list[0]
img_comp = cv2.merge((b, g, r))
# converting bgr to ycrcb
imgy = cv2.cvtColor(img_comp, cv2.COLOR_BGR2YCR_CB)
# extracting y
y, cr, cb = cv2.split(imgy)
# equalizing y with CLAHE
clahe = cv2.createCLAHE(clipLimit=1.0, tileGridSize=(950, 950))
y = clahe.apply(y)
# merging equalized y with cr and cb
imgy = cv2.merge((y, cr, cb))
# converting ycrcb back to bgr
img = cv2.cvtColor(imgy, cv2.COLOR_YCR_CB2BGR)
# writing final equalized file
cv2.imwrite('%s/eq-hist.tif' % self.final_path, img)
except NameError, e:
print e.args[0]
print "Skipping Image Correction using OpenCV"
def _combine(self):
argv = ['convert', '-identify', '-combine']
for band in self.bands:
argv.append('%s/%s_B%s.TIF' % (self.warp_path, self.image, band))
argv.append('%s/rgb-null.TIF' % self.final_path)
subprocess.check_call(argv)
argv = ['convert', '-identify', '-combine']
for band in self.bands:
argv.append('%s/%s_B%s.TIF' % (self.scaled_path, self.image, band))
argv.append('%s/rgb-scaled.TIF' % self.final_path)
subprocess.check_call(argv)
def _scale_pan(self):
""" scaling pan to min max with 2 percent cut """
min_max = self._calculate_min_max()
# min_max = [6247, 32888]
min_max.extend([1, 255])
for band in self.bands:
print 'scaling pan to min max with 2%% cut for band %s' % band
gdal_translate('%s/%s_B%s.TIF' % (self.warp_path,
self.image, band),
'%s/%s_B%s.TIF' % (self.scaled_path,
self.image, band),
ot='byte', scale=min_max
)
def _calculate_min_max(self):
""" Calculate Min/Max values with 2 percent cut """
min_max_list = []
for band in self.bands:
file_path = ('%s/%s_B%s.TIF' % (self.warp_path,
self.image, band))
if os.path.exists(file_path):
print ('Starting the Min/Max process with designated -percent '
'cut- for band %s of %s' % (band, self.image))
print '...'
# Open images in the warp folder
ds = gdal.Open(file_path)
# converting raster to numpy array
values = numpy.array(ds.GetRasterBand(1).ReadAsArray())
to_list = values.tolist()
full_list = [item for sublist in to_list for item in sublist]
# removing zeros
value_list = filter(lambda x: x != 0, full_list)
list_len = len(value_list)
value_list.sort()
# determining number of integers to cut from bottom of list
cut_value_bottom = int(float(self.btm_prct) /
float(100) * float(list_len))
# determining number of integers to cut from top of list
cut_value_top = int(float(self.top_prct) /
float(100) * float(list_len))
# establishing new min and max with percent cut
cut_list = value_list[
(cut_value_bottom + 1):(list_len - cut_value_top)]
# adding min and max with percent cut values to list
min_max_list.extend([cut_list[0], cut_list[-1]])
print 'Finished processing band %s of %s' % (band, self.image)
return [min(min_max_list), max(min_max_list)]
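The two-percent cut above amounts to sorting the non-zero pixel values, dropping a fixed fraction from each tail, and taking the min/max of what remains. A standalone sketch of the same arithmetic, including the original's ``+ 1`` offset on the bottom cut (illustrative only):

```python
def percent_cut_min_max(values, bottom_pct=2, top_pct=2):
    """Return (min, max) after discarding the given percent from each tail.

    Zeros are treated as nodata and removed first, as in the method above.
    """
    values = sorted(v for v in values if v != 0)
    n = len(values)
    cut_bottom = int(float(bottom_pct) / 100 * n)
    cut_top = int(float(top_pct) / 100 * n)
    trimmed = values[cut_bottom + 1:n - cut_top]
    return trimmed[0], trimmed[-1]

print(percent_cut_min_max(range(101)))  # (4, 98)
```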
def _warp(self):
""" Warping the images on provided bands + band 8 """
# Adding band 8 to the band list
new_bands = list(self.bands)
new_bands.append(8)
# Warping
for band in new_bands:
gdalwarp('%s/%s_B%s.TIF' % (self.src_image_path, self.image, band),
'%s/%s_B%s.TIF' % (self.warp_path, self.image, band),
t_srs='EPSG:3857')
def _unzip(self, src, dst):
print "Unzipping %s - It might take some time" % self.image
tar = tarfile.open(src)
tar.extractall(path=dst)
tar.close()


@@ -11,195 +11,252 @@
from __future__ import print_function
import sys
import subprocess
from optparse import OptionParser, OptionGroup
import argparse
import textwrap
import json
from dateutil.parser import parse
from gs_helper import GsHelper
from clipper_helper import Clipper
from metadata_helper import Metadata
from search_helper import Search
from general_helper import georgian_day, year, reformat_date
from general_helper import reformat_date
from image_helper import Process
import settings
def define_options():
DESCRIPTION = """Landsat-util is a command line utility that makes it easy to
search, download, and process Landsat imagery.
help_text = """
Landsat-util helps you search Landsat8 metadata and download the
images that match your search criteria.
Commands:
Search:
landsat.py search [-h] [-l LIMIT] [-s START] [-e END] [-c CLOUD]
[--onlysearch] [--imageprocess]
{pr,shapefile,country}
With landsat-util you can also find rows and paths of an area by searching
a country name or using a custom shapefile and use the result to further
narrow your search.
positional arguments:
{pr,shapefile,country}
Search commands
pr Activate paths and rows
shapefile Activate Shapefile
country Activate country
Syntax:
%prog [OPTIONS]
optional arguments:
-h, --help show this help message and exit
-l LIMIT, --limit LIMIT
Search return results limit default is 100
Example uses:
To search and download images of row 003 and path 003 within a date range
with cloud coverage of maximum 3.0%:
%prog -s 01/01/2014 -e 06/01/2014 -l 100 -c 3 -i "003,003"
-s START, --start START
Start Date - Most formats are accepted e.g.
Jun 12 2014 OR 06/12/2014
-e END, --end END End Date - Most formats are accepted e.g.
Jun 12 2014 OR 06/12/2014
-c CLOUD, --cloud CLOUD
Maximum cloud percentage; default is 20 percent
-d, --download Use this flag to download found images
--imageprocess If this flag is used, the images are downloaded
and processed. Be cautious as it might take a
long time to both download and process large
batches of images
--pansharpen Whether to also pansharpen the processed image.
Pansharpening takes a long time
Download:
landsat download [-h] sceneID [sceneID ...]
positional arguments:
sceneID Provide Full sceneID, e.g. LC81660392014196LGN00
Process:
landsat.py process [-h] [--pansharpen] path
positional arguments:
path Path to the compressed image file
optional arguments:
--pansharpen Whether to also pansharpen the processed image.
Pansharpening takes a long time
"""
parser = OptionParser(usage=help_text)
search = OptionGroup(parser, "Search",
"To search Landsat's Metadata use these options:")
def args_options():
parser = argparse.ArgumentParser(prog='landsat',
formatter_class=argparse.RawDescriptionHelpFormatter,
description=textwrap.dedent(DESCRIPTION))
search.add_option("-i", "--rows_paths",
help="Include a search array in this format:"
"\"path,row,path,row, ... \"",
metavar="\"path,row,path,row, ... \"")
search.add_option("-s", "--start",
help="Start Date - Format: MM/DD/YYYY",
metavar="01/27/2014")
search.add_option("-e", "--end",
help="End Date - Format: MM/DD/YYYY",
metavar="02/27/2014")
search.add_option("-c", "--cloud",
help="Maximum cloud percentage",
metavar="1.00")
search.add_option("-l", "--limit",
help="Limit results. Max is 100",
default=100,
metavar="100")
search.add_option("-d", "--direct",
help="Only search scene_files and don't use the API",
action='store_true',
dest='direct')
subparsers = parser.add_subparsers(help='Landsat Utility',
dest='subs')
parser.add_option_group(search)
# Search Logic
parser_search = subparsers.add_parser('search',
help='Search Landsat metadata')
clipper = OptionGroup(parser, "Clipper",
"To find rows and paths of a shapefile or a country "
"use these options:")
# Global search options
parser_search.add_argument('-l', '--limit', default=100, type=int,
help='Search return results limit\n'
'default is 100')
parser_search.add_argument('-s', '--start',
help='Start Date - Most formats are accepted '
'e.g. Jun 12 2014 OR 06/12/2014')
parser_search.add_argument('-e', '--end',
help='End Date - Most formats are accepted '
'e.g. Jun 12 2014 OR 06/12/2014')
parser_search.add_argument('-c', '--cloud', type=float, default=20.0,
help='Maximum cloud percentage '
'default is 20 percent')
parser_search.add_argument('-d', '--download', action='store_true',
help='Use this flag to download found images')
parser_search.add_argument('--imageprocess', action='store_true',
help='If this flag is used, the images are '
'downloaded and process. Be cautious as it '
'might take a long time to both download and '
'process large batches of images')
parser_search.add_argument('--pansharpen', action='store_true',
help='Whether to also pansharpen the processed '
'image. Pan sharpening takes a long time')
clipper.add_option("-f", "--shapefile",
help="Path to a shapefile for generating the rows and"
"path.",
metavar="/path/to/my_shapefile.shp")
clipper.add_option("-o", "--country",
help="Enter country NAME or CODE that will designate "
"imagery area, for a list of country syntax visit: "
"http://goo.gl/8H9wuq",
metavar="Italy")
search_subparsers = parser_search.add_subparsers(help='Search commands',
dest='search_subs')
parser.add_option_group(clipper)
search_pr = search_subparsers.add_parser('pr',
help="Activate paths and rows")
search_pr.add_argument('paths_rows',
metavar='path_row',
type=int,
nargs="+",
help="Provide paths and rows")
metadata = OptionGroup(parser, "Metadata Updater",
"Use this option to update Landsat API if you have"
"a local copy running")
search_shapefile = search_subparsers.add_parser('shapefile',
help="Activate Shapefile")
search_shapefile.add_argument('path',
help="Path to shapefile")
metadata.add_option("-u", "--update-metadata",
help="Update ElasticSearch Metadata. Requires access"
"to an Elastic Search instance",
action='store_true',
dest='umeta')
search_country = search_subparsers.add_parser('country',
help="Activate country")
search_country.add_argument('name', help="Country name e.g. ARE")
parser.add_option_group(metadata)
parser_download = subparsers.add_parser('download',
help='Download images from Google Storage')
parser_download.add_argument('scenes',
metavar='sceneID',
nargs="+",
help="Provide Full sceneID, e.g. "
"LC81660392014196LGN00")
parser_process = subparsers.add_parser('process',
help='Process Landsat imagery')
parser_process.add_argument('path',
help='Path to the compressed image file')
parser_process.add_argument('--pansharpen', action='store_true',
help='Whether to also pansharpen the processed '
'image. Pan sharpening takes a long time')
return parser
def main(options, args=None):
def main(args):
"""
Main function - launches the program
"""
# Raise an error if no option is given
raise_error = True
# Execute rows_paths sequence
if options.rows_paths:
raise_error = False
array = rows_paths_check(options.rows_paths)
date_rng = None
gs = GsHelper()
if options.start:
options.start = reformat_date(parse(options.start))
if options.end:
options.end = reformat_date(parse(options.end))
if options.rows_paths:
if options.direct:
files = gs.search(options.rows_paths, start=options.start,
end=options.end)
if files:
if gs.batch_download(files):
gs.unzip()
print("%s images were downloaded and unzipped!"
% len(files))
exit("Your unzipped images are located here: %s" %
gs.unzip_dir)
else:
exit("No Images found. Change your search parameters.")
if args:
if args.subs == 'process':
p = Process(args.path)
if args.pansharpen:
p.full_with_pansharpening()
else:
s = Search()
result = s.search(row_paths=options.rows_paths,
start_date=options.start,
end_date=options.end,
cloud_max=options.cloud,
limit=options.limit)
p.full()
exit("The output is stored at %s." % settings.PROCESSED_IMAGE)
elif args.subs == 'search':
try:
if args.start:
args.start = reformat_date(parse(args.start))
if args.end:
args.end = reformat_date(parse(args.end))
except TypeError:
exit("Your date format is incorrect. Please try again!", 1)
s = Search()
if args.search_subs == 'pr':
result = s.search(row_paths=args.paths_rows,
limit=args.limit,
start_date=args.start,
end_date=args.end,
cloud_max=args.cloud)
elif args.search_subs == 'shapefile':
clipper = Clipper()
result = s.search(clipper.shapefile(args.path),
limit=args.limit,
start_date=args.start,
end_date=args.end,
cloud_max=args.cloud)
elif args.search_subs == 'country':
clipper = Clipper()
prs = clipper.country(args.name)
if prs:
result = s.search(prs,
limit=args.limit,
start_date=args.start,
end_date=args.end,
cloud_max=args.cloud)
try:
if result['status'] == 'SUCCESS':
print('%s items were found' % result['total_returned'])
print('Starting the download:')
for item in result['results']:
gs.single_download(row=item['row'],
path=item['path'],
name=item['sceneID'])
gs.unzip()
print("%s images were downloaded and unzipped!"
print('%s items were found' % result['total'])
if result['total'] > 100:
exit('Too many results. Please narrow your search')
else:
print(json.dumps(result, sort_keys=True, indent=4))
# If only search
if args.download:
gs = GsHelper()
print('Starting the download:')
for item in result['results']:
gs.single_download(row=item['row'],
path=item['path'],
name=item['sceneID'])
print("%s images were downloaded"
% result['total_returned'])
exit("Your unzipped images are located here: %s" %
gs.unzip_dir)
if args.imageprocess:
for item in result['results']:
p = Process('%s/%s.tar.bz' % (gs.zip_dir,
item['sceneID']))
if args.pansharpen:
p.full_with_pansharpening()
else:
p.full()
else:
exit("The downloaded images are located here: %s" %
gs.zip_dir)
else:
exit('Done!')
elif result['status'] == 'error':
exit(result['message'])
if options.shapefile:
raise_error = False
clipper = Clipper()
clipper.shapefile(options.shapefile)
exit("Shapefile clipped")
if options.country:
raise_error = False
clipper = Clipper()
clipper.country(options.country)
exit("Process Completed")
if options.umeta:
raise_error = False
meta = Metadata()
print('Starting Metadata Update using Elastic Search ...\n')
if meta.populate():
exit('Task Completed!')
else:
exit('Error!')
if raise_error:
parser.print_help()
exit('\nYou must specify an argument.')
except KeyError:
exit('Too Many API queries. You can only query DevSeed\'s '
'API 5 times per minute', 1)
elif args.subs == 'download':
gs = GsHelper()
print('Starting the download:')
for scene in args.scenes:
gs.single_download(row=gs.extract_row_path(scene)[1],
path=gs.extract_row_path(scene)[0],
name=scene)
exit("The downloaded images are located here: %s" % gs.zip_dir)
def exit(message):
def exit(message, code=0):
print(message)
sys.exit()
def rows_paths_check(rows_paths):
"""
Turn the search text into paired groups of two
"""
array = rows_paths.split(',')
paired = []
for i in xrange(0, len(array), 2):
paired.append(array[i:i + 2])
return paired
sys.exit(code)
def package_installed(package):
@@ -218,11 +275,11 @@ def package_installed(package):
def __main__():
global parser
parser = define_options()
(options, args) = parser.parse_args()
main(options, args)
parser = args_options()
args = parser.parse_args()
main(args)
if __name__ == "__main__":
__main__()


@@ -7,7 +7,6 @@
# License: CC0 1.0 Universal
import json
import requests
import settings
@@ -70,8 +69,6 @@ class Search(object):
cloud_min,
cloud_max)
print search_string
# Have to manually build the URI to bypass requests URI encoding
# The api server doesn't accept encoded URIs
r = requests.get('%s?search=%s&limit=%s' % (self.api_url,
@@ -121,6 +118,8 @@ class Search(object):
for i in new_array])
except ValueError:
return ''
except TypeError:
raise Exception('Invalid Argument. Please try again!')
if start_date and end_date:
query.append(self._date_range_builder(start_date, end_date))
@@ -142,8 +141,8 @@ class Search(object):
search_string = '+AND+'.join(map(str, query))
if len(search_string) > 0:
search_string = search_string + '+AND+' + \
'+OR+'.join(map(str, rows_paths))
search_string = search_string + '+AND+(' + \
'+OR+'.join(map(str, rows_paths)) + ')'
else:
search_string = '+OR+'.join(map(str, rows_paths))
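The parenthesized grouping in this hunk matters because the query is a flat boolean expression: without parentheses, ``a+AND+b+OR+c`` can match scenes that only satisfy ``c``. A minimal sketch of the corrected string building (``build_search_string`` is a hypothetical name for illustration):

```python
def build_search_string(query, rows_paths):
    # Wrap the path/row alternatives in parentheses so the AND applies
    # to the whole OR group, not just its first term.
    search_string = '+AND+'.join(map(str, query))
    if len(search_string) > 0:
        return search_string + '+AND+(' + '+OR+'.join(map(str, rows_paths)) + ')'
    return '+OR+'.join(map(str, rows_paths))

print(build_search_string(['cloudCoverFull:[0+TO+6]'],
                          ['(path:3+AND+row:3)', '(path:4+AND+row:4)']))
```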


@@ -14,6 +14,8 @@ import os
# Google Storage Landsat Config
DEBUG = os.getenv('DEBUG', False)
SOURCE_URL = 'gs://earthengine-public/landsat'
SCENE_FILE_URL = SOURCE_URL + '/scene_list.zip'
SATELLITE = 'L8'
@@ -39,9 +41,10 @@ HOME_DIR = os.path.expanduser('~')
# Utility's base directory
BASE_DIR = os.path.abspath(os.path.dirname(__file__))
DOWNLOAD_DIR = HOME_DIR + '/landsat/output/imagery'
DOWNLOAD_DIR = HOME_DIR + '/landsat'
ZIP_DIR = DOWNLOAD_DIR + '/zip'
UNZIP_DIR = DOWNLOAD_DIR + '/unzip'
PROCESSED_IMAGE = DOWNLOAD_DIR +'/processed'
SCENE_FILE = DOWNLOAD_DIR + '/scene_list'
ASSESTS_DIR = BASE_DIR + '/assests'


@@ -3,3 +3,6 @@ elasticsearch==1.1.1
gsutil==4.4
requests==2.3.0
python-dateutil==2.2
nose==1.3.3
pdoc==0.2.4
numpy


@@ -38,7 +38,7 @@ def readme():
return f.read()
setup(name="landsat",
version='0.1.0',
version='0.2.1',
description="A utility to search, download and process Landsat 8" +
" satellite imagery",
long_description=readme(),
@@ -55,6 +55,7 @@ setup(name="landsat",
"elasticsearch==1.1.1",
"gsutil==4.4",
"requests==2.3.0",
"python-dateutil==2.2"
"python-dateutil==2.2",
"numpy"
],
)
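The one-character `install_requires` fix above is worth noting: Python implicitly concatenates adjacent string literals, so the missing comma silently merged two requirements into one invalid specifier rather than raising a syntax error. A quick demonstration:

```python
# Adjacent string literals concatenate, so the old list had only two
# elements, the second being an invalid requirement specifier.
broken = [
    "requests==2.3.0",
    "python-dateutil==2.2"   # <- missing comma
    "numpy"                  # merges into "python-dateutil==2.2numpy"
]
fixed = [
    "requests==2.3.0",
    "python-dateutil==2.2",
    "numpy",
]
assert broken == ["requests==2.3.0", "python-dateutil==2.2numpy"]
assert len(fixed) == 3
```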

View File

@@ -10,10 +10,7 @@
import os
import sys
import errno
import shutil
import unittest
from tempfile import mkdtemp, mkstemp
try:
from landsat.clipper_helper import Clipper
@@ -33,29 +30,16 @@ class TestClipperHelper(unittest.TestCase):
cls.base_dir = os.path.abspath(os.path.dirname(__file__))
cls.shapefile = cls.base_dir + '/samples/test_shapefile.shp'
@classmethod
def tearDownClass(cls):
# try:
# shutil.rmtree(cls.temp_folder_base)
# except OSError as exc:
# if exc.errno != errno.ENOENT:
# raise
pass
def test_shapefile(self):
# Test with correct shapefile
self.assertEqual([(u'009', u'045'), (u'008', u'045')],
self.assertEqual([[u'009', u'045'], [u'008', u'045']],
self.c.shapefile(self.shapefile))
def test_country(self):
# Test output of a known country
self.assertEqual([('145', u'057'), ('145', u'058')],
self.assertEqual([['145', u'057'], ['145', u'058']],
self.c.country('Maldives'))
if __name__ == '__main__':
unittest.main()

View File

@@ -44,16 +44,24 @@ class TestGeneralHelper(unittest.TestCase):
def test_create_paired_list(self):
# Test correct input (string)
output = g.create_paired_list('003,003,004,004')
self.assertEqual([('003', '003'), ('004', '004')], output)
self.assertEqual([['003', '003'], ['004', '004']], output)
# Test correct input (list)
output = g.create_paired_list(['003', '003', '004', '004'])
self.assertEqual([('003', '003'), ('004', '004')], output)
self.assertEqual([['003', '003'], ['004', '004']], output)
# Test incorrect input
self.assertRaises(ValueError, g.create_paired_list, '003,003,004')
self.assertRaises(ValueError, g.create_paired_list, '')
# Test with paired list
output = g.create_paired_list([['003', '003'], ['004', '004']])
self.assertEqual([['003', '003'], ['004', '004']], output)
# Test with paired tuple
output = g.create_paired_list([('003', '003'), ('004', '004')])
self.assertEqual([('003', '003'), ('004', '004')], output)
def test_check_create_folder(self):
new_path = g.check_create_folder(self.temp_folder_test)
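The new test cases show that `create_paired_list` must now pass through input that is already paired. A hedged sketch of the behavior those tests imply (this is an illustration, not the actual helper in `general_helper`):

```python
def create_paired_list(value):
    """Sketch of the behavior the tests above imply: accept a
    comma-separated string, a flat list, or an already-paired
    list/tuple sequence, and return path/row pairs."""
    if isinstance(value, str):
        value = value.split(',')
    # Already paired: return unchanged, as the new test cases expect.
    if value and all(isinstance(i, (list, tuple)) for i in value):
        return value
    if not value or len(value) % 2 != 0:
        raise ValueError('The input must contain complete pairs, '
                         'e.g. 003,003,004,004')
    return [list(value[i:i + 2]) for i in range(0, len(value), 2)]
```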

View File

@@ -34,7 +34,7 @@ class TestGsHelper(unittest.TestCase):
cls.g.download_dir = cls.temp_folder + '/download'
cls.g.zip_dir = cls.g.download_dir + '/zip'
cls.g.unzip_dir = cls.g.download_dir + '/unzip'
cls.g.scene_file = cls.g.download_dir + 'scene_list'
cls.g.scene_file = cls.g.download_dir + '/scene_list'
@classmethod
def tearDownClass(cls):
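The `scene_file` fix above is the classic missing-separator slip in path concatenation; building paths with join functions avoids it entirely. A quick illustration (paths are examples only):

```python
import posixpath

# String concatenation silently drops the separator...
download_dir = '/tmp/landsat/download'
assert download_dir + 'scene_list' == '/tmp/landsat/downloadscene_list'

# ...while a join function (os.path.join in portable code; posixpath
# here to pin the separator) inserts it for you.
assert posixpath.join(download_dir, 'scene_list') == '/tmp/landsat/download/scene_list'
```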
@@ -55,7 +55,7 @@ class TestGsHelper(unittest.TestCase):
# test a search with known result
query = '003,003'
start = '01/01/2014'
end = '06/01/2014'
end = '01/06/2014'
self.assertEqual(1, len(self.g.search(query, start, end)))
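The fixture change from `'06/01/2014'` to `'01/06/2014'` looks cosmetic but is not: ambiguous numeric dates flip meaning depending on whether the parser assumes month-first or day-first. A stdlib demonstration of the ambiguity (the date strings are examples, not the repo's fixtures):

```python
from datetime import datetime

# The same string parsed month-first vs day-first gives different dates.
month_first = datetime.strptime('01/06/2014', '%m/%d/%Y')
day_first = datetime.strptime('01/06/2014', '%d/%m/%Y')
assert month_first.month == 1 and month_first.day == 6
assert day_first.month == 6 and day_first.day == 1
```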

View File

@@ -21,94 +21,48 @@ except ImportError:
'../landsat')))
import landsat
class MockOptions(object):
""" Create Mock commandline options """
def __init__(self, **kwargs):
for k, v in kwargs.iteritems():
setattr(self, k, v)
class TestLandsat(unittest.TestCase):
dictionary = {
'rows_paths': None,
'start': None,
'end': None,
'cloud': None,
'limit': 100,
'direct': None,
'shapefile': None,
'country': None,
'umeta': None
}
@classmethod
def setUpClass(cls):
cls.base_dir = os.path.abspath(os.path.dirname(__file__))
cls.shapefile = cls.base_dir + '/samples/test_shapefile.shp'
cls.parser = landsat.args_options()
# @unittest.skip('Takes too much time')
def test_search_rows_paths_without_date_cloud(self):
self.dictionary['rows_paths'] = '136,008'
m = MockOptions(**self.dictionary)
def test_search_pr_correct(self):
# Correct search
args = ['search', '--onlysearch', 'pr', '008', '008']
self.assertRaises(SystemExit, landsat.main, m)
with self.assertRaises(SystemExit) as cm:
landsat.main(self.parser.parse_args(args))
# @unittest.skip('Takes too much time')
def test_search_rows_paths_w_date_no_cloud(self):
self.dictionary['rows_paths'] = '008,136'
self.dictionary['start'] = 'May 1 2013'
self.dictionary['end'] = 'May 15 2013'
self.assertEqual(cm.exception.code, 0)
m = MockOptions(**self.dictionary)
def test_search_pr_wrong_input(self):
args = ['search', '--onlysearch', 'pr', 'what?']
self.assertRaises(SystemExit, landsat.main, m)
with self.assertRaises(SystemExit) as cm:
landsat.main(self.parser.parse_args(args))
# @unittest.skip('Takes too much time')
def test_search_rows_paths_w_date_cloud(self):
self.dictionary['rows_paths'] = '008,136'
self.dictionary['start'] = 'May 1 2013'
self.dictionary['end'] = 'May 15 2013'
self.dictionary['cloud'] = 100
self.assertNotEqual(cm.exception.code, 0)
m = MockOptions(**self.dictionary)
def test_search_shapefile_correct(self):
args = ['search', '--onlysearch', 'shapefile', self.shapefile]
self.assertRaises(SystemExit, landsat.main, m)
with self.assertRaises(SystemExit) as cm:
landsat.main(self.parser.parse_args(args))
# @unittest.skip('Takes too much time')
def test_direct_search(self):
self.dictionary['direct'] = True
self.dictionary['rows_paths'] = '136,008'
self.dictionary['start'] = 'May 1 2013'
self.dictionary['end'] = 'May 15 2013'
self.assertEqual(cm.exception.code, 0)
m = MockOptions(**self.dictionary)
def test_search_shapefile_incorrect(self):
args = ['search', '--onlysearch', 'shapefile', 'whatever']
self.assertRaises(SystemExit, landsat.main, m)
# @unittest.skip('Takes too much time')
def test_shapefile(self):
self.dictionary['shapefile'] = self.shapefile
m = MockOptions(**self.dictionary)
self.assertRaises(SystemExit, landsat.main, m)
# @unittest.skip('Takes too much time')
def test_country(self):
self.dictionary['country'] = 'Maldives'
m = MockOptions(**self.dictionary)
self.assertRaises(SystemExit, landsat.main, m)
def test_metada(self):
self.dictionary['umeta'] = True
m = MockOptions(**self.dictionary)
self.assertRaises(SystemExit, landsat.main, m)
with self.assertRaises(Exception) as cm:
landsat.main(self.parser.parse_args(args))
self.assertEqual(cm.exception.args[0],
'Invalid Argument. Please try again!')