Urbanisation on high-class land around Auckland

The video shows the successive waves of urban growth from 1990 to 2008. The urban extent of the city in 1990 (pale yellow) is taken as a baseline. Watch the growth on low-class land (purple) and on high-class land (red).

An interactive, full-screen version of this map is also available.

Auckland is facing the challenge of accommodating a growing population. A significant increase in the number of houses is needed, and various proposals have been made as to how this could be achieved. One proposal is to expand the city into the rural land that surrounds it. But not all land is equal. Some rural land – where the soils and climate support thriving produce, dairying and bloodstock sectors – is more valuable than other areas. Such high-class land is a precious and limited resource. A change in the use of that land is therefore an important decision with many consequences.

Urbanisation of agricultural land is essentially irreversible. Dividing agricultural land into relatively small parcels for housing produces a sharp rise in land value, making it prohibitively expensive for farmers to buy the land back; and many of the qualities of high-class land would take centuries to return to their initial state.

The importance of urban growth over high-class land

The diagram below shows that over a period of 18 years, 15% of the available high-class land around Auckland has been lost to urbanisation.

Loss of high-class land around Auckland, 1990–2008.

A comparison of urban growth on low- and high-class land around Auckland over the same period shows that expansion onto high-class land has far exceeded expansion onto low-class land.

Comparison of the relative importance of urbanisation on low- and high-class land around Auckland.


Sources of this analysis

Information on the impact of housing planning on high-class land is available in a 2012 study led by Robbie Andrew and John Dymond from Landcare Research.

High-class land has been mapped nationally as part of the Land Use Capability database.

LUC classes 1 and 2 were used in this analysis. LUC class 1 is the most versatile multiple-use land with minimal physical limitations for arable use. LUC class 2 is very good land with slight physical limitations to arable use, readily controlled by management and soil conservation practices.

The urban areas have been extracted from the Land Cover Database of New Zealand (1996, 2001 and 2008) and from a separate 1990 urban areas layer.

All the data used in this analysis are open and available on the LRIS portal under the Landcare Data Use Licence. The data have been processed in GRASS GIS. The figures have been produced in the R statistical environment, and the maps have been rendered using TileMill.
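For readers without a GRASS GIS or R setup, the overlay logic behind these figures can be sketched in a few lines of Python with geopandas. This is an illustration only, not the published workflow: the file names, the layer extracts and the use of the NZTM projection (EPSG:2193) are assumptions.

# Illustrative sketch only: the authors used GRASS GIS and R; this shows the
# equivalent overlay logic with geopandas. File names, layer contents and the
# projected CRS (NZTM, EPSG:2193) are assumptions, not the published workflow.
import geopandas as gpd

# High-class land: LUC classes 1 and 2 (assumed extract from the LUC database)
luc = gpd.read_file("luc_class_1_2.gpkg").to_crs(epsg=2193)

# Urban extents for two epochs (assumed extracts from LCDB / the 1990 urban layer)
urban_1990 = gpd.read_file("urban_1990.gpkg").to_crs(epsg=2193)
urban_2008 = gpd.read_file("urban_2008.gpkg").to_crs(epsg=2193)

def urban_area_on(high_class, urban):
    """Area (ha) of high-class land covered by the given urban extent."""
    overlap = gpd.overlay(high_class, urban, how="intersection")
    return overlap.geometry.area.sum() / 10_000  # m^2 -> ha

lost = urban_area_on(luc, urban_2008) - urban_area_on(luc, urban_1990)
print(f"High-class land urbanised 1990-2008: {lost:.0f} ha")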



Some comments on migrating Landcare Research’s Web Map Services Infrastructure to Amazon Cloud Computing Services

Author: Andrew Cowie

Introduction

Landcare Research has been offering web browser mapping applications, such as Our Environment, for some time, but has now expanded its offerings to include web services.

Web services support computer-to-computer data sharing, which means your web applications and desktop software (e.g. ArcGIS, QGIS) can directly query and access our science data on demand.  Landcare Research provides two types of web services:

•   Web Feature Service (WFS) (http://lris.scinfo.org.nz) for accessing vector data (points, lines or polygons);

•   Web Map Service (WMS) (http://maps.scinfo.org.nz) for accessing maps (images).

Landcare Research’s WFS and WMS conform to international geospatial data standards defined by the Open Geospatial Consortium/ISO.
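Because the services follow standard OGC interfaces, they can be consumed from any compliant client. As one hedged illustration, the Python sketch below uses the OWSLib library to list a WMS's layers and request a map image; the exact endpoint URL, service version, layer choice and bounding box are assumptions, so check maps.scinfo.org.nz for the real connection details.

# Minimal sketch of consuming an OGC WMS with OWSLib (Python).
# The endpoint URL and the layer selection are assumptions for illustration.
from owslib.wms import WebMapService

wms = WebMapService("http://maps.scinfo.org.nz/wms", version="1.1.1")  # assumed endpoint
print(list(wms.contents))  # layer names the service advertises

# Request a PNG over an Auckland-area bounding box (WGS84), using the first
# advertised layer purely for demonstration
img = wms.getmap(
    layers=[next(iter(wms.contents))],
    srs="EPSG:4326",
    bbox=(174.5, -37.1, 175.0, -36.7),
    size=(512, 512),
    format="image/png",
)
with open("map.png", "wb") as f:
    f.write(img.read())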

In this post we talk about our web map services and our experience of setting up our web services on Amazon Cloud Computing Services.

Background

Landcare Research has been providing public WMS for some of our science data and background topographic maps on an experimental basis for over a year now. These services have not been widely publicised because of concerns about their reliability under heavy demand. Resource constraints (physical hardware, networks and level of staff cover) made it difficult to manage scalability and performance, particularly during sudden peaks in use, and to provide 24-hour availability of the services. Greater flexibility, scalability and guaranteed availability have now been obtained by migrating the backend databases and map caches that underpin the web map services, along with the website that describes how to use them (http://maps.scinfo.org.nz), to the Amazon Cloud using Amazon Web Services (AWS).

Benefits

The hoped-for benefits of this move included the following:

  • Landcare Research no longer has to invest in, or incur the costs of operating, the high-end servers and networks needed to provide the services.
  • Guaranteed service-level uptimes for the hardware.
  • Rapid elasticity to handle varying levels of demand.
  • Established systems and procedures for dealing with security threats.
  • Direct access to an on-demand, high-grade network in a secure data centre.
  • Skilled on-site technical teams and application services monitoring the services 24×7.
  • Reduced internal hardware and maintenance resource requirements.
  • Reduced operational expenses.

Architecture

The chosen architecture uses two Amazon Elastic Compute Cloud (Amazon EC2) instances: one for data, the other for map generation and for hosting our WMS website.

Amazon EC2 is a web service that provides resizable compute capacity in the cloud and is designed to make web-scale computing easier for developers. In its most basic form, an EC2 instance is a virtual machine, in this case configured as a server. One EC2 instance holds the raw spatial data in a PostGIS database, while the other holds the map generation software (Mapserver), the map caching software (Mapserver’s Mapcache) and our public WMS website.

Using pre-generated map caches allows the services to be accessed faster while reducing server load. The map caches are stored in an Amazon Simple Storage Service (S3) bucket. S3 buckets are essentially storage space tied to a particular region of the Amazon cloud; they can be used to store and retrieve very large quantities of data, and are flexible, scalable and persistent.

The front-facing map-serving virtual machine is connected to a load balancer and auto-scaling services, which automatically spin up additional servers when load reaches a certain threshold. When the load drops off, these additional instances are terminated, leaving a single server running again. This ability to scale automatically up and down to meet demand is one of the major benefits of using AWS.

Challenges

The project itself faced a number of challenges.

Architecture

  • Initially a duplicate of the existing Palmerston North-based architecture was created in the Amazon Cloud, using the same number of servers (three). However, this proved (a) expensive to run and (b) more complex to load-balance and auto-scale. Two servers were therefore deployed instead: one houses the database, while the other caches and serves the maps and hosts the website. The latter machine is attached to Amazon’s load-balancing and auto-scaling services, which duplicate the machine when load becomes heavy and, when the load eases, terminate any unneeded instances.

  • As the map caches are very large (some 500,000 images), storing them on an EC2 instance that would then be duplicated was not feasible: spinning up additional instances would take too long, and running multiple copies of very large machines would be costly. The solution chosen was to store the caches in an S3 bucket, to which multiple machines can connect concurrently. This was not straightforward, as we needed a cache type that the caching software (Mapserver’s Mapcache) could talk to. Our preferred method, Berkeley Database caches, was not possible because of the way S3 works, so instead we store the map tiles in a file-based directory layout in S3.

Map cache creation

  • The cache creation software was found to be slower when writing directly into the S3 backend, so we built the caches ‘locally’ on a volume attached to an EC2 instance and then used tools created specifically for working with S3 to copy the caches across to S3 (the general idea is sketched below).
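The copy step can be done with any S3 transfer tool; the Python/boto3 sketch below only illustrates the general idea of preserving the tile directory layout when uploading. The bucket name and local cache path are hypothetical, and this is not the exact tooling the team used.

# Hedged sketch of the "build locally, then copy to S3" step using boto3.
# Bucket name and local cache path are hypothetical placeholders.
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "example-mapcache-bucket"   # hypothetical bucket name
LOCAL_CACHE = "/data/mapcache"       # hypothetical local cache directory

for root, _dirs, files in os.walk(LOCAL_CACHE):
    for name in files:
        path = os.path.join(root, name)
        # Keep the on-disk tile directory layout as the S3 key
        key = os.path.relpath(path, LOCAL_CACHE).replace(os.sep, "/")
        s3.upload_file(path, BUCKET, key)   # one PUT per tile image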

AWS location choice

  • Initially the East Coast (USA) region was chosen, as it is the default AWS option and also the cheapest. However, after testing the speed of the web services, it was found that the West Coast (USA) region delivered faster response times for New Zealand, so the servers and services were moved from the East Coast region to the West Coast region (a simple way of comparing response times is sketched below).
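One simple way to compare candidate regions is to time identical requests against a test deployment in each region from a New Zealand client, as in the sketch below. The endpoint URLs are purely hypothetical placeholders.

# Illustrative only: compare response times of the same service hosted in
# different AWS regions. The URLs are hypothetical test deployments.
import time
import requests

ENDPOINTS = {
    "us-east-1": "http://useast.example.com/wms?SERVICE=WMS&REQUEST=GetCapabilities",
    "us-west-1": "http://uswest.example.com/wms?SERVICE=WMS&REQUEST=GetCapabilities",
}

for region, url in ENDPOINTS.items():
    start = time.perf_counter()
    requests.get(url, timeout=30)
    print(f"{region}: {time.perf_counter() - start:.2f} s")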

Setting thresholds for Auto-scaling

  • Alarm triggers had to be set to detect when the server was under heavy load. These threshold values may need to be tweaked once the services have seen some real-world use (one way such thresholds can be expressed is sketched below).
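As a hedged illustration, a CPU-based threshold can be expressed as a CloudWatch alarm that drives an Auto Scaling policy, as in the boto3 sketch below. The group name, policy name, region and the 70% threshold are assumptions, not the values used in production.

# Sketch only: a scale-out policy triggered by average CPU across the group.
# Names, region and threshold are assumptions for illustration.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-west-1")

GROUP = "wms-frontend-asg"  # hypothetical Auto Scaling group name

# Add one instance when the alarm fires
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=GROUP,
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Fire when average CPU across the group exceeds 70% for two 5-minute periods
cloudwatch.put_metric_alarm(
    AlarmName="wms-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)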

General Comments

Using the AWS suite of tools proved, on the whole, to be straightforward. The challenges we faced were mostly down to the nature of the spatial data we provide and to getting the software we use to create and serve our maps working efficiently and effectively with the Amazon services.

The speed of the services is reasonably good, although still slower than the internally hosted services that provide maps for the Our Environment and S-map Online web applications. Speed could be improved by building native support for Amazon S3 as a backend to our map-caching software. The biggest speed increase, however, would come from moving the services from their current West Coast (USA) home to Amazon’s newest region, Sydney, Australia. Unfortunately, Amazon announced the launch of this data centre the day after we formally launched our web services! We hope to move the services to the Sydney data centre sometime soon…

Details on how to access the WMS and how to use it with various desktop GIS and web applications can be found at http://maps.scinfo.org.nz.

If you have any questions, please contact us: lris_support@landcareresearch.co.nz


Informatics leads rollout of new National Land Resource Centre website

Members of the Informatics team have recently been involved in the rollout of the website for the new National Land Resource Centre.

An initiative of the Crown Research Institute Landcare Research, the National Land Resource Centre (www.nlrc.org.nz) will be a ‘one stop shop’, providing information for policy, business and science, coordinating engagement and foresight on future issues, and undertaking capacity building across the sectors.

The centre has three main aims, which have been developed in conjunction with stakeholders:

  • “Engagement with all those interested in the land resource by providing a gateway into available research and resources, workshops, forums and best teams.
  • Access to customised, easily consumable and fit-for-purpose information for policy, business and science users – for today and tomorrow’s New Zealand. The NLRC will remove the barriers that sometimes prevent science being understood and used.
  • And finally, an improvement in national capability to lift performance for those researching, governing and managing the land resource.”

The NLRC website provides a single point of access to data, information, tools and other resources of interest in protecting, enhancing and leveraging New Zealand’s land economy. The vision was for a website that lets people quickly, easily and conveniently find information about New Zealand’s land resources and related scientific research. The site provides comprehensive coverage of projects, organisations, data, tools, best practices and standards, along with related information on events and news. Content is updated on an ongoing basis so that the site reflects the current state of the land resources arena within New Zealand.

The website has been implemented using the Squiz Matrix content management system by staff at the New Zealand Squiz office. David Medyckyj-Scott (Informatics, and Technical Director of the NLRC) and Emily Weeks (Content Manager) oversaw the design, implementation and rollout, as well as populating the site with content. Other Informatics staff assisted with specific features of the website. The website itself is a mixture of static pages and searchable directories of resources. In the main, the site is a gateway linking to externally generated and hosted resources; it does not hold local copies of those resources. One of the unique aspects of the site is that users are able to ‘locate’ resources with reference to a specific geographic extent and also view the geographic ‘footprint’ of a resource, e.g. the geographic area of interest for a particular project.


The NLRC was officially launched by the Secretary for the Environment, Dr Paul Reynolds (Ministry for the Environment) at a function at Te Papa, Wellington on Thursday 5th July 2012.


High performance computing helps save the NZ stick insect

With the increasing use of next-generation DNA sequencing technologies as a regular part of biological research, vast amounts of DNA sequence data are being generated at an ever-increasing rate. But with this added capability and power come technical challenges around data management, processing and analysis. Genomics work carried out at Landcare Research is no exception. An example of these challenges is found in some recent work led by Landcare Research scientist Thomas Buckley and carried out by post-doctoral researcher Alice Dennis and PhD student Luke Dunning.

Alice and Luke are interested in the functional coding regions of the genomes of stick insect species native to New Zealand. Until recently, processing their insect DNA sequence data from next-generation sequencing-by-synthesis platforms took a whole week per individual, even on a fast multi-core desktop Linux machine with plenty of RAM.

This length of processing time significantly slows down their research programme and limits the number of such processing steps that can be undertaken within any given project. To address this, Dan White (Informatics team) worked with Alice and Luke to shift their memory-hungry processes to the high-performance computing (HPC) resources of the National e-Science Infrastructure (NeSI), which Landcare Research has access to as a member of NeSI. Compared with the resources available at Landcare Research, the NeSI HPC resources now available include an 80-processor computer, each processor having 12 cores and 96 GB of RAM, with access to 200 TB of disk storage. For the stick insect case above, early tests have shown a reduction in processing time from one week to just over three hours – a dramatic and significant improvement! Further improvements are expected as we explore the options of processing multiple files simultaneously and of chaining multi-step sequence analyses together in an automated process; a rough sketch of such chaining is shown below.
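As a hedged illustration of what such chaining might look like, the Python sketch below runs a fixed sequence of processing steps per individual and stops the chain if any step fails. The tool names, sample IDs and file names are hypothetical placeholders, not the actual pipeline used for the stick insect data.

# Sketch only: chain several per-sample processing steps so they run
# unattended. Commands and file names are hypothetical placeholders.
import subprocess

SAMPLES = ["individual_01", "individual_02"]   # hypothetical sample IDs

for sample in SAMPLES:
    steps = [
        ["quality_trim", f"{sample}.fastq", "-o", f"{sample}.trimmed.fastq"],        # hypothetical tool
        ["assemble_reads", f"{sample}.trimmed.fastq", "-o", f"{sample}.contigs.fasta"],   # hypothetical tool
        ["annotate_coding", f"{sample}.contigs.fasta", "-o", f"{sample}.annotated.gff"],  # hypothetical tool
    ]
    for cmd in steps:
        subprocess.run(cmd, check=True)   # stop the chain if any step fails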

