MACA Data Portal


Welcome to the MACA data portal. This tool helps you access MACA data that is hosted on NKN. If you download data, check the 'best practices' and join the mailing list.

This tool gives you URL links to raw MACA data, subsets of MACA data, or summaries of MACA data.

To decide which download format to choose, consider what type of machine you will use to download the data:

  • Any machine with a web browser: Choose 'files of URLs'. The tool will create a file of URLs which you can copy/paste into a web browser (to download files one by one).
  • Windows: Choose 'files of URLs'. There are Windows programs (e.g., Mass Download) that can use a file of URLs to perform a multiple-file download.
  • Mac: Choose 'bash script of cURLs'. This file can be executed to perform a multiple-file download.
  • Linux/Unix: Choose 'bash script of wgets'. This file can be executed to perform a multiple-file download.

To get started, make at least one selection from each category to define the MACA data files that you wish to download.
Then click the 'Download File' button. A text file will be automatically downloaded to your computer; it will contain URL paths to all MACA data that satisfies your selections.

Files of URLs: Copy and paste one URL from this text file into a web browser to download the data to your computer. If you make multiple selections in certain categories, multiple filenames meeting your criteria will be added to your file. In this case, be careful to copy/paste only one URL at a time.

Files of wget statements: These files are bash scripts of wget statements that can be executed on a Linux/Unix machine. Remember to first make these bash scripts executable (e.g., chmod +x macav2livneh_wget.sh). You can then execute the script with ./macav2livneh_wget.sh.

Files of curl statements: These files are bash scripts of curl statements that can be executed on a Mac. Remember to first make these bash scripts executable; then you can run them with ./macav2livneh_curl.sh.
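
For reference, these scripts are plain text. A wget script looks something like the following sketch (the server, path, and filenames here are illustrative placeholders, not the portal's actual paths; your downloaded file will contain the real ones):

    #!/bin/bash
    # each line fetches one netCDF file matching your selections
    wget "http://SERVER/PATH/macav2livneh_tasmax_MODEL_rcp45_2006_2010.nc"
    wget "http://SERVER/PATH/macav2livneh_tasmax_MODEL_rcp45_2011_2015.nc"

The curl version is the same idea with curl statements in place of wget; in either case, chmod +x the file and run it as shown above.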

If you download any data, be sure to join the mailing list to be informed of data updates.

The most efficient way to download MACA data depends on your final interests. Please help reduce the load on our servers by picking the method that matches your interests.

If you are interested in point data for only a couple of point locations, likely the most efficient way is to use the Design-Your-Own-CSV tool. You can also use the MACA Data Portal tool to download this data if you select 'CSV' (only available for Domain: point locations) as the download format.

If you are interested in point data for a lot of points within a subset of CONUS, likely the most efficient way is to download 'rectangular subsets' of CONUS using this tool and then extract the point locations once you have the regional subsets in hand. Note: if you try to download a lot of points using our server, you will likely bog down our servers and possibly crash them. Please choose wisely here: download a subset and then extract the points from your subset, as in the sketch below.
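
For the extraction step, here is a minimal sketch using the CDO command-line tools (an assumption on our part; any netCDF-aware tool will do, and the filename and coordinates below are hypothetical). Note that MACA longitudes are typically stored as degrees east (0-360):

    # nearest-grid-cell extraction of one point's time series
    # from a downloaded rectangular subset, via CDO's remapnn operator
    cdo remapnn,lon=243.1_lat=46.7 macav2livneh_subset.nc point_timeseries.nc

Repeat the cdo line for each point of interest; this runs locally and puts no load on our servers.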

If you are interested in very large regional subsets, likely the most efficient way is to download the individual 5-10 year netCDF files and then extract a subset from these once downloaded. This data portal will provide you with links to do that via THREDDS. However, these files are very large (~TB).

You can try to utilize the files aggregated over years (1950-2005 or 2006-2099) for large regional subsets, as we are now linking to the smaller dataset on the Geo Data Portal (currently for MACAv2-METDATA only). However, if it is taking forever to download the data subset, you might email us for advice, or download the entire dataset and take your own subset. Some users have reported getting '500 Internal Server Error' when using the aggregated files (1950-2005 or 2006-2099) for large regional subsets. These errors seem to be related to memory issues: if you reduce the year range to, say, 2006-2049, the downloads will often succeed. Large spatial subsets seem to be a problem with the aggregated data.

Whenever you attempt to download large data, realize that you may need to request less data in each request to get the downloads to succeed. Big data is hard to download, and you may need to split your large regions up into smaller regions. A good approach is to find the largest spatial region and largest number of years that you can download, as a sweet spot for your downloads; you may need to experiment by first finding a size of spatial region and number of years that downloads successfully and then increasing from there. Also, to avoid hammering the server with parallel requests, submit your requests one at a time, waiting until one completes before submitting the next. As a first attempt at spacing your requests out, you can insert Linux 'sleep 300' commands between wget requests to wait 300 seconds (5 minutes) between them; see the sketch below.
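
For example, a minimal way to add these pauses to a downloaded script (the script name macav2metdata_wget.sh is hypothetical; GNU sed syntax is assumed, so edit the file by hand if your sed differs):

    # append a 5-minute pause after every wget line in the script
    sed -i '/^wget/a sleep 300' macav2metdata_wget.sh

    # the edited script then issues requests strictly one at a time:
    #   wget "...first file..."
    #   sleep 300
    #   wget "...second file..."
    #   ...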

If you are interested in downloading all of the data over CONUS, we have a new option for you which we believe is the most efficient. We suggest that you download these using links to files stored on the Geo Data Portal (GDP). You can obtain those links by using this MACA Data Portal tool (currently for MACAv2-METDATA only) by selecting the time periods of historical (1950-2005) or rcp45/85 (2006-2099). This will give you links to the GDP fileServer. You can also get these MACAv2-METDATA files directly using FTP; however, that dataset is larger (20 TB) than the one on GDP.

When executing a wget or cURL script, please try one line first to see whether the size of your request is adequate before executing all lines in the script. If your request is too big, executing all of the lines may put additional burden on our server. Also, record how long it takes to download just one of the files in your request and then use that (plus some additional padding) as the sleep duration between requests to give our servers time to rest. You are also not the only one making requests on the server, so adding extra padding helps other users too.
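
A minimal sketch of this test, assuming your downloaded script is named macav2livneh_wget.sh (hypothetical) and its download lines begin with 'wget':

    # extract the first wget statement from the script and time just that download
    time bash -c "$(grep -m 1 '^wget' macav2livneh_wget.sh)"

If that one file takes, say, 120 seconds, then a sleep of 150-300 seconds between requests is a reasonable starting point.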

Browser issues: There are issues with this data portal in different web browsers. These tools were developed with the Chrome browser, and there are known issues with Internet Explorer. If things are not working right (e.g., you select a point location but do not get an immediate set of boxes to enter the lat/lon of that point), try updating your version of the Chrome browser and trying again. If the problems persist, contact Katherine Hegewisch (khegewisch@ucmerced.edu).

CSV service: Downloading CSV file formats for point data extractions is now more stable.
  • We have fixed the problem with the netCDF4 library in extracting data for our CSV service. The new server http://climate-dev.nkn.uidaho.edu has solved this problem; you should not get 'proxy errors' anymore.
  • MACAv1-METDATA does not seem to work with the CSV service.
  • A 500 server error, however, means that there is something wrong with the code. Please email khegewisch@ucmerced.edu with a description of what you did to trigger it.
  • If you get the error message 'Service Temporarily Unavailable', email khegewisch@ucmerced.edu, as the server is likely down.


Sometimes you will be allowed to make selections that we did not intend for you to make, due to instabilities in your browser interacting with our form or errors on our part. Note the following:

  • For point extractions, you should not be able to get netCDF data formats; choose CSV instead. The netCDF format is only for rectangular subsets.


  • Wget/cURL URLs: You may encounter 500 Internal Server errors when executing the wget/cURL files. This may be an indication that your request is too large and is causing a memory error. In this case, you may need to split your request into smaller pieces (fewer years or a smaller domain) to get past the size quota. If you paste a URL into a web browser, you may get more information on the error that is actually occurring. For example, if you get a 'Variable size in bytes... may not exceed ...' message, this is an indication that your request is too large and is hitting a quota. In this case, split your request into smaller pieces until it succeeds; see the sketch below.
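
As an illustration only (the URLs below are placeholders, not real portal paths; copy the real query URLs from your downloaded file and adjust the time ranges there), a split request might look like:

    # hypothetical sketch: three smaller requests instead of one 2006-2099 request
    wget -O sub_2006_2035.nc "http://SERVER/QUERY_FOR_2006_2035"
    sleep 300
    wget -O sub_2036_2065.nc "http://SERVER/QUERY_FOR_2036_2065"
    sleep 300
    wget -O sub_2066_2099.nc "http://SERVER/QUERY_FOR_2066_2099"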

    NETCDF4 vs NETCDF3 File Formats and Supporting Libraries

    Note that the macav2-metdata/macav2-livneh data are in netCDF4 file format (while macav1-metdata is in netCDF3 classic file format). This difference may cause you issues.

    The biggest problem is that netCDF4 files require HDF5 libraries to be properly installed on your system. You may need to update your HDF5 libraries if you are having problems.
    • OPeNDAP requests: If there is incredible latency in a simple point extraction using OPeNDAP, you may be having problems either with HDF5 or with your DAP configuration. See this page for more information on how to properly install the netCDF4 library for data extraction. Specifically, you may need to update your Python installation to make sure that you have enabled HDF5 and DAP library support.
    • Viewing data with ncdump: netCDF4 files require HDF5 libraries. You may need to install the latest HDF5 libraries if you are having problems with ncdump. (See the checks after this list.)
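
    A few quick checks of your local libraries (assuming the standard netCDF command-line utilities and the netCDF4 Python module are installed; the data filename is hypothetical):

      # was your netCDF build compiled with HDF5 and DAP support?
      nc-config --has-hdf5
      nc-config --has-dap
      # report the underlying format of a downloaded file ('netCDF-4' vs 'classic')
      ncdump -k macav2livneh_file.nc
      # which HDF5/netCDF library versions does your Python stack use?
      python -c "import netCDF4; print(netCDF4.__hdf5libversion__, netCDF4.__netcdf4libversion__)"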

    Files aggregated over all years in a scenario (Historical, RCP 4.5, RCP 8.5) are available if you do not select 'full domain'.

    [Portal selection form: scenario, Domain, Download Format, and Time between requests (sec).]

    MACA domain overlays courtesy of Wenglong Feng, UI
