GNU Astronomy Utilities - Tasks: task #15047, Server-client Gnuastro operation

 
 


task #15047: Server-client Gnuastro operation

Submitter:  Mohammad Akhlaghi <makhlaghi>
Submitted:  Tue 18 Sep 2018 02:14:19 PM UTC
   
 
Should Start On:  Mon 17 Sep 2018 10:00:00 PM UTC
Should be Finished on:  Mon 17 Sep 2018 10:00:00 PM UTC
Category:  All Gnuastro
Priority:  5 - Normal
Item Group:  None
Status:  Need Info
Privacy:  Public
Assigned to:  None
Percent Complete:  0%
Open/Closed:  Open
Effort:  0.00

Tue 18 Sep 2018 05:44:49 PM UTC, comment #1: 

This is just further elaboration on this task.

The compression ratios reported in the original submission are relative to the actual input file size (2.9Gb), not to NoiseChisel's output on it. Compared with the actual NoiseChisel output prior to compression (which was 724Mb), the compression ratios become 36.6 and 49.3.
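
Just to spell out the division behind these numbers (taking 2.9Gb as roughly 2970Mb, i.e. 1Gb = 1024Mb):

\[
\frac{2970}{19.8}\approx 150, \qquad \frac{2970}{14.7}\approx 202, \qquad
\frac{724}{19.8}\approx 36.6, \qquad \frac{724}{14.7}\approx 49.3
\]

The first pair is relative to the raw input, the second pair relative to NoiseChisel's uncompressed output (all sizes in Mb).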

With this example, I was trying to emphasize that just one higher-level step (the detection/segmentation map from NoiseChisel) allows for such a large compression, compared to getting the raw data and running NoiseChisel over it.

These segmentation maps allow fast server-side access to the data to produce high-level measurements over any interesting object. See arXiv:1611.06387 or its corresponding slides from the 26th ADASS (2016). As described there, the segmentation maps can also account for de-blending; I also have some thoughts on accounting for the PSF.

In other words, it won't be necessary to do a very large set of pre-defined measurements over the survey in one run to generate one catalog (heavy for the server, with catalogs sometimes becoming larger than the actual images). Such a single set of generic measurements forces users to limit their science to the information in that pre-defined catalog. Their only alternative to that catalog is accessing the raw images (which need a lot of expertise and time to process on the server).

In this scenario, using Gnuastro's MakeCatalog, people can define their own custom measurement over any part of the images they need: at different times, with different filters, or even using custom segmentation maps (like circular apertures).
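
Just to make the "custom segmentation map" idea concrete, here is a toy sketch (this is not Gnuastro code; the image size, aperture centers and radius are arbitrary examples) of how a client could build its own labeled-aperture image, with each circular aperture getting its own integer label like the labels in NoiseChisel/Segment output:

/* Toy sketch (not Gnuastro code): build a labeled image of circular
 * apertures.  Each aperture gets its own integer label (1, 2, ...),
 * like the object/clump labels of NoiseChisel or Segment.  The image
 * size, centers and radius below are arbitrary examples.            */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int
main(void)
{
  size_t x, y, a, nlabeled=0;
  size_t width=200, height=200;                   /* Hypothetical size. */
  double cx[]={50.5, 140.0}, cy[]={60.0, 120.5};  /* Aperture centers.  */
  double r=5.0, dx, dy;                           /* Aperture radius.   */
  size_t nap=sizeof cx/sizeof *cx;
  int32_t *label=calloc(width*height, sizeof *label);

  /* Give every pixel falling within an aperture that aperture's label. */
  for(y=0; y<height; ++y)
    for(x=0; x<width; ++x)
      for(a=0; a<nap; ++a)
        {
          dx=x-cx[a];   dy=y-cy[a];
          if( dx*dx + dy*dy <= r*r )
            { label[ y*width + x ] = a+1;  ++nlabeled;  break; }
        }

  printf("%zu of %zu pixels labeled, over %zu apertures\n",
         nlabeled, width*height, nap);
  free(label);
  return 0;
}

Such a (highly compressible) label image would presumably be the only thing the server needs from the client for MakeCatalog to do the measurements.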

MakeCatalog is a very cheap operation for the server (it is multi-threaded, and will only read the pixels over the segmentation map into the server's memory, not a whole image). Note that the majority of the sources are faint/small (following the galaxy luminosity function). Also see task #13557 on an efficient way to find the relevant images for mapping the segmentation map over an image.
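
To make the "only read the pixels over the segmentation map" point more concrete, here is a rough sketch (not Gnuastro's actual implementation) of how a server-side process could use CFITSIO's fits_read_subset() to pull just the bounding box of one labeled region into memory; the file name and pixel ranges are made-up placeholders:

/* Rough sketch (not Gnuastro source): read only the bounding box of
 * one labeled region from a large FITS image with CFITSIO, instead
 * of loading the whole image into the server's RAM.  The file name
 * and pixel ranges below are hypothetical placeholders.             */
#include <stdio.h>
#include <stdlib.h>
#include <fitsio.h>

int
main(void)
{
  fitsfile *fptr;
  int status=0, anynul=0;

  /* Hypothetical bounding box of one detection (1-based, inclusive). */
  long fpixel[2]={1201, 3401};          /* Bottom-left corner (x, y). */
  long lpixel[2]={1450, 3650};          /* Top-right corner   (x, y). */
  long inc[2]={1, 1};                   /* Read every pixel in box.   */
  long nx=lpixel[0]-fpixel[0]+1, ny=lpixel[1]-fpixel[1]+1;
  float *tile=malloc(nx*ny*sizeof *tile);

  /* Open the (hypothetical) survey image and read only the subset.   */
  fits_open_file(&fptr, "survey-image.fits", READONLY, &status);
  fits_read_subset(fptr, TFLOAT, fpixel, lpixel, inc, NULL,
                   tile, &anynul, &status);
  fits_close_file(fptr, &status);

  if(status) fits_report_error(stderr, status);
  else printf("Read a %ld x %ld tile (%.1f kB), not the full image.\n",
              nx, ny, nx*ny*sizeof(float)/1024.0);

  free(tile);
  return status;
}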

In summary, once we implement this server-client operation within MakeCatalog, the only intermediate file that needs to be archived (possibly locally) to start many high-level science applications (certainly not all!) is the segmentation map (which can be highly compressed).

Of course, the statement above doesn't rule out the idea of producing an overall generic survey catalog. That is certainly necessary/efficient for initial sample selections. The discussion here allows customization (removing the need for generic-catalog columns that differ only very slightly from each other), and allows users to be more creative and complement those initial measurements with more accurate ones for their particular science case.

Mohammad Akhlaghi <makhlaghi>
Group administrator
Tue 18 Sep 2018 02:14:19 PM UTC, original submission:  

As we process larger and larger datasets (commonly referred to as "big data"), it is becoming impossible/impractical to download the entire raw dataset for local processing. This is especially important with the upcoming LSST project, which will be producing data at a rate of roughly 15Tb/night.

One possible solution that came up recently in my discussion with Mohammad-reza Khellat was the low-level use of network protocols within Gnuastro. In summary, within each program, all the heavy-duty parts of the processing (requiring the full input raw data set) will be done on the data center server and the higher-level parts will be done on the client.

The scenario that we have discussed so far looks something like this (and will certainly evolve):

  • Both the client (user's computer) and server (for example LSST data center) have the same version of Gnuastro installed.


  • The client-side program (for example, NoiseChisel on the user's computer) connects with the server-side program (NoiseChisel on the server), gets all the necessary low-level meta-data of the input dataset (for example the numeric data type and image size: only a few bytes), and defines/manages the necessary high-level steps.


  • As in the current Gnuastro multi-threaded operations (where the work is distributed over many threads), the client-side program instructs/manages the server-side program to use the server's CPU and RAM in processing the low-level data into the higher-level products.


  • The output can then be stored in either of the following two ways: 1) on the server (for even higher-level processing), or 2) on the client. We can use the SSH format of "server:file" to allow the programs to know where the output should be stored. In the latter case, during the processing, the necessary patches of the output will be transferred to the client (as they are processed by each thread on the server, not all at once, thus greatly improving speed and redundancy) and the final output file is written on the user's computer.
    • In the case of programs like NoiseChisel and Segment (the programs on the boundary between low-level images and high-level catalogs), the output is a labeled/integer-valued image which can be highly compressed: for example, in a test I just did, NoiseChisel's raw output (with --rawoutput and --oneelempertile) on a 2.9Gb image (28362 x 25297 pixels), containing a huge galactic cirrus structure, is 19.8Mb when compressed with Gzip's --best option or 14.7Mb with Lzip's --best option (compression ratios of ~150 and ~200). This can (potentially!) allow these labeled images to be the point where it is possible to continue the processing (or archiving) locally (while the actual telescope images stay on the server).
    • Higher-level programs like Segment or MakeCatalog can also avoid having to read the full image into the server's RAM (to consume less of its precious resources). They can load only the parts of the input image that are needed at each moment/CPU-thread (over each detection or clump).


We will be looking into existing network protocols to find the best one for this job, or possibly define a new protocol that is tailored/suited to efficient operations like the scenario mentioned above.
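
Purely as a brainstorming aid for the metadata handshake mentioned in the list above (none of this exists in Gnuastro; the server name, port number and "METADATA" message format are all hypothetical), the most basic form of such an exchange could look like a tiny request/response over a TCP socket:

/* Brainstorming sketch only (nothing here exists in Gnuastro): a
 * minimal client-side metadata request over TCP.  The host name,
 * port and the "METADATA" message format are hypothetical.          */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int
main(void)
{
  char reply[1024];
  ssize_t nread;
  struct addrinfo hints={0}, *res;
  const char *request="METADATA survey-image.fits\n";  /* Hypothetical. */

  /* Resolve the (hypothetical) data-center server and connect.        */
  hints.ai_socktype=SOCK_STREAM;
  if( getaddrinfo("datacenter.example.org", "5555", &hints, &res) )
    { fprintf(stderr, "could not resolve server\n");  return 1; }
  int sock=socket(res->ai_family, res->ai_socktype, res->ai_protocol);
  if( sock<0 || connect(sock, res->ai_addr, res->ai_addrlen) )
    { fprintf(stderr, "could not connect\n");  return 1; }

  /* Ask only for the few bytes of metadata (numeric type, image size,
     etc.) that the client-side program needs to plan its steps.       */
  write(sock, request, strlen(request));
  nread=read(sock, reply, sizeof reply - 1);
  if(nread>0) { reply[nread]='\0';  printf("Server metadata: %s\n", reply); }

  close(sock);
  freeaddrinfo(res);
  return 0;
}

In the scenario above, the heavy-duty processing and the transfer of output patches would then go over the same kind of connection, just with more message types.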

This can be done in parallel with task #14779 (Enable usage in HTCondor at configure time), and may not be totally independent.

This is mainly a brainstorm for now; we will be looking into the details of implementing it. So please leave your thoughts or comments (the more critical, the better) on this issue on GNU Savannah (you just need to create an ID in Savannah to post comments).

Mohammad Akhlaghi <makhlaghi>
Group administrator

 


No files currently attached

 

Depends on the following items: None found

Items that depend on this one: None found

 


     
