|
80 | 80 | "import lsst.afw.table as afwTable\n", |
81 | 81 | "import lsst.geom as geom\n", |
82 | 82 | "\n", |
|  | 83 | + "# Import Pipeline tasks\n", |
83 | 84 | "from lsst.pipe.tasks.characterizeImage import CharacterizeImageTask\n", |
84 | 85 | "from lsst.pipe.tasks.calibrate import CalibrateTask\n", |
85 | 86 | "from lsst.meas.algorithms.detection import SourceDetectionTask\n", |
|
94 | 95 | "outputs": [], |
95 | 96 | "source": [ |
96 | 97 | "# Use lsst.afw.display with the matplotlib backend\n", |
97 | | - "afwDisplay.setDefaultBackend('matplotlib')" |
| 98 | + "afwDisplay.setDefaultBackend('matplotlib')\n", |
| 99 | + "plt.rcParams['figure.figsize'] = (8.0, 8.0)" |
98 | 100 | ] |
99 | 101 | }, |
100 | 102 | { |
|
114 | 116 | "source": [ |
115 | 117 | "## 1.0 Data access\n", |
116 | 118 | "\n", |
117 | | - "Here we use the `butler` to access a `calexp` from the DP0.1 dataset. More information on the `butler` and `calexp`, and how to determine the `dataId` (e.g., tract and patch, or as in the example below, visit id, raftName, and detector) are available in other tutorials. Here we start assuming that information is known, and assuming a basic understanding of `butler` use.\n", |
|  | 119 | + "Here we use the `butler` to access a `calexp` from the DP0.1 dataset. More information on the `butler` and `calexp`, and on how to determine the `dataId` (e.g., the visit and detector numbers), is available in other tutorials. Here we assume that information is known, and that the reader has a basic understanding of `butler` use.\n", |
118 | 120 | "\n", |
119 | 121 | "If a warning in pink is output below, it is related to the recent gen2 to gen3 butler upgrade, and can be ignored." |
120 | 122 | ] |
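For reference, a minimal sketch of the retrieval step described above; the repository string, collection, and dataId values here are placeholder assumptions, not the notebook's actual values:

```python
from lsst.daf.butler import Butler

# Hypothetical repo and collection names -- substitute the DP0.1 values
butler = Butler('dp01', collections='2.2i/runs/DP0.1')

# Hypothetical dataId: a visit and detector number identify one calexp
dataId = {'visit': 512055, 'detector': 75}
calexp = butler.get('calexp', dataId=dataId)
```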
|
197 | 199 | "outputs": [], |
198 | 200 | "source": [ |
199 | 201 | "# Plot the calexp we just retrieved\n", |
200 | | - "plt.rcParams['figure.figsize'] = (5.0, 5.0)\n", |
201 | 202 | "plt.figure()\n", |
202 | 203 | "afw_display = afwDisplay.Display()\n", |
203 | 204 | "afw_display.scale('asinh', 'zscale')\n", |
|
235 | 236 | "metadata": {}, |
236 | 237 | "outputs": [], |
237 | 238 | "source": [ |
238 | | - "plt.imshow(bkgd.getImage().array, origin='lower', cmap='gray')\n", |
| 239 | + "plt.figure()\n", |
| 240 | + "afw_display = afwDisplay.Display()\n", |
| 241 | + "afw_display.scale('linear', 'zscale')\n", |
| 242 | + "afw_display.mtv(bkgd.getImage())\n", |
239 | 243 | "plt.title(\"Local Polynomial Background\")" |
240 | 244 | ] |
241 | 245 | }, |
|
252 | 256 | "metadata": {}, |
253 | 257 | "outputs": [], |
254 | 258 | "source": [ |
255 | | - "mi = calexp.maskedImage\n", |
256 | | - "mi += bkgd.getImage()" |
| 259 | + "# Note: executing this cell multiple times will add the background multiple times\n", |
| 260 | + "calexp.maskedImage += bkgd.getImage()" |
257 | 261 | ] |
258 | 262 | }, |
259 | 263 | { |
|
262 | 266 | "metadata": {}, |
263 | 267 | "outputs": [], |
264 | 268 | "source": [ |
265 | | - "plt.rcParams['figure.figsize'] = (5.0, 5.0)\n", |
266 | 269 | "plt.figure()\n", |
267 | 270 | "afw_display = afwDisplay.Display()\n", |
268 | 271 | "afw_display.scale('asinh', 'zscale')\n", |
|
319 | 322 | "outputs": [], |
320 | 323 | "source": [ |
321 | 324 | "# Follow the same procedure as before to plot the cutout\n", |
322 | | - "plt.rcParams['figure.figsize'] = (5.0, 5.0)\n", |
323 | 325 | "plt.figure()\n", |
324 | 326 | "afw_display = afwDisplay.Display()\n", |
325 | 327 | "afw_display.scale('asinh', 'zscale')\n", |
|
335 | 337 | "\n", |
336 | 338 | "We now want to run the LSST Science Pipelines' source detection, deblending, and measurement tasks. While we run all three tasks, this notebook is mostly focused on the detection of sources.\n", |
337 | 339 | "\n", |
338 | | - "Recall that these tasks were imported up at the top of this notebook, from `lsst.pipe` and `lsst.meas`. More information can be found at pipelines.lsst.io (on that page, the search bar at upper left is a very handy way to find documentation for a specific task).\n", |
| 340 | + "Recall that these tasks were imported up at the top of this notebook, from `lsst.pipe` and `lsst.meas`. More information can be found at [pipelines.lsst.io](https://pipelines.lsst.io/) (the search bar at the top left of that page is a very handy way to find documentation for a specific task).\n", |
339 | 341 | "\n", |
340 | | - "We start by creating the most minimial schema possible. This schema will be passed to all of the tasks, as we call each in turn, and each task will add columns to this schema as it measures sources in the image." |
| 342 | + "We start by creating a minimal schema for the source table. The schema describes the output properties that will be measured for each source. This schema will be passed to all of the tasks, as we call each in turn, and each task will add columns to this schema as it measures sources in the image." |
341 | 343 | ] |
342 | 344 | }, |
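As a concrete sketch of that setup (assuming the standard `afw.table` entry point, with `dafBase` standing for `lsst.daf.base`):

```python
import lsst.afw.table as afwTable
import lsst.daf.base as dafBase

# Minimal source schema; each task will append its measurement columns
schema = afwTable.SourceTable.makeMinimalSchema()

# PropertyList to capture algorithm metadata from the measurement tasks
algMetadata = dafBase.PropertyList()
```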
343 | 345 | { |
|
361 | 363 | "cell_type": "markdown", |
362 | 364 | "metadata": {}, |
363 | 365 | "source": [ |
364 | | - "### 2.1 Configuration Classes\n", |
| 366 | + "### 2.1 Configuring Tasks\n", |
365 | 367 | "\n", |
366 | | - "Each task possesses an associated configuration class. The properties of these classes can be determined from the classes themselves." |
| 368 | + "Each task possesses an associated configuration class. The properties of these configuration classes can be determined from the classes themselves." |
367 | 369 | ] |
368 | 370 | }, |
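For example, one way to inspect a configuration class interactively; this is a sketch, and `psfIterations` is just one field of `CharacterizeImageTask.ConfigClass` (it is set for real in the cell below):

```python
# Instantiate the default config and browse its fields and documentation
config = CharacterizeImageTask.ConfigClass()
print(config.psfIterations)  # current value of one configuration field
help(config)                 # full listing of available fields with docs
```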
369 | 371 | { |
|
384 | 386 | "cell_type": "markdown", |
385 | 387 | "metadata": {}, |
386 | 388 | "source": [ |
387 | | - "As a starting point, like the `schema` and `algMetadata` above, here we set some basic config parameters to get you started." |
|  | 389 | + "Like the `schema` and `algMetadata` above, here we set some basic config parameters and instantiate the tasks to get you started. In this case, we configure four different tasks:\n", |
| 390 | + "\n", |
|  | 391 | + "* CharacterizeImageTask: Characterizes the image properties (e.g., the PSF)\n", |
| 392 | + "* SourceDetectionTask: Detects sources\n", |
|  | 393 | + "* SourceDeblendTask: Deblends sources into their constituent children\n", |
| 394 | + "* SingleFrameMeasurementTask: Measures source properties" |
388 | 395 | ] |
389 | 396 | }, |
390 | 397 | { |
|
393 | 400 | "metadata": {}, |
394 | 401 | "outputs": [], |
395 | 402 | "source": [ |
| 403 | + "# Characterize the image properties\n", |
396 | 404 | "config = CharacterizeImageTask.ConfigClass()\n", |
397 | 405 | "config.psfIterations = 1\n", |
398 | 406 | "charImageTask = CharacterizeImageTask(config=config)\n", |
399 | 407 | "\n", |
| 408 | + "# Detect sources\n", |
400 | 409 | "config = SourceDetectionTask.ConfigClass()\n", |
401 | | - "\n", |
402 | 410 | "# detection threshold in units of thresholdType\n", |
403 | 411 | "config.thresholdValue = 10\n", |
404 | | - "\n", |
405 | 412 | "# units for thresholdValue\n", |
406 | 413 | "config.thresholdType = \"stdev\"\n", |
407 | | - "\n", |
408 | 414 | "sourceDetectionTask = SourceDetectionTask(schema=schema, config=config)\n", |
| 415 | + "\n", |
| 416 | + "# Deblend sources\n", |
409 | 417 | "sourceDeblendTask = SourceDeblendTask(schema=schema)\n", |
410 | 418 | "\n", |
| 419 | + "# Measure source properties\n", |
411 | 420 | "config = SingleFrameMeasurementTask.ConfigClass()\n", |
412 | 421 | "sourceMeasurementTask = SingleFrameMeasurementTask(schema=schema,\n", |
413 | 422 | " config=config,\n", |
|
685 | 694 | "source": [ |
686 | 695 | "## 3.0 Footprints\n", |
687 | 696 | "\n", |
688 | | - "To quote [Bosch et al. (2017)](https://arxiv.org/pdf/1705.06766.pdf), \n", |
| 697 | + "Object footprints are an integral component of the high-level CCD processing tasks (e.g., detection, measurement, and deblending). To quote [Bosch et al. (2017)](https://arxiv.org/pdf/1705.06766.pdf), \n", |
689 | 698 | "\n", |
690 | 699 | "> Footprints record the exact above-threshold detection region on a CCD. These are similar to SExtractor’s “segmentation map”, in that they identify which pixels belong to which detected objects\n", |
691 | 700 | "\n", |
692 | | - "As you might expect, this means footprints are integral to high-level CCD processing tasks—like detection, measurement, and deblending—which directly impact science results. Because footprints are so closely related to these very important processes, we will take a look at them in this notebook.\n", |
693 | | - "\n", |
694 | | - "In the quote above, an analogy was drawn between footprints and segmentation maps, as they both identify above threshold pixels. As we first introduce footprints, we will concentrate on this similarity as it gives us a place to start understanding the location and geometeric properties of footprints. " |
|  | 701 | + "This quote draws an analogy between footprints and segmentation maps, since both identify pixels with values above some threshold. The similarity is a useful one: it gives us a place to start understanding the properties of footprints." |
695 | 702 | ] |
696 | 703 | }, |
697 | 704 | { |
698 | 705 | "cell_type": "markdown", |
699 | 706 | "metadata": {}, |
700 | 707 | "source": [ |
701 | | - "We will use the `detectFootprints` method in `SourceDetectionTask` to find and store the detected footprints in the image" |
|  | 708 | + "The result of the `SourceDetectionTask` stores the footprints associated with the detected objects." |
702 | 709 | ] |
703 | 710 | }, |
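A sketch of one way to get at those footprints; this assumes the `run(table, exposure)` signature of `SourceDetectionTask` and that its result's `positive` attribute holds the above-threshold `FootprintSet`:

```python
# Make a source table from the schema and run detection on the calexp
tab = afwTable.SourceTable.make(schema)
result = sourceDetectionTask.run(tab, calexp)

# 'positive' holds the FootprintSet of above-threshold detections
fps = result.positive.getFootprints()
print(len(fps), "footprints detected")
```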
704 | 711 | { |
|
736 | 743 | "source": [ |
737 | 744 | "### 3.1 Heavy Footprints\n", |
738 | 745 | "\n", |
739 | | - "To extract the actual pixel values that correspond to the ones in the span, we need an additional step. At the moment, our footprints can tell you if a pixel belongs to it or not, but are not accessing pixel values on the image. To remedy this, we will turn our footprint into a `HeavyFootprint`. HeavyFootprints have all of the qualities of Footprints, but additionally 'know' about pixel level data from the image, variance, and mask planes." |
|  | 746 | + "At the moment, our footprints indicate which pixels they consist of, but not the values of those pixels in the image. To extract the image values of the pixels in the footprint's span, we need to convert our footprint into a `HeavyFootprint`. HeavyFootprints have all of the qualities of Footprints, but additionally 'know' about pixel-level data from the image, variance, and mask planes." |
740 | 747 | ] |
741 | 748 | }, |
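A minimal sketch of that conversion, assuming `makeHeavyFootprint` from `lsst.afw.detection`, which copies image, mask, and variance pixels from a `MaskedImage` into the footprint:

```python
import lsst.afw.detection as afwDetect

# Attach pixel-level image, mask, and variance data to the first footprint
hfp = afwDetect.makeHeavyFootprint(fps[0], calexp.maskedImage)

# Pixel values come back as flat 1D arrays over the footprint's span
print(hfp.getImageArray().shape)
```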
742 | 749 | { |
|
781 | 788 | "cell_type": "markdown", |
782 | 789 | "metadata": {}, |
783 | 790 | "source": [ |
784 | | - "Now we can use the spanset to reassemble the image array into the footprint. Above we saw that the image array is a 1D numpy array-but the footprint itself is 2 dimensional. Fortunately, the span set has an `unflatten` method that we will use, which can rearrange the image array into the proper 2 dimensional shape. If you want to change the colormap, see [matplotlib colormap options](https://matplotlib.org/stable/tutorials/colors/colormaps.html)." |
|  | 791 | + "Now we can use the span set to reassemble the image array into the footprint. Above we saw that the image array is a 1D numpy array, but the footprint itself is 2D. Fortunately, the span set has an `unflatten` method that rearranges the image array into the proper 2D shape. If you want to change the colormap, see [matplotlib colormap options](https://matplotlib.org/stable/tutorials/colors/colormaps.html)." |
785 | 792 | ] |
786 | 793 | }, |
787 | 794 | { |
|
790 | 797 | "metadata": {}, |
791 | 798 | "outputs": [], |
792 | 799 | "source": [ |
793 | | - "plt.rcParams['figure.figsize'] = (6.0, 6.0)\n", |
| 800 | + "plt.figure()\n", |
794 | 801 | "plt.imshow(fps[0].getSpans().unflatten(hfps[0].getImageArray()),\n", |
795 | 802 | " cmap='bone', origin='lower')" |
796 | 803 | ] |
|
831 | 838 | "cell_type": "markdown", |
832 | 839 | "metadata": {}, |
833 | 840 | "source": [ |
834 | | - "The values are the exponent of the bitmask. So pixels only marked detected will be 2^5 = 32. Pixels that are both on the edge and detected will be 2^5 + 2^4 = 48. Now we will visualize this in a similar manner to the imshow exercise we did before, only now we are *only* using data for the footprint because we are using the span." |
|  | 841 | + "Each mask plane corresponds to one bit, so a plane assigned bit n contributes 2^n to a pixel's mask value. Pixels marked only as detected (bit 5) will have the value 2^5 = 32. Pixels that are both on the edge of the original image (bit 4) and detected will have 2^5 + 2^4 = 48. We will visualize the mask plane values in a similar manner as before, except that we will be displaying the values of the mask array." |
835 | 842 | ] |
836 | 843 | }, |
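To see which bit each mask plane occupies, the mask's plane dictionary can be inspected; a sketch (exact bit assignments can vary by dataset):

```python
# Map of mask plane name -> bit index, e.g. {'EDGE': 4, 'DETECTED': 5, ...}
print(calexp.mask.getMaskPlaneDict())

# A pixel's mask value is the sum of 2**bit over all planes set for it
print(2**5, 2**5 + 2**4)  # 32 (detected only), 48 (detected + edge)
```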
837 | 844 | { |
|
840 | 847 | "metadata": {}, |
841 | 848 | "outputs": [], |
842 | 849 | "source": [ |
843 | | - "plt.figure(figsize=(6, 6))\n", |
844 | | - "ax = plt.gca()\n", |
845 | | - "\n", |
| 850 | + "plt.figure()\n", |
846 | 851 | "im = plt.imshow(fps[0].getSpans().unflatten(hfps[0].getMaskArray()),\n", |
847 | 852 | " origin='lower')\n", |
848 | 853 | "\n", |
849 | 854 | "# Create a new axis, \"cax\" on the right side of the image display.\n", |
850 | 855 | "# The width of cax will be 5% of the axis \"ax\".\n", |
851 | | - "# The padding between cax and ax will be fixed at 0.05 inch.\n", |
| 856 | + "# The padding between cax and ax will be 1% of the axis.\n", |
| 857 | + "ax = plt.gca()\n", |
852 | 858 | "divider = make_axes_locatable(ax)\n", |
853 | | - "cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n", |
| 859 | + "cax = divider.append_axes(\"right\", size=\"5%\", pad=\"1%\")\n", |
854 | 860 | "plt.colorbar(im, cax=cax, ticks=[0, 32, 32+16])" |
855 | 861 | ] |
856 | 862 | }, |
|