Estimates of spatially complete, observational data-driven planetary boundary layer height over the contiguous United States
- 1Department of Civil, Structural and Environmental Engineering, University at Buffalo, Buffalo, NY, USA
- 2Department of Earth and Environmental Sciences, Boston University, Boston, MA, USA
- 3NASA Langley Research Center, Hampton, VA, USA
- 4Space Sciences Engineering Center, University of Wisconsin-Madison, Madison, WI, USA
- anow at: Department of Geography, Clark University, Worcester, MA, USA
Abstract. This study aims to generate a spatially complete planetary boundary layer height (PBLH) product over the contiguous US (CONUS) with minimal biases from observational data sets. An eXtreme Gradient Boosting (XGB) regression model was developed using selected meteorological and geographical data fields as explanatory variables to fit the PBLH values derived from Aircraft Meteorological DAta Relay (AMDAR) hourly profiles at 13:00–14:00 local solar time (LST) during 2005–2019. The PBLH predictions from this work, together with PBLHs from three reanalysis datasets (ERA5, MERRA-2, and NARR), were evaluated against independent PBLH observations from spaceborne lidar (CALIPSO), airborne lidar (High Spectral Resolution Lidar, HSRL), and in-situ aircraft profiles from the DISCOVER-AQ campaigns. Compared with the reanalysis PBLH products, the model prediction from this work generally shows closer agreement with the reference observations; the reanalysis datasets show significant high biases over the western CONUS relative to the reference observations. One direct application of this dataset is that it enables sampling PBLH at the sounding locations and times of sensors aboard satellites with early-afternoon overpass times, e.g., the A-train, Suomi-NPP, JPSS, and Sentinel-5 Precursor sensors. Since both AMDAR and ERA5 are continuous at hourly resolution, future work is expected to extend the observational data-driven PBLHs to other daytime hours for the TEMPO mission.
-
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Journal article(s) based on this preprint
Zolal Ayazpour et al.
Interactive discussion
Status: closed
-
RC1: 'Comment on amt-2022-235', Anonymous Referee #2, 26 Sep 2022
Overall Recommendation
This is a well-written and well-presented manuscript that describes a novel approach to estimate PBL height (PBLH) over the CONUS region that extends discrete aircraft (AMDAR) measurements spatially using machine learning and the inherent relationships between PBLH and other land-atmosphere variables within reanalysis products (ERA5). There is a strong community need for improved PBL products across space and time, including PBLH, that extend beyond traditional (synoptic radiosonde, field campaign, or limited spaceborne) observations. Capturing the diurnal cycle in terms of PBL evolution is also a high priority for the scientific community with many potential applications (air quality, NWP, etc.). The approach seems to work well overall, from a large-scale, seasonal perspective when compared with reanalysis and observations (spaceborne and aircraft). This manuscript will ultimately be valuable to the community, and the work is appropriate for this journal and audience. I do have some major concerns and suggestions related to addressing the uncertainties in some of the PBLH products, and also including a focus on sub-seasonal variability (day-to-day) in the profiles and PBLH estimates.
Major Comments
The main issue and limitation of this work is in the validation of this new PBLH product. Because it is novel, and there is such a need, there are not any observationally-based 'truth' products to work with that don't each have their own large uncertainties and limitations. CALIPSO is used here as 'truth', but has significant issues in terms of estimating automated PBLH products (dependent on PBL regime, signal to noise, elevated aerosol gradients, clouds, etc.). So the analyses and intercomparisons are more relative than absolute, in terms of comparing the new AMDAR/ML PBLH vs. models (reanalyses) vs. observations (CALIPSO and aircraft-based lidar). As a sanity check, this comparison is useful, but it does not sufficiently reflect whether the new PBLH product is accurate on day-to-day or diurnal timescales.
Another concern is related to the diversity in the PBLH estimation techniques used in all these products that are being evaluated. L45 (paragraph). Can the authors say something about the methods used in these comparisons? Using CALIPSO implies that PBLH is based on aerosol backscatter gradients, which is quite distinct from what each model PBLH is based upon thermodynamically (not to mention their respective PBLH approach that is mentioned earlier). Neither is actually 'wrong' or 'right', as they are looking for the top of the mixed layer or the top of the PBL turbulence, or T and RH gradients. The conclusions presented here suggest that CALIPSO aerosol-based PBLH is the 'truth' but also AMDAR-based thermodynamic PBLH is the 'truth', and both cannot be the case. These are relative intercomparisons that show that the models deviate from other estimates, but has it been shown when looking at T and q profiles that the models actually do 'overestimate' the true PBLH?
To reiterate the comment about the diversity of PBLH methods in the models and observations, what are the implications of comparing PBLH derived in 6 different ways (AMDAR, ERA, NARR, M2, plus CALIPSO, and HSRL)? There are tendencies from each method (TKE, RiB, etc.) in terms of the PBLH they capture and under what regimes they perform well/poorly. None is perfect, but comparing across all of them is problematic in a blanket sense.
Fig. 6: It would be nice to see a more detailed, nuanced analysis zooming into regions and sub-seasons (even day-to-day). This generally looks like a good result for XGB, but as we know a lot of important variability and biases can be masked out on the seasonal timescale. Could the authors provide this even if in supplemental form?
Section 5: Given the diversity in models, observations, and PBLH estimation approaches discussed above, it would be helpful to include a more direct analysis of individual profiles (not just seasonal composites, or CONUS evaluations). Examining individual profiles would enable a direct comparison across all of these, and a visual aid to actually look at T, q, and aerosol profiles and their estimated PBLH. This would provide insight as to their behavior, and also provide to the reader a 2D vertical perspective of what this paper is all about and how much these can differ. The challenge is in selecting/sampling the locations and times, but that could be done in a single figure with multiple panels, regions, etc. after the authors perform a search of different locations and regimes that they feel are representative. No overall conclusions would be made based on these, but it likely would provide the insight and demonstrate the variability to the reader. Might also expose what CALIPSO is doing.
Minor Comments
Intro: Strong aerosol and AQ motivation here in terms of PBLH. The authors could also mention importance of PBLH for convection, shallow to deep convection, LCL deficit, etc. in terms of the thermodynamic pathways and feedbacks (ultimately on precipitation). Also the PBL mediation of surface fluxes (H and LE), soil moisture, vegetation, entrainment feedbacks. There is a big component of PBLH importance that isn't discussed here and would make the general impact of this work and dataset more robust.
L36: Another reference that looked at PBLH in M2, NARR, and CFSR: https://doi.org/10.1175/JCLI-D-14-00680.1
L79: Also potential applicability to drifting orbits of existing satellites (e.g. EOS) that cover other parts of the diurnal cycle.
L99: Many models and applications have used 0.25. Can the authors justify using 0.5 here? A direct comparison of the results when using 0.5 vs. 0.25 seems important, given the sensitivity to the assumption and the impact on the ultimate product of PBLH.
Section 2.3: For CALIPSO, as McGrath-Spangler has shown, it is very difficult to automate PBLH retrieval due to elevated aerosols, clouds, signal to noise, etc., such that they were only able to confidently produce seasonal climatologies. Because you are estimating your own CALIPSO-based PBLH here on a daily basis, there is likely much greater uncertainty due to these factors on the day-to-day level.
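The kind of automated gradient-based retrieval this comment refers to can be illustrated with a minimal sketch: the PBL top is taken as the altitude of the strongest decrease in aerosol backscatter. The profile below is idealized; a real CALIPSO retrieval must additionally screen the clouds, elevated layers, and low signal-to-noise conditions mentioned above.

```python
# Minimal maximum-gradient PBLH estimate from an aerosol backscatter
# profile. The synthetic profile is a smooth step: well-mixed aerosol
# below ~1500 m, clean air above.
import numpy as np

def pblh_max_gradient(z, backscatter):
    """Return the altitude of the strongest decrease in backscatter."""
    grad = np.gradient(backscatter, z)
    return z[np.argmin(grad)]  # most negative gradient marks the PBL top

z = np.arange(0.0, 4000.0, 30.0)                   # altitude [m]
beta = 1.0 / (1.0 + np.exp((z - 1500.0) / 100.0))  # idealized backscatter
est = pblh_max_gradient(z, beta)
print(est)
```

On this clean synthetic profile the estimate lands at the step; real profiles with elevated aerosol layers can place the strongest gradient well above the true PBL top, which is the uncertainty the referee is flagging.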
Fig. 5: I'm glad the authors included this detailed assessment of the variables more influential in the ML training. This is where the L-A interaction component becomes scientifically interesting and can be learned from. Fig. 5 makes sense (to me) overall in terms of which variables most impact/drive CBL growth, focused on surface heating and buoyancy (as well as the diurnal pattern/memory of the PBL growth itself). Identifying which is most land-driven (vs. water/ocean) makes sense as well. Given the focus of soil moisture in L-A interaction research, it would have been interesting to include soil moisture itself, given its role in controlling the Bowen ratio and surface heating (which are strongly influential as seen here). Also, it is interesting that LHF doesn't emerge along with H. Using EF or Bowen ratio might have been strong as well if included.
XGB Training: Did the authors test the sensitivity of the training result to the dataset used (ERA5 vs. M2, for example)? As presented, the results are strongly dependent on the relationship of AMDAR vs. ERA5 in terms of PBLH.
L319: 'Complement' CALIPSO(?), or is the HSRL superior in terms of identifying a robust PBLH? Does this put the quality of CALIPSO PBLH in question? There may be something to say here about both estimates being based on aerosol backscatter gradients, but yielding such different results vs. XGB.
Section 4.3: This is also interesting, in that the HSRL is based on aerosol backscatter, but the spirals are manually based on other variables, yet yield similar results that are quite different from CALIPSO. Again, I would suggest considering more rigorously the uncertainty in the CALIPSO estimates.
Conclusions: Rather short conclusions that could offer more applicability to the community, and what this dataset could be used for (and how confidently), as well as what new PBL profile observations would be most valuable in the future (e.g. spaceborne or more routine suborbital measurements) to reduce the uncertainty in the estimates applied here.
-
AC1: 'Reply on RC1', Kang Sun, 26 Nov 2022
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2022-235/amt-2022-235-AC1-supplement.pdf
-
RC2: 'Comment on amt-2022-235', Anonymous Referee #1, 10 Oct 2022
Getting the best estimate of the PBLH is a worthwhile endeavor, which can help with our basic understanding of processes associated with the PBL and can be used to help modelling. The manuscript is suitably organized, is generally well written, and includes appropriate figures. As with any empirical technique, the interpretation needs to be done in a way that clearly points out the limitations and bias of choices. In addition, the case must be made that these results are robust. In particular, are the results actually generalizable or is it too dependent on what is or is not included in both the method (e.g., parameter selection) and training data (e.g. which airports or years used). This is a common issue in applying any sort of empirical method, but it is really important to be clear about this. Otherwise, one could repeat these steps, but alter a couple of choices, and come up with different results.
Major comments:
Input data: Data is preferentially excluded. For instance, if the AMDAR PBLH is too far away from the ERA5 estimate it is thrown out, so as stated around line 115, “…which accounted for about one third of all AMDAR data, half of data under stable condition, and only 10% of data under convective condition.” This has many implications for all subsequent analysis. Importance of the ERA5 PBLH (at times 0, -1, and -2) for permutation and SHAP feature is extremely large (Fig. 5). Throwing out all “bad” AMDAR data contributes to that importance, and basically implies overfitting to the ERA5 PBLH. The method itself accounts for overfitting, but if the input data is already filtered to get rid of ‘bad’ data before the method is applied, it will artificially create a ‘better’ fit.
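For readers unfamiliar with the permutation importance the referee refers to, the diagnostic can be hand-rolled in a few lines: shuffle one predictor at a time and record how much the model's error grows. The model and data below are synthetic stand-ins (an ordinary least-squares fit, not the paper's XGB model).

```python
# Hand-rolled permutation importance: break one feature's link to the
# target by shuffling it, then measure the increase in prediction error.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 2))
y = 3.0 * X[:, 0] + rng.normal(0, 0.1, n)  # only feature 0 matters

# Fit a linear model by ordinary least squares
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((X @ coef - y) ** 2)

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # shuffle feature j
    importance.append(np.mean((Xp @ coef - y) ** 2) - base_mse)

print(importance)  # feature 0 should dominate
```

The referee's point is that if training samples disagreeing with ERA5 are filtered out beforehand, the shuffled-ERA5-PBLH error increase (and hence its importance) is inflated by construction, regardless of how the importance itself is computed.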
Comparisons: Fig. 3 gives distributions of PBLH from various datasets at various locations, times, and sample sizes. If the point of the figure is to show how different places and times have different distributions of PBLH, is this really necessary? If the point of this figure is to compare distributions of PBLH obtained from different data sets, then the data sets must use the same locations and times for a fair comparison. Otherwise, the differences seen in the plot have no meaning since the differences could just be a result of when it was sampled. As it is now in Fig. 3a, CALIPSO and AMDAR have extremely different distributions, so the results of essentially no relationship in Fig. 7 is not surprising.
Mountain West: Given the high average PBLH in the mountain west compared to the rest of the country, the variance is likely to be much larger as well. This has a couple major implications. First, any differences between data sets are likely dominated by differences in the mountain west. Has this been assessed with this data set? This could be done fairly easily in two ways. Either use only the eastern or western half of CONUS and repeat the analysis, or normalize by PBLH. Again, because this is an empirical method, the results could be much different by sector.
PBLH Reference: With PBLH, as we are all aware, there is no ‘gold standard’ that is a reliable reference for comparison given limitations in spatial or temporal resolution, retrieval method, etc. When comparing XGB with the reanalysis and CALIOP, it is not clear if the same time periods are used. For instance, the AMDAR analysis covers 2005 to 2019 (Line 189), but CALIOP covers 2006 to 2013 (Line 150). So do all these comparisons use a consistent period of time? If not, this may lead to biases from using different times.
Tuning and Training: Selecting 800 trees with a depth of 8, which is a large amount, still results in a rather large IQR for the test set, even considering differences of sample size. If this were just an issue with large variance, at least the IQRs would overlap. None of the IQRs between training and testing overlap (and even the 97.5 percentiles barely overlap!), suggesting little utility of using this method outside of the training data. This really points to some large underlying flaw, which could be related to a number of factors.
Minor
Line 125: A good reason to use AMDAR and ERA5 is that they can both use the bulk Richardson number to find PBLH. Even though a critical Ri of 0.5 was used in a previous study with AMDAR, why shouldn’t this work use a consistent critical Ri?
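For reference, the bulk Richardson number diagnostic discussed in this comment can be sketched as below. The profile is idealized and chosen so that the retrieved PBLH visibly shifts between the two critical values under discussion (0.25 vs. 0.5).

```python
# Bulk Richardson number PBLH diagnostic:
#   Ri_b(z) = (g / thetav_s) * (thetav(z) - thetav_s) * (z - z_s) / (u^2 + v^2)
# The PBLH is the lowest level where Ri_b first exceeds a critical value.
import numpy as np

G = 9.81  # gravitational acceleration [m s-2]

def pblh_bulk_richardson(z, thetav, u, v, ri_crit):
    """Lowest altitude where the bulk Richardson number exceeds ri_crit."""
    shear2 = np.maximum(u**2 + v**2, 1e-6)  # guard against zero shear
    ri = (G / thetav[0]) * (thetav - thetav[0]) * (z - z[0]) / shear2
    above = np.where(ri > ri_crit)[0]
    return z[above[0]] if above.size else z[-1]

# Idealized convective profile: mixed layer to ~1200 m, stable above
z = np.arange(0.0, 3000.0, 50.0)                          # altitude [m]
thetav = 300.0 + np.where(z < 1200.0, 0.0,
                          0.004 * (z - 1200.0))           # virtual pot. temp [K]
u = np.full_like(z, 5.0)                                  # wind speed [m s-1]
v = np.zeros_like(z)

results = {ric: pblh_bulk_richardson(z, thetav, u, v, ric)
           for ric in (0.25, 0.5)}
print(results)
```

Even on this smooth profile the two critical values land on different levels, which is why the referee asks for consistency between the AMDAR and ERA5 choices.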
Line 252: Using year as a factor in the final model is a surprising feature since there is no physical basis for this. This suggests that if extending to a new year of 2022, it is not possible to use relationships developed in this model, so it calls the generality or robustness of the model into question.
Fig. 5: Because the BL height at time 0, -1, and -2 is so important in this model, do you think that a linear trend would work just as well to get the BL height? If so, the simpler model is better.
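The simple baseline this comment proposes, a line fitted through the PBLH at hours t−2, t−1, and t and extrapolated one hour ahead, is only a few lines of code. The input values below are hypothetical.

```python
# Linear-trend baseline for the next hourly PBLH from the last three hours,
# the simpler alternative the referee suggests comparing against.
import numpy as np

def linear_trend_pblh(pblh_last3):
    """Extrapolate PBLH to hour t+1 from hours t-2, t-1, t."""
    t = np.array([-2.0, -1.0, 0.0])
    slope, intercept = np.polyfit(t, np.asarray(pblh_last3, float), 1)
    return slope * 1.0 + intercept  # evaluate the fitted line at t = +1

nxt = linear_trend_pblh([800.0, 1000.0, 1200.0])  # hypothetical growth phase
print(nxt)
```

If such an extrapolation matched the XGB skill, it would support the referee's preference for the simpler model; if not, it would quantify what the other predictors add.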
Section 4.1: Using CALIPSO as a benchmark seems problematic; there are many issues with PBLH retrievals from CALIPSO, and Fig. 7 shows that there is really no agreement at all with any data set to CALIPSO.
Line 340: Yes, a natural next step is to extend it to other times, but the above issues would be much worse given the added difficulty of defining the nocturnal boundary layer.
-
AC2: 'Reply on RC2', Kang Sun, 26 Nov 2022
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2022-235/amt-2022-235-AC2-supplement.pdf
Peer review completion