[wrfems] Using the adaptive time step with nested runs (Was: Major Temperature Problem)

Don Van Dyke don.vandyke at noaa.gov
Tue Mar 27 16:24:23 MDT 2012


Bob,

Thanks again very much for working so hard on this problem.  I'm a big 
fan of the adaptive time stepping (provided it works correctly, of 
course) because it allows us to truly maximize the number of runs we 
can do with our current hardware.  When it's working properly, it cuts 
model run times significantly.

-Don

On 3/27/2012 10:06 AM, Robert Rozumalski wrote:
>
> Good morning all,
>
> I'm following up on Don's issue regarding a serious problem with the 
> near-surface fields when running a nested simulation with an adaptive 
> time step.
>
> Symptom of the problem:
>
>   Many near-surface fields within a nested domain, such as the 2 m 
> temperature, stop updating during the course of a simulation.  The 
> problem may be present from the beginning of a run or start at some 
> point during it, and it may start and stop multiple times over a 
> single simulation.  It tends to be sporadic, sometimes occurring over 
> consecutive simulations and then disappearing for an extended period 
> of time.
>
>   The problem only appears when the adaptive time step is used in a 
> nested simulation.
>
>
> After days of scouring the code and running many dozens of test 
> simulations, I've determined that the nested-domain tendencies coming 
> out of the LSM and PBL schemes are not being added to the model 
> perturbation fields.  The problem appears to be due to the timing 
> between the ever-changing model time step and the frequency of calls 
> to the physics schemes.  My guess is that the tendency fields are 
> being zeroed out before they are added to the model perturbation 
> fields, but I'm not completely certain that this is the exact cause.
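>
> To illustrate the suspected mechanism with a toy example (a 
> simplification written for this note, not the actual WRF code): 
> physics schemes are called on a fixed interval, and with a constant 
> time step the step boundaries land exactly on those call times, while 
> an adaptive step drifts across them.
>
>    # Toy model of interval-based physics calls vs. a varying time step.
>    def physics_call_times(steps, interval):
>        """Return the model times at which 'physics' fires, firing
>        whenever the scheduled call time has been reached or passed."""
>        t, next_call, calls = 0.0, 0.0, []
>        for dt in steps:
>            if t >= next_call:
>                calls.append(t)
>                next_call += interval
>            t += dt
>        return calls
>
>    fixed    = physics_call_times([60.0] * 30, 300.0)
>    adaptive = physics_call_times([45.0 + (i % 7) * 10 for i in range(30)], 300.0)
>    print(fixed[:4])     # [0.0, 300.0, 600.0, 900.0] -- exactly on the marks
>    print(adaptive[:4])  # [0.0, 325.0, 625.0, 945.0] -- drifts past the marks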
>
> I've been able to correct the problem by changing the frequency of 
> calls to the physics schemes (CUDT, BLDT, and RADT) and modifying the 
> adaptive time step parameters for Don's case, but I need to run some 
> additional simulations in order to develop general configuration 
> guidelines; the settings involved are sketched below.
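>
> For reference, these knobs live in the &physics and &domains sections 
> of the WRF namelist (which the EMS manages through its own 
> configuration files).  The values below are only placeholders for a 
> 12 km/4 km setup, not the settings I've settled on for Don's case:
>
>    &physics
>      radt = 12, 12,   ! radiation call frequency (minutes)
>      bldt = 0,  0,    ! PBL call frequency; 0 = every time step
>      cudt = 0,  0,    ! cumulus call frequency; 0 = every time step
>    /
>
>    &domains
>      use_adaptive_time_step = .true.,
>      step_to_output_time    = .true.,   ! land exactly on output times
>      target_cfl             = 1.2, 1.2,
>      max_step_increase_pct  = 5,   51,  ! per-step growth limit (%)
>      starting_time_step     = -1,  -1,  ! -1 = default based on dx
>      max_time_step          = 72,  24,
>      min_time_step          = -1,  -1,  ! -1 = model default
>    /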
>
> I hope to have this information available this afternoon.
>
> Bob
>
>
>
>
> On 3/19/12 12:16 PM, Don Van Dyke wrote:
>> Thanks for your help.  Unfortunately, I only saved the GRIB files 
>> from a failed run and didn't think to save the raw netCDF files, so 
>> I don't have any failed netCDF files to look at for the moment.  The 
>> 12Z run today worked fine.  The only change we've made so far is to 
>> remove the RUC from the initialization and just go with the NAMPTILE 
>> and NAMPTILELSM with the SPoRT SSTs.  If another run fails, I'll copy 
>> the netCDF files off to look at.  I really hope the cause is not the 
>> adaptive time step somehow, since that's the only way we can cram 4 
>> runs per day onto that computer.  However, we've had the adaptive 
>> time step running for around a month now without any problems at all.
>>
>> This issue brought up another question about the RUC data.  The RUC 
>> is scheduled to be discontinued quite soon and replaced by the RAP.  
>> Will we be able to use the RAP data right away, or will we need to 
>> wait for another build and resort to using something other than the 
>> RUC in the meantime?  Thanks again!
>>
>> -Don
>>
>> Robert Rozumalski wrote:
>>>
>>> Hello Don,
>>>
>>> I've attempted to run a simulation using your configuration and got 
>>> an interesting result: a segmentation fault after 23 hours.
>>>
>>> Typically, a segmentation fault that occurs well into a simulation 
>>> suggests a problem with the WRF code.  A seg fault occurs when the 
>>> model attempts to access a block of memory it is not allowed to 
>>> touch, so it's possible you are hitting the same underlying error 
>>> without a fault: WRF may be reading from memory it is allowed to 
>>> access, just not the correct location, so the simulation continues 
>>> with bad data.
>>>
>>> For a failed run, take a look at the domain 2 WRF netCDF output 
>>> files in the wrfprd directory with "ncview":
>>>
>>>    %  ncview <netCDF file>
>>>
>>>        Look at:   2D Vars -> T2    (2 m temperature)
>>>
>>> Start at the beginning of the run and work forward in time. Do you 
>>> see anything unusual?
>>>
>>> I suspect you will see an increase in the amount of white area with 
>>> time. The white indicates that the data are bad.
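>>>
>>> If it's easier to check non-interactively, here is a quick sketch 
>>> (untested on your files; the filename is just an example) that uses 
>>> the Python netCDF4 module to print the domain-wide min/max of T2 at 
>>> each output time.  If the numbers stop changing from one time to 
>>> the next, the field has stalled:
>>>
>>>    from netCDF4 import Dataset
>>>
>>>    nc = Dataset("wrfout_d02_2012-03-18_06:00:00")  # example filename
>>>    t2 = nc.variables["T2"]  # 2 m temp (Time, south_north, west_east), K
>>>    for i in range(t2.shape[0]):
>>>        fld = t2[i, :, :]
>>>        print(f"time {i:3d}  min {fld.min():7.2f} K  max {fld.max():7.2f} K")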
>>>
>>> I'm working to determine the cause of the issue.
>>>
>>> Bob
>>>
>>>
>>>
>>> On 3/18/12 6:19 AM, Don Van Dyke wrote:
>>>> Bob and others,
>>>>
>>>> We have developed a major temperature problem with our local WRF 
>>>> here at WFO Tallahassee in the last two days.  For some unknown 
>>>> reason, the surface and boundary layer temperatures occasionally 
>>>> do not rise during the daylight hours on our inner 4 km nest.  We 
>>>> run a 12 km/4 km configuration.  What is even stranger is that the 
>>>> 12 km outer domain is just fine, and the problem seems 
>>>> intermittent.  The nest is configured as a one-way nest, though, 
>>>> so I guess the problem must be isolated to the 4 km domain somehow.
>>>>
>>>> I first noticed the problem with the 12Z March 17th run; the 
>>>> subsequent 18Z and 00Z runs were fine, and then the problem 
>>>> reappeared in the 06Z March 18th run.  I've attached some 
>>>> screenshots as an example.  In the 12Z March 17th run, the 4 km 
>>>> model forecast areas of fog all day on the 18th with highs near 
>>>> 60, while the 12 km parent domain was normal with highs in the 
>>>> 80s.  I've also attached the log files from the 06Z March 18th run.
>>>>
>>>> We've never seen this happen before, and we're at a loss to 
>>>> explain why it only affects the 4 km nest and not the 12 km run, 
>>>> and why the problem seems to be intermittent.  Also, to my 
>>>> knowledge, nothing has changed on our computer other than my 
>>>> recently adding cron jobs for 00Z and 12Z runs.  (We previously 
>>>> ran only at 06Z and 18Z.)  However, I did not change anything in 
>>>> the model configuration; this just suddenly started happening.  
>>>> Thanks for any help anyone can give!
>>>>
>>>> Don Van Dyke
>>>> General Forecaster
>>>> NWS Tallahassee, FL
>>>>
>
>
> -- 
> Robert A. Rozumalski, PhD
> NWS National SOO Science and Training Resource Coordinator
>
> COMET/UCAR PO Box 3000   Phone:  303.497.8356
> Boulder, CO 80307-3000