In Renowden’s latest apologia at Hot Topic it is quite difficult to discern Brett Mullan’s arguments through the thicket of abuse and misdirection created by Renowden. But I think these are the debating points he’s trying to make, lined up with the passages in which he makes them.
Point 1
When he says:
Let me pose a question. What does Dedekind think Rhoades and Salinger were doing in their 1993 paper? Indulging in a purely theoretical exercise? In fact, they developed their techniques by working on what became the Seven Station Series (7SS), and from 1992 onwards the 7SS was compiled using RS93 methods properly applied.
We’ll call that Debating point 1. From 1992 onwards the 7SS was recalculated using the Rhoades & Salinger (1993) adjustment techniques.
Point 2
Just to be clear, when I said in the original post that the use of one or two year periods is not adequate, I was using the RS93 terminology of k=1 and k=2; that is, k=2 means 2 years before and after a site change (so 4 years in total, but a 2-year difference series which is tested for significance against another 2-year difference series).
Page 3 in the 1992 NZ Met Service Salinger et al report. The final paragraph clearly states k=2 and k=4 were used.
Top of page 1508 in Peterson et al 1998: “Homogeneity adjustments of in situ atmospheric climate data: a review”, International J. Climatology, 18: 1493-1517. Clearly states k=1, 2 and 4 were considered.
Debating point 2. From 1992 onwards the 7SS was recalculated using k=2 and k=4 for the comparison periods.
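For concreteness, the k terminology can be put in code. A minimal sketch (Python, synthetic numbers, not any actual station data), on the reading above that k compares the k years after a site change with the k years before, month by month:

```python
def rs93_difference_series(monthly, change_idx, k):
    # monthly: list of monthly mean temperatures, oldest first.
    # change_idx: index of the first month AFTER the site change.
    # Each term is a post-change month minus the same calendar month
    # k years earlier, over 12*k months, so k=2 compares the two years
    # after the change with the two years before it.
    return [monthly[change_idx + j] - monthly[change_idx + j - 12 * k]
            for j in range(12 * k)]

# Synthetic example: a flat record with a 1.0 degC step at month 24.
temps = [10.0] * 24 + [11.0] * 24
print(rs93_difference_series(temps, 24, 1))  # twelve 1.0 values
```

A site-change shift then shows up as a non-zero mean of this series; how many months enter the correlation and significance calculations is exactly what the choice of k is about.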
Point 3
During the discovery process before the High Court proceedings, Barry Brill and Vincent Gray examined a set of storage boxes at NIWA — dubbed the “Salinger recall storage boxes” — that contained (amongst other things) all of Jim Salinger’s original calculations for the 1992 reworking of the 7SS.
Debating point 3. All of Jim Salinger’s original calculations for the 1992 version were made available during discovery in the High Court proceedings.
Point 4
Dedekind should be aware that NIWA did consider max and min temperatures — which is essential if you are only going to apply adjustments if they achieve statistical significance. The evidence is there in the Technical Notes supplied to his co-author Barry Brill two years before dFDB 2014 was submitted to EMA. It’s even in the 7SS Review document NIWA produced explaining the process they used to create the latest 7SS. The Review may emphasise the mean temperature shifts but NIWA obviously had to have calculated the max and min shifts for the Review to mention them at all. Mullan (2012) also considers max and min temperatures when applying RS93, and shows why it is important to do so.
Dedekind should, therefore, be well aware that NIWA did not use “old” techniques for the new 7SS, and that they calculated adjustments for maximum and minimum temperatures as well as mean temperatures.
Debating point 4. In compiling Mullan et al. (2010) minimum/maximum data were looked at as well as the mean data used for the publication. These calculations were supplied to BOM.
Point 5
Shifts to maximum and minimum temperatures were calculated by NIWA for the 2010 Review; the statistical significance of all shifts was calculated too. The significance tests were done relative to each comparison (reference) site, rather than evaluating an overall significance level after combining sites as RS93 did.
Debating point 5. The statistical significance of each adjustment was calculated during preparation of the 2010 Review.
Point 6
Estimating anomalies is certainly the correct approach in place of using climatology. But it doesn’t appear Dedekind has done this for Masterton in dFDB 2014. Table 3 in the paper shows no adjustment made for the 1920 site move, but if you apply RS93 k=2 — their preferred method — this would change to -0.3ºC and have to be applied because it meets their statistical significance test.
Debating point 6. The de Freitas et al. (2014) paper does not seem to use correct techniques for infilling missing data at Masterton in May 1920.
Point 7
Dedekind tries [sic] hand wave away the 11SS as having been “thoroughly debunked elsewhere”, but doesn’t link to any debunking. The fact is that the raw station data from rural sites with long records that require no adjustments show strong warming. In other words, the warming seen in the 7SS is not an artefact of site changes or urban warming. That is an important matter, and should have been addressed in dFDB 2014.
Debating point 7. The 11SS is an important record and has not been debunked.
Point 8
Brett Mullan’s 2012 paper Applying the Rhoades and Salinger Method to New Zealand’s “Seven Stations” Temperature series (Weather & Climate, 32(1), 24-38) deals with the correct application of the methodology described in Rhoades and Salinger’s 1993 paper.
At the very least, dFDB 2014 should have addressed the existence of Mullan’s paper, and explained why the application of RS93 in that paper is not preferable to their interpretation of it.
Debating point 8. The de Freitas et al. (2014) paper should have discussed the relevant literature, including Mullan (2012).
Point 9
Dedekind makes much of the fact that the paper does refer to one paper on SSTs around New Zealand — but skips over the essential point: that the SST evidence confirms that warming is occurring faster than they calculate.
Debating point 9. SST around New Zealand is warming faster than the 0.28°C/century shown in the de Freitas et al. (2014) paper.
None of these claims stand up to scrutiny. They will all be disproven here in the next few days.
Nicely laid out – now we’re getting down to it.
Point 8 “Brett Mullan’s 2012 paper Applying the Rhoades and Salinger Method to New Zealand’s “Seven Stations” Temperature series (Weather & Climate, 32(1), 24-38) deals with the correct application of the methodology described in Rhoades and Salinger’s 1993 paper.”
I don’t think it does (see below) but let’s get one thing straight from the outset. The NIWA 7SS as it currently stands is Mullan et al (2010) referred to as M10 (note it’s a NIWA report – not an independent paper):
Mullan, A.B; Stuart, S.J; Hadfield, M.G; Smith, M.J (2010).
Report on the Review of NIWA’s ‘Seven-Station’ Temperature Series
NIWA Information Series No. 78. 175 p.
https://www.niwa.co.nz/sites/niwa.co.nz/files/import/attachments/Report-on-the-Review-of-NIWAas-Seven-Station-Temperature-Series_v3.pdf
The ‘Statistical Audit’ (2011) comprehensively proved that Mullan et al (2010), and previous efforts, did NOT compile the 7SS using RS93 methods properly applied. This automatically disqualifies Renowden’s argument in Point 1:
Point 1 ….. from 1992 onwards the 7SS was compiled using RS93 methods properly applied.
‘Statistical Audit’ here:
https://www.climateconversation.org.nz/docs/Statistical%20Audit%20of%20the%20NIWA%207-Station%20Review%20Aug%202011.pdf
So after the revelations of the ‘Statistical Audit’, something had to be done about the situation. Hence Mullan (2012) which is the Weather and Climate paper – not a NIWA report. It does not supersede the 7SS as it stands, that’s still M10.
So citing Mullan (2012) is something of a strawman, because Mullan et al (2010), the report, and Mullan (2012), the paper, are very different in approach, and the latter does not formally supersede the former. Mullan (2012) is really a peripheral debate, away from the issues of Mullan et al (2010), but let’s look at it anyway, referring to it as M12; see here:
http://www.metsoc.org.nz/publications/journals
M12, not the 7SS as it stands, DOES attempt to apply RS93. But take a look at the correlation and weighting comparisons for Masterton 1920, ‘Statistical Audit’ SI vs M12:
SI, M12 correlation comparison
Thorndon: 0.73, 0.79
Albert Park: 0.58, 0.67
Christchurch Gardens: 0.68, 0.97
Taihape: 0.88, 0.78
SI, M12 weighting comparison
Thorndon: 0.24, 0.21
Albert Park: 0.09, 0.11
Christchurch Gardens: 0.18, 0.48
Taihape: 0.49, 0.20
Very different results – why?
SI here (1920 Y-series page 50 pdf):
https://www.climateconversation.org.nz/docs/Statistical%20Audit%20of%20the%20NIWA%207-Station%20Review%20Aug%202011%20SI.pdf
Both SI and M12 state identical “weighted difference between the y(i) series and the base series y(0)” expressions (Z) but M12 leaves out 2 entire steps in “2. Data and Method (b) Rhoades and Salinger methodology”:
1) The correlation calculation “between each differenced series y(i) (i=1,2,…,n) and y(0)” – ρi
2) The “weights … computed using the 4th power of the correlations” – Wi
M12 does not state any correlation expression used, let alone that of RS93 i.e. there is no way of knowing how M12 arrived at the correlation values stated in M12. Peer review at Weather and Climate missed that.
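The weighting step, at least, is easy to reproduce. A minimal sketch (Python) of the RS93 fourth-power rule, applied to the Audit SI correlations for Masterton 1920 listed upthread:

```python
def rs93_weights(correlations):
    # RS93-style weights: proportional to the 4th power of each
    # comparison-site correlation, normalised to sum to 1.
    fourth = [r ** 4 for r in correlations]
    total = sum(fourth)
    return [f / total for f in fourth]

# Audit SI correlations for Masterton 1920 (Thorndon, Albert Park,
# Christchurch Gardens, Taihape):
si_rho = [0.73, 0.58, 0.68, 0.88]
weights = rs93_weights(si_rho)
# Gives roughly 0.23, 0.09, 0.18, 0.50 - the SI weights (0.24, 0.09,
# 0.18, 0.49) to within rounding of the input correlations.
```

Feeding in M12’s correlations (0.79, 0.67, 0.97, 0.78) gives roughly 0.21, 0.11, 0.48, 0.20, i.e. M12’s own weights. So both sets of weights are internally consistent with the fourth-power rule; the disagreement lies entirely in the correlations themselves.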
Looking at the SI Y-series for Masterton 1920, Christchurch Gardens vs Waingawa, no way is there a 0.97 correlation as M12 calculates. The “Missing May” issue might have some bearing on the Y-series correlations, but M12 does not show a Y-series graph. M12 does show Figure 2:
“R&S temperature adjustments as a function of k for the May 1920 site change in the Masterton reference series, using four comparison sites (Albert Park, Thorndon, Christchurch Gardens and Taihape).”
Except M12 does not actually detail the 2 fundamental R&S steps above when calculating the adjustments (it says nothing about de-trending either). This leads me to think, given the values, that M12 may have calculated correlations from the inter-station raw series rather than from the Y-series as per RS93; does anyone concur?
If I’m right and M12 is wrong then the paper should be retracted.
>”Peer review at Weather and Climate missed that.”
As did Stuart, S.J. (see Mullan et al, 2010)
[M12] – “The author [Brett Mullan] thanks Stephen Stuart (NIWA) for a careful review of the manuscript”
Taking the comparison a little further by including M10:
SI, M10, M12 correlation comparison
Thorndon: 0.73, N/A, 0.79
Albert Park: 0.58, N/A, 0.67
Christchurch Gardens: 0.68, N/A, 0.97
Taihape: 0.88, N/A, 0.78
SI, M10, M12 weighting comparison
Thorndon: 0.24, 0.00, 0.21
Albert Park: 0.09, 0.00, 0.11
Christchurch Gardens: 0.18, 0.00, 0.48
Taihape: 0.49, 0.00, 0.20
>”M12 does not actually detail the 2 fundamental R&S steps above when calculating the adjustments (it says nothing about de-trending either). This leads me to think, given the values, that M12 may have calculated correlations from the inter-station raw series rather than from the Y-series as per RS93; does anyone concur?”
M12 states:
2. Data and Method
(b) Rhoades and Salinger methodology
“Rhoades and Salinger (1993) suggest different
possibilities for the weighting of separate
adjustments from the multiple comparison
sites. In their worked example, they specify
weights that depend on the fourth power of
the inter-station correlations.”
Wrong.
de Freitas et al (2014) states:
5 Method [“RS93”]
5.1 Description
“First, the difference series y(i) are calculated, for 12-month (k=1) and 24-month (k=2) cases;
that is: [see equation]
“The differencing “is intended to remove any seasonal effect,
and, in the absence of a trend or a real effect due to the site
change, yt (i) would be a random variable with zero mean” [9]
(RS93, p. 904).”
“Once all the y(i) series have been assembled, the correlations
ρi are calculated (using k=1 in this case, although other
values of k are permissible [13] (RS93, p.906)) between each
differenced series”
Correlations of the difference series y(i) (the “Y-series”) are not simply “inter-station correlations”. Do I understand this correctly?
Ah, found it. M12 states:
“Note that the correlations should be calculated
from the inter-annual difference series yi
rather than the raw series, in order to
minimise the effect of site changes at the
target site. R&S appear to calculate the
correlations from just the first year of
differences (i.e., k = 1). Since the correlations
will change (sometimes substantially) as more
years of data are considered, we use all 12*k
months in estimating as the ‘standard’
weighting assumption.”
12*(k=1) = 12 months, which is exactly the same as the Audit SI and deF14. M12 doesn’t state the RS93 equation for the correlations (ρi), and neither does it graph the Y-series, so that may be where the difference lies, Audit SI vs M12, upthread.
Why the radical difference in correlation values (ρi) for Masterton 1920 upthread?
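One mechanism that would produce exactly this kind of divergence: correlations computed from raw monthly series are inflated by the seasonal cycle the stations share, while correlations of the inter-annual difference series are not. A synthetic illustration (Python, invented data, nothing to do with the actual station records):

```python
import math
import random

def pearson(a, b):
    # Plain Pearson correlation coefficient.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

random.seed(1)
season = [5.0 * math.cos(2 * math.pi * m / 12) for m in range(48)]
site_a = [s + random.gauss(0, 1) for s in season]  # target site
site_b = [s + random.gauss(0, 1) for s in season]  # comparison site

# Raw inter-station correlation: dominated by the shared seasonal cycle.
r_raw = pearson(site_a, site_b)

# Inter-annual (lag-12) difference series: the season cancels out.
d_a = [site_a[m] - site_a[m - 12] for m in range(12, 48)]
d_b = [site_b[m] - site_b[m - 12] for m in range(12, 48)]
r_diff = pearson(d_a, d_b)
# r_raw comes out high because of the shared cycle; r_diff is much
# lower because only the independent noise survives the differencing.
```

If M12’s 0.97 came from raw series rather than Y-series, a figure that high would be unsurprising. That is only an assumption here, but it fits the numbers.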
[M12] – “Note that the correlations should……”
Yes, “should”. But did Brett Mullan (and Stephen Stuart) actually do what “should” be done? I think not; either:
a) Mullan and Stuart intentionally chose not to compile Y-series because the inter-station correlations gave them the values they wanted, or,
b) Mullan and Stuart simply forgot Y-series should be compiled.
Either way, the absence of Y-series information in M12 is telling.
>”…did Brett Mullan (and Stephen Stuart) actually do what “should” be done?”
Let’s assume that, rather than a) and b) above, Mullan did compile Y-series. There’s no way of knowing from M12 how he actually did it. Sure, the yi expression is stated, but more elaboration is needed.
de Freitas et al (2014) provides the elaboration:
“……..if the station change occurred at the end of
December 1975, y1 for any station (when k=1) is the January
1976 temperature minus the January 1975 temperature. The
term y2 is February 1976 minus February 1975, up to December
1976 minus December 1975. For k=2, y1 is January 1976
minus January 1974, y2 is February 1976 minus February
1974, up to December 1976 minus December 1974.”
M12 says nothing about this. So, for example, in the Masterton 1920 change, was the M12 yi series correct for Christchurch Gardens when k=1? Masterton details here:
https://www.niwa.co.nz/sites/niwa.co.nz/files/import/attachments/Masterton_CompositeTemperatureSeries_13Dec2010_FINAL.pdf
The 1920 change occurred April. y1 for Christchurch Gardens is the May 1920 temperature minus the May 1919 temperature. The term y2 is June 1920 minus June 1919, up to April 1921 minus April 1920.
Is this what Mullan did? Who knows? I’m inclined to think it wasn’t, so now there are possibilities a), b), and c).
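For what it’s worth, the month alignment can be made mechanical, which removes the ambiguity. A minimal sketch (Python; `monthly` is a hypothetical dict of monthly means keyed by (year, month), which in practice would be filled from CliFlo data):

```python
def y_series_k1(monthly, first_month_after_change):
    # 12-term k=1 difference series: each month after the site change
    # minus the same calendar month one year earlier.
    year, month = first_month_after_change
    out = []
    for j in range(12):
        y = year + (month + j - 1) // 12
        m = (month + j - 1) % 12 + 1
        out.append(monthly[(y, m)] - monthly[(y - 1, m)])
    return out
```

For the April 1920 change, y_series_k1(monthly, (1920, 5)) produces exactly the May 1920 minus May 1919 through April 1921 minus April 1920 terms set out above.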
>”y1 for Christchurch Gardens is the May 1920 temperature minus the May 1919 temperature. The term y2 is June 1920 minus June 1919, up to April 1921 minus April 1920. Is this what Mullan did?”
The Audit SI Waingawa vs Christchurch Gardens yi paired series is:
y1, 0.6, -0.1 ……May1920 minus May 1919
y2, 1.3, 0.6 …….June 1920 minus June 1919
y3, -0.4, -0.1……July 1920 minus July 1919
y4, -1.4, -2.4 …..Aug 1920 minus Aug 1919
y5, -0.2, 0.3 ……Sept 1920 minus Sept 1919
y6, -0.5, 0.3 ……Oct 1920 minus Oct 1919
y7, 0.1, 1.1 …….Nov 1920 minus Nov 1919
y8, 1.1, 1.3 …….Dec 1920 minus Dec 1919
y9, 0.9, 0.3 …….Jan 1921 minus Jan 1920
y10, -2.0, -0.4 …Feb 1921 minus Feb 1920
y11, -0.3, -0.6 …Mar 1921 minus Mar 1920
y12, -2.1, -1.9 …Apr 1921 minus Apr 1920
The CliFlo raw monthly data May 1919 to March 1921 will confirm this paired series or otherwise. If confirmed, Mullan cannot have done this because there is obviously not a 0.97 correlation of Christchurch Gardens with Waingawa using the above paired series.
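The 0.97 claim is easy to check arithmetically. A quick sketch (Python) using the paired values as listed above (first column taken as Waingawa, second as Christchurch Gardens; any transcription slip is mine):

```python
def pearson_r(a, b):
    # Plain Pearson correlation coefficient.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# y1..y12 pairs from the Audit SI listing above.
waingawa = [0.6, 1.3, -0.4, -1.4, -0.2, -0.5, 0.1, 1.1, 0.9, -2.0, -0.3, -2.1]
chch_gardens = [-0.1, 0.6, -0.1, -2.4, 0.3, 0.3, 1.1, 1.3, 0.3, -0.4, -0.6, -1.9]

r = pearson_r(waingawa, chch_gardens)
print(round(r, 2))  # 0.75 on these values, nowhere near 0.97
```

That is far closer to the Audit SI’s 0.68 than to M12’s 0.97; whatever correlation M12 computed for Christchurch Gardens, it does not appear to have been computed from this Y-series.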
>”The CliFlo raw monthly data May 1919 to March 1921 will confirm this paired series or otherwise”
Christchurch Gardens (agent 4858) May 1919 to March 1921
Waingawa Workshop Road (Site 4, agent 2473) May 1919 to Apr 1920 ?
Waingawa Essex Street (Site 5, agent 2473) Jun 1920 to March 1921 ?
OK, got 4858 data from CliFlo easy so can verify the yi series for that.
But what data to extract for Waingawa? 2473 as above? The break is in the middle so I’m not sure I’m doing this right. Got 2473 out anyway.
>”If confirmed, Mullan cannot have done this because there is obviously not a 0.97 correlation of Christchurch Gardens with Waingawa using the above paired series.”
OK, have confirmed both yi series after fixing my date and eye errors.
Mullan, Stuart, and the Weather and Climate peer review have got this wrong.
>”Debating point 6. The de Freitas et al. (2014) paper does not seem to use correct techniques for infilling missing data at Masterton in May 1920.”
Whatever the infilling for May, it makes very little difference to the Y-series, because there’s no change to April and June either side, or to the rest of the series; i.e. non-May months determine the correlation much more than May does. It’s M12’s radical Y-series correlations that throw up the M12 graph, Figure 2: “R&S temperature adjustments as a function of k”, where the May treatment appears to validate an adjustment of a little more than -0.3°C which (only just) excludes zero from the confidence interval (the M10 adjustment of -0.21 includes zero and is therefore invalid).
Better treatment of May doesn’t validate anything; it’s the correlations that are the issue. M12 has to prove the correlations first, but there are only the correlation values for Masterton 1920 — no Y-series data or graph. The Audit SI graphs every Y-series for all adjustments in all 7 locations. I cannot find Supplementary Information for M12 that would enable all 7 locations’ Y-series (at least 20 adjustments) to be compared, Audit SI vs M12.
Debating point 6 is therefore of no consequence; move on to Debating point 8 – that’s the point of contention as I see it wrt M12. Correlations were not applied in M10 at all, of course, so that’s the real point of contention.
>”Debating point 6 is therefore of no consequence”
M12 agrees:
4. Discussion and Conclusions
“……the R&S method can simply
skip over missing months with very little
effect on the final adjustment”
>”Debating point 7. The 11SS is an important record and has not been debunked.”
Poor reasoning, given the 11SS methodology is the same as for the M10 7SS, which has been shredded.