Just three days ago Christopher Booker posted “The fiddling with temperature data is the biggest science scandal ever”, which has collected 21,586 comments (h/t Vincent Gray).
The article is worth reading but most of the comments are not. They are mostly irrelevant and appear to have been mass-produced by opponents of science in order to sabotage the topic (although, like Vincent, I’m not sure about this, as I’m not about to read them all).
Booker’s article describes the examination by Paul Homewood at Not a Lot of People Know That of adjustments to some weather stations contributing to various national temperature records. Homewood concludes the adjustments almost invariably act to increase the warming or decrease the cooling: perhaps up to 35% extra warming is created like this. This spurious warming is the foundation on which the global warming edifice has been built.
Caution is advised and it’s a bit early to be drawing dramatic conclusions.
Still, it’s interesting.
21,000 comments of which many are just trolling by the looks of it.
Impressive, nevertheless
Quite. Which led me to say they “appear to have been mass-produced by opponents.” Little worms.
Pingback: Climategate II? | Scottish Sceptic
>”Booker’s article describes the examination by Paul Homewood at Not a Lot of People Know That”
Paul’s latest is ‘Cooling The Past In New Zealand’, particularly in respect to Gisborne Aero:
https://notalotofpeopleknowthat.wordpress.com/2015/02/09/cooling-the-past-in-new-zealand
I’ve made some comments that are duplicated in ‘Temperature records’ starting here:
https://www.climateconversation.org.nz/open-threads/climate/climate-science/temperature-records/comment-page-1/#comment-1283169
The GISS adjustments to Gisborne Aero are inexplicable: BEST make none over a period in which GISS make 7. Paul has not realized exactly how GISS have adjusted the series; he thought there was just one 0.7 adjustment at 1975 – not so. There are 7 adjustments in 0.1 increments over 1963–2002. I’m hoping Paul will see it from what I’ve presented.
>”Caution is advised and it’s a bit early to be drawing dramatic conclusions.”
I think it is way beyond that now and Booker is right (and WUWT is AWOL). Paul Homewood’s post that triggered it all was in response to GISS’ “hottest ever year”. That post is upthread of the NZ post in ‘Temperature records’:
‘Massive Tampering With Temperatures In South America’
By Paul Homewood, January 20, 2015
One of the regions that has contributed to GISS’ “hottest ever year” is South America, particularly Brazil, Paraguay and the northern part of Argentina. In reality, much of this is fabricated, as they have no stations anywhere near much of this area, as NOAA show below.
Nevertheless, there does appear to be a warm patch covering Paraguay and its close environs. However, when we look more closely, we find things are not quite as they seem.
Continues>>>>>>>
https://www.climateconversation.org.nz/open-threads/climate/climate-science/temperature-records/comment-page-1/#comment-1279772
Following that (see thread above) has been San Diego, Bolivia, Reykjavik, Valentia Observatory SW Ireland, Arctic, and now New Zealand.
GISS has some very difficult explaining to do.
‘Temperature Adjustment Scandal Goes Viral’
By Paul Homewood, February 9, 2015
[See WordPress graph]
The genie is well and truly out of the bottle now.
I have now had more than 50,000 views in the last two days, as the story of temperature adjustments has taken hold. Many, many more will have read the accounts in the media from Britain to America and Australia.
Regardless of the validity or otherwise of some of the adjustments, this is a story that has been hidden from the public for much too long.
https://notalotofpeopleknowthat.wordpress.com/2015/02/09/temperature-adjustment-scandal-goes-viral/
The Scottish Sceptic pingback above provides a list of “likeminded articles”:
http://scottishsceptic.co.uk/2015/02/10/climategate-ii/
For example, Climate Etc: ‘Berkeley Earth: raw versus adjusted temperature data’
BEST – “As we will see the “biggest fraud” of all time and this “criminal action” amounts to nothing.”
My reply – “Could not be more wrong” followed by details of Gisborne Aero BEST vs GISS (in moderation at this time):
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-673008
Yes, and nice to be included. Keep up the good fight, Rich. You’re doing stuff nobody else wants to do or has time for, but who applaud when you’ve done it. I’m finalising the next parcel of the Commissioner’s critique. Here’s hoping it’s not completely upstaged by events elsewhere.
>”The Article is worth reading but most of the comments are not. They are mostly irrelevant and appear to have been mass-produced by opponents of science in order to sabotage the topic”
Not much effect on the poll though. 88% of 114,274 votes agree with me:
http://renderer.qmerce.com/interaction/54d78df0a2eb4c424343c9f9
Can’t remember the options (can’t access them now having voted) but I think I voted that Booker was right. Anyone that hasn’t voted should be able to see the options in the poll embedded in the Booker article.
Yes, that’s true. I voted, too, and noted 88% in favour of “climate scientists have done a poor job” or something to that effect. I obviously should have taken the time to read the comments, but there was so much off-topic rubbish I lost interest.
The poll question asked was “Do you think Scientists have exaggerated the threat of global warming?”
According to a guy (Agnostic) at Climate Etc.
That was it. I had to think for a minute, but I got the answer.
Richard,
88% looks to be a result!
To me, a layman with no quals in science – all of my professional training is in the Arts and Education – but a typical Kiwi bloke who can’t function without a workshop and has to know how stuff works – this scandal has been quite obvious for at least a decade. Climategate 1 was the ‘AHA!’ event for me, particularly Prof Acton of UEA’s comment to one of the lofty gentlemen who chaired one of the fatuous ‘enquiries’ into UEA, namely: ‘You played a blinder, Sir!’
I learned many years ago through bitter experience that beautifully-spoken chaps in nicely-cut suits who have impressive titles and lots of letters after their names can be just as venal and dishonest as Charles Dickens’ Fagin or any one of Fagin’s apprentice pickpockets.
While working in London during the early years of this century I became an active commenter on the Guardian’s CiF, but decided it wasn’t worth the aggro and abuse that was directed at me after challenging obvious enviro activists such as the infamous Bluecloud!
I have become very weary of people, particularly from the Performing Arts world, who regard their own success as a licence to indulge in ridiculous protests launched by various NGOs which are quite obviously rooted in really silly ideas based on unscientific misunderstandings, such as the current opposition to fraccing in the UK.
I am at last becoming hopeful that the very obvious climate scam is finally drawing to a close!
Yes, it’s a great result. Thoughtful comments, Alexander, thanks.
Interesting perspective from a comment at Climate Etc that hadn’t crossed my mind until I read it:
[AK] – “Given that the supposed greenhouse effect of CO2 depends on concentration changes well after pre-1900, at best (heh!) the effects of “BEST’s homogenization” is to offer a tiny bit more credence to the LIA, and interpretations of recent warming as part of the rebound from it.”
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-673544
If we consider the present as “fixed” as per the homogenization process due to better quality control, better sites, AWS, etc., i.e. the reference level is the present, then adjustments that make the past cooler are in effect indicating that the LIA was cooler than we thought, as AK points out. That’s if we subscribe to the process being valid.
This perspective reverses the warmist argument. In effect they’re asserting that the LIA was real, and was much cooler than the present.
Zeke (Hausfather, BEST)
[A] >”Technically the series is cut at breakpoints and every segment is treated as an individual station for the purposes of constructing the underlying temperature field. However, the general point is that it is the mean temperature of the segment, rather than its start or endpoints, that is relevant when combining it with other stations to estimate the temperature field.”
[B] >”We also produce “adjusted” records for each station, though these are not actually used for the Berkeley temperature product and are solely for those interested in data from that specific station. These records are combined by aligning the mean values of each subsegment of the station record, as shown in the example above.”
OK, case study. 1971 break PUERTO CASADO, Paraguay:
http://berkeleyearth.lbl.gov/stations/157455
Segments 1951-1971 (1) and 1971-2006 (2):
http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Figures/157455-TAVG-Alignment.png
I assume that segment (1) in [A] is an “adjusted” series, but an inference can be drawn from [B] that it is NOT adjusted, simply separated from (2). Can you clarify EXACTLY what the process is please Zeke?
The mean difference between (1) and (2) is about 0.8. If this adjustment has actually been made it is non-trivial, i.e. 0.8 is a massive adjustment requiring a rock-solid basis. That basis is the station history, local factors, the statistical methodology and criteria, etc., all of which must be able to be dissected step-by-step.
BOM was caught out by this at Rutherglen. They relied on automation to detect an empirical break. When questioned they couldn’t say from the local station history why the break occurred, i.e. they had neglected human input in their process. They were eventually able to deduce from the records that the site moved from the north side of the rise to the south side. I accept their reasoning. But I don’t accept their adjustment. Max stayed exactly the same but Min changed by over 1.5, I think it was. The adjustment was on the basis of other sites, not the local conditions. A change in Min of that size from one side of a rise to the other when Max remains the same is highly suspect. The methodology could be proved, or otherwise, simply by re-installing an AWS site on the north side and taking at least a year’s data. Nowhere has this ever been done to my knowledge.
[B] definitely states an “adjustment” which I estimate to be about 0.8 above. GISS makes a 0.78 adjustment at the same break. I don’t question the process of adjustment for site moves if the station history details the move or it can be ascertained as at Rutherglen. I do question the rationale of “empirical break” adjustment without recourse to station history. And I certainly question your rationale for a 0.8 adjustment at 1971 Puerto Casado, especially if the adjustment is for [A] in addition to [B] and if you are in fact using segment means.
Surely you are not using the entire respective segment means (1) and (2) to arrive at the 0.8 step? That is ludicrous if so. This is the central issue in the New Zealand 7SS controversy that went to court (NZCSET v NIWA). The Judge’s duty was not to decide questions of science but to decide questions of fact. He failed to do his duty. The NZCSC rigorously followed the established method of Rhoades & Salinger (1993) in compiling the 7SS series; NIWA departed arbitrarily from the R&S93 method, so that they cannot now cite the basis of their method. NIWA’s 7SS is the NZ data for CRUTEM4/HadCRUT4. The NZCSC method has now entered the literature (De Freitas, Dedekind, and Brill, 2014) despite the Judge’s decision against NZCSET.
Point is, the R&S93 method uses statistical accept/reject criteria for k = 1 and k = 2, i.e. 12 and 24 months either side of the break – 1 and 2 years of monthly data. Puerto Casado above is monthly data. Clearly 2 years either side of the 1971 break does not support a 0.8 adjustment. Or any adjustment: the data matches over the 4-year overlap. There was no adjustment in (2) for the similar break at 1987. The segment means are irrelevant.
The statistical accept/reject criteria for R&S93 are in the Appendix here:
‘Statistical Audit of the NIWA 7-Station Review’
https://www.climateconversation.org.nz/docs/Statistical%20Audit%20of%20the%20NIWA%207-Station%20Review%20Aug%202011.pdf
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-673768
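For concreteness, here is a minimal sketch of what a windowed cross-break test of this kind can look like. This is my own Python illustration of the k = 1 / k = 2 idea, not R&S93’s actual formulas; the function name and the accept/reject rule are assumptions:

import numpy as np
from scipy import stats

def cross_break_shift(target, comparator, break_idx, k_months):
    # Difference series between the target station and one comparator.
    d = np.asarray(target, dtype=float) - np.asarray(comparator, dtype=float)
    before = d[break_idx - k_months:break_idx]
    after = d[break_idx:break_idx + k_months]
    shift = after.mean() - before.mean()
    # Welch t-test: is the shift statistically distinguishable from zero?
    _, p = stats.ttest_ind(after, before, equal_var=False)
    return shift, p

# Hypothetical accept/reject rule in the spirit of k = 1 and k = 2:
# accept an adjustment only if the shift is significant in BOTH the
# 12-month and the 24-month windows either side of the break.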
This is fascinating. I especially like “The NZCSC method has now entered the literature (De Freitas, Dedekind, and Brill, 2014) despite the Judge’s decision against NZCSET.” It’s a viewpoint I hadn’t seen, though surely NIWA are moving to remedy their neglected publishing obligation. Certainly at the moment the NZCSC owns the scientific high ground in NZ temperature adjustments, there’s no doubt about that. We wrote the book on it!
Maybe you should read an informed opinion on the Christopher Booker article instead:
http://www.realclimate.org/index.php/archives/2015/02/noise-on-the-telegraph/
According to Mosher, the adjustments don’t make any difference to the overall result, as the adjustments cancel out.
This does rather suggest to me that maybe we should not bother with the adjustments.
At least that would rule out data fiddling if we only used raw data.
Simon,
Rasmus makes valid points, but spoils his case by declining to examine the adjustments. First, he shifts the argument to the GISS dataset then, for some reason, tests the ‘one-way adjustment’ allegation in Paraguay by comparing the GISS data with NACD and Nordli et al. What’s got into him? He has established precisely nothing about the Paraguay data! I’m surprised you haven’t challenged him already!
>”First, he shifts the argument to the GISS dataset”
To be fair, the initial Homewood criticisms were specific to the GISS dataset, not BEST.
It’s been very odd that BEST have leaped to GISS’s defense with a defense of BEST. This has been noted at Climate Etc. Meanwhile, crickets from GISS.
I’ve been busy presenting instances of GISS vs BEST (embarrassing). Nothing from the BEST crew so far, just a manic warmy with idiotic responses (Rooter).
>”According to Mosher, the adjustments don’t make any difference to the overall result, as the adjustments cancel out. This does rather suggest to me that maybe we should not bother with the adjustments. At least that would rule out data fiddling if we only used raw data.”
Yes others saw that too. Mosher shot himself in the foot with that one.
>”….tests the ‘one-way adjustment’ allegation in Paraguay by comparing the GISS data with NACD and Nordli et al”
hidethedecline (@hidethedecline) | February 11, 2015 at 6:25 pm |
Zeke writes: “These records are combined by aligning the mean values of each subsegment relative to the regional expectation (e.g. based on comparisons to nearby stations) of the station record… .”
What Zeke is communicating with us all is this: “Our pre-conceived personal opinion arbitrary regional expectations are the primary parameter of our work. All combining and adjustments must fit our expectations. We derive our expectations from GHCN and GISS. That is why BEST looks just like them. This is called validation in climate science.”
I wonder if any of the BEST guys are wine drinkers. If you know anything about wine you know about terroir – one hill delivers better grapes than the next hill. Locality matters. Deriving a ‘global mean’ that is divorced from locality is something but it’s not science.
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-673744
# # #
Zeke Hausfather (BEST) seems to be saying that the entire segment mean is adjusted relative to “regional expectation” (their circular reasoning) instead of an R&S93-type k = 1 and k = 2 criteria at the breakpoint, which is 2 yrs at the end of the segment, not the segment mean.
I’ve asked for clarification from Zeke but haven’t had an answer addressed to me yet. He did post this:
Zeke Hausfather | February 11, 2015 at 12:26 pm |
A slight correction: I should have said “These records are combined by aligning the mean values of each subsegment relative to the regional expectation (e.g. based on comparisons to nearby stations) of the station record, as shown in the example above”
That does state explicitly that the BEST adjustment method is by the mean value of the segment relative to the “regional expectation”. I’m astounded by this.
I think I’ll ask again in a thread header just to make sure this is the BEST process. If it is, it is entirely bogus.
>”Rasmus makes valid points, but spoils his case by declining to examine the adjustments.”
Same at Climate Etc. Case studies of adjustments at specific breaks where BEST can be compared to GISS and BOM and NIWA are being avoided like the plague by the BEST crew.
But it’s GISS that’s worst. I’ve compared the Auckland treatment by GISS, BEST, NIWA, and NZCSC at Climate Etc. GISS come out looking like fools because they have their final Auckland Airport series beginning in 1879. I pointed out that the Wright Bros’ first flight was in 1903 and Auckland Airport became operational in 1966.
GISS Gisborne Aero is simply fraud (see upthread link to ‘Temperature records’).
RC links to Victor Venema’s post:
http://variable-variability.blogspot.de/2015/02/evil-nazi-communist-world-government.html
[Booker] – “temperature graphs for three weather stations in Paraguay against the temperatures that had originally been recorded. In each instance, the actual trend of 60 years of data had been dramatically reversed, so that a cooling trend was changed to one that showed a marked warming. ”
[Venema] – “Three, I repeat: 3 stations. For comparison, global temperature collections contain thousands of stations. CRUTEM contains 4,842 quality stations and Berkeley Earth collected 39,000 unique stations.”
Venema misses the point entirely (or is that redirects?). It was those 3 Paraguay stations that produced the extrapolated South American warming that enabled GISS to shout that 2014 was warmest ever. The GISS Paraguay adjustments must stand up to intense scrutiny. Venema doesn’t see that, but sceptics are only just getting started on it and it’s not pretty.
Another of Mosher’s arguments is BEST make their adjustments because “algorithm”.
It was pointed out to him that that was not good enough.
RC, This comment has taken hours to finalise, but I think my criticism remains. Rasmus mentions “three hand-picked stations from Paraguay – out of thousands” but not BEST and seems intent on defending the GISS dataset. I don’t have time to analyse the RealClimate article in detail, but it has errors.
For example, it refers to Variable Variability, by Victor Venema, who says, “Three, I repeat: three stations. For comparison, global temperature collections contain thousands of stations. CRUTEM contains 4,842 quality stations and Berkeley Earth collected 39,000 unique stations. No wonder some are strongly adjusted up, just as some happen to be strongly adjusted down.”
So he’s saying that it doesn’t matter if some stations are poorly treated. He misses the point. When a true scientist’s mistake is pointed out he corrects it. He never says it doesn’t matter because he doesn’t know what might matter.
When Rasmus says, “The purports about systematic one-way adjustments can easily be tested by comparing the trends in the GISS data with the independent North Atlantic Climate Data (NACD),” why does he avoid examining the adjustments to those three stations? His comparisons with both Nordli et al. and NACD are irrelevant to the adjustments to the Paraguay record.
>”He [Venema] misses the point”
Yes, he misses the critical points. See my comment re Venema upthread a bit. Note that Venema is neither BEST nor GISS; he is at the Meteorological Institute of the University of Bonn.
>”His [Rasmus] comparisons with both Nordli et al. and NACD are irrelevant to the adjustments to the Paraguay record.”
Yes. They are all studiously avoiding the specific adjustments. Paraguay was critical for GISS and their 2014 warmest year. The 3 sites were smeared over South America and that was where the 2014 boost came from. The difference between the treatment of Puerto Casado by BEST and GISS in the recent adjustments just shows how absurd the certainty is over South America in 2014, let alone globally.
“They are all studiously avoiding the specific adjustments.”
Amazing.
Also according to Mosher, UHI is abrupt (huh?).
Willis Eschenbach thinks this is because BEST scalps the UHI sawtooth drop. So instead of correcting for UHI, BEST make it worse. Not sure if this is a valid criticism though.
Can’t demonstrate this with Auckland because BEST stay with Albert Park instead of moving to Mangere. If BEST corrected Albert Park they would have an Auckland trend about the same as or even less than NZCSC’s.
‘Circularity of homogenization methods’
by David R.B. Stockwell PhD, October 15, 2012
I read with interest GHCN’s Dodgy Adjustments In Iceland [hotlink] by Paul Homewood on distortions of the mean temperature plots for Stykkisholmur, a small town in the west of Iceland by GHCN homogenization adjustments.
The validity of the homogenization process is also being challenged in a talk I am giving shortly in Sydney, at the annual conference of the Australian Environment Foundation on the 30th of October 2012, based on a manuscript [hotlink] uploaded to the viXra archive, called “Is Temperature or the Temperature Record Rising?”
The proposition is that commonly used homogenization techniques are circular — a logical fallacy in which “the reasoner begins with what he or she is trying to end up with.” Results derived from a circularity are essentially just restatements of the assumptions. Because the assumption is not tested, the conclusion (in this case the global temperature record) is not supported.
I present a number of arguments to support this view.
First, a little proof. If S is the target temperature series, and R is the regional climatology, then most algorithms that detect abrupt shifts in the mean level of temperature readings, also known as inhomogeneities, come down to testing for changes in the difference between R and S, i.e. D=S-R. The homogenization of S, or H(S), is the adjustment of S by the magnitude of the change in the difference series D.
When this homogenization process is written out as an equation, it is clear that homogenization of S is simply the replacement of S with the regional climatology R.
H(S) = S-D = S-(S-R) = R
While homogenization algorithms do not apply D to S exactly, they do apply the shifts in baseline to S, and so coerce the trend in S to the trend in the regional climatology.
The coercion to the regional trend is strongest in series that differ most from the regional trend, and happens irrespective of any contrary evidence. That is why “the reasoner ends up with what they began with”.
More>>>>>>
http://wattsupwiththat.com/2012/10/15/circularity-of-homogenization-methods/
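Stockwell’s identity is easy to verify numerically. A minimal sketch of my own, taking his equation literally (as he notes, real algorithms apply only the baseline shifts in D, not the full difference series):

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(600)                            # 50 years of monthly data
R = 0.002 * t + rng.normal(0, 0.3, t.size)    # regional climatology: warming
S = -0.002 * t + rng.normal(0, 0.3, t.size)   # target station: cooling

D = S - R     # difference series used to flag "inhomogeneities"
H = S - D     # homogenise S by the full magnitude of the difference
print(np.allclose(H, R))   # True: the "homogenised" station is just R restated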
Going by Zeke Hausfather so far at Climate Etc, this is directly applicable to BEST.
RC,
“That does state explicitly that the BEST adjustment method is by the mean value of the segment relative to the “regional expectation”. I’m astounded by this.”
Incredible. Let’s keep it up.
>”Incredible. Let’s keep it up.”
My latest attempt to call out Zeke Hausfather is here:
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-673897
Bottom line:
“Therefore, the 0.8 BEST and GISS adjustments to Puerto Casado, Paraguay, 1951-1971 data are invalid. As is the BEST process in this case.”
Check this out:
KARASJOK-MARKANNJARGA http://berkeleyearth.lbl.gov/stations/157290
Brandon Shollenberger writes:
“That’s a funny example. If I’m not mistaken, that is 87 “empirical breakpoints” in 138 years. That’d average out to what, one every 19 months?
What’s really weird to me is how little they matter. It would take all of what, two breakpoints to achieve the same changes?”
There’ll be signals in the KARASJOK-MARKANNJARGA data that correspond to a combination of phenomena: monsoons, ENSO, or something.
BEST have destroyed the signals by regional adjustment.
That’s what regional and global averaging does, it destroys local signals. Case-in-point, the 11 yr solar cycle. Evans and Eschenbach can’t find the signal in temperature because they look at global averages and use the wrong analysis techniques. Meanwhile other analysts have found it at local surface level all over the world and up through the troposphere in the re-analysis datasets.
‘Berkeley Earth: raw versus adjusted temperature data’
Posted on February 9, 2015 | 948 comments
by Robert Rohde, Zeke Hausfather, Steve Mosher
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/
Just short of 1000 comments and it’s only just getting started. Little of it addresses the Homewood/Booker/Delingpole issue, which was in respect of GISS/GHCN.
BEST came out all bushy-tailed, but instead of allaying the controversy, all they’ve done is draw attention to BEST deficiencies. Well, they wanted to get in on the act, they wanted the attention, they’re getting it.
Now that the ticklish questions are arising, I don’t think they’re as enthusiastic. Mosher’s still active but steadfastly avoiding questions re the specific adjustments. If they don’t address them with very good answers, and they will have to be good, BEST’s credibility is shot.
Tonyb is asking what he could give Booker or Delingpole or any other journalist from the BEST response or the comments that would address the concerns about Paraguay, Arctic, etc. There’s nothing yet.
Just the opposite, more issues raised than before.
>”Just short of 1000 comments”
A lot of waffle as usual, unfortunately. Makes it difficult to get to the crux. Which might save BEST.
This time. Maybe.
RC,
I haven’t followed the details, but this might help. You might have seen Bob Dedekind’s post at WUWT describing how automatic corrections create a rising trend.
Yes, I was aware of Bob’s post, RT, but thanks for the refresher. I’m just not sure the case is valid against BEST in particular, but I don’t know the details of the Mosher-Willis exchange either – it’s ongoing. Mosh is challenging for examples from the BEST datasets but I haven’t seen any examples presented yet.
This is one of the peripheral issues bubbling away but I’m not convinced (yet) by my vague grasp of the BEST situation that BEST is actually making the situation worse – I really don’t know.
It’s a minor issue in my opinion (don’t want to engage in it) when compared to the “empirical break” situation. In BEST, GISS, BOM, and probably NCDC US (CONUS I think), empirical breaks are the dominant adjustments, but I haven’t looked at the US situation. In the US it’s Menne & Williams (2009), which BOM adapts for ACORN in Australia. So what goes for Australia probably goes for the US too.
Gonzo [Climate Etc]
“How can any USA data set which shows the 1930’s as being cooler than supposedly the hottest decade and hottest year evah reconcile with this heat wave index of the US?”
http://www.epa.gov/climatechange/images/indicator_figures/heat-waves-figure1.gif
Yes, how?
Does a heat wave necessarily affect the annual average? Or can it hang around a few weeks, harm a lot of people and leave without raising it much? It depends a lot on how the heat wave index is constructed.
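For rough scale (my own illustrative arithmetic): a fortnight-long heat wave averaging 5 °C above normal moves the annual mean by only about 14 × 5 / 365 ≈ 0.2 °C, so a severe heat wave index and a modest annual anomaly can coexist.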
Wow, that graph has been well-hidden!
It’s impressive all right.
>”It depends a lot on how the heat wave index is constructed.”
Problem in OZ is that they keep redefining the index to accommodate recent events that wouldn’t register in the old index. Lot on this at JoNova. No sign of that in the US EPA graph.
>”Does a heat wave necessarily affect the annual average?”
It must, because the annual average aggregates daily measurements which would include sites within the heat event. The thing is to relate the heat index to monthly rather than annual. Oddly, 1936/7 is exactly opposite to the heat index in CONUS Monthly:
http://s29.postimg.org/473ylf2c7/Conus_USHCN_vs_2002_Version_Apr14.png
There must have been very, VERY cold days in those months to counteract the heat waves.
Either that or the heat waves were localized enough not to affect the continent-wide US temperatures. Or something.
BTW, note the change of trend between CONUS V 2002 and V2 Current. They managed to cool the past a bit.
CONUS from beginning of 2005:
https://wattsupwiththat.files.wordpress.com/2014/06/uscrn_average_conus_jan2004-april20141.png
Which was why the warmest year evah had to be found in Central America from 3 sites in Paraguay.
As to the globe, heat waves and cold snaps make no difference. As to a continent, they may make no difference, but as to an island they probably make all the difference. Not usually too important in the larger view, though.
Central [South] America
‘Germany’s Warming Happens To Coincide With Late 20th Century Implementation Of Digital Measurement’
By P Gosselin on 14. January 2015
The last couple of days I posted on an 8.5 year side-by-side test conducted by German veteran meteorologist Klaus Hager, see here and here. The test compared traditional glass mercury thermometer measurement stations to the new electronic measurement system, whose implementation began at Germany’s approximately 2000 surface stations in 1985 and concluded around 2000.
Hager’s test results showed that on average the new electronic measurement system produced warmer temperature readings: a whopping mean of 0.93°C warmer. The question is: Is this detectable in Germany’s temperature dataset? Do we see a temperature jump during the time the new “warmer” system was put into operation (1985 – 2000)? The answer is: absolutely!
– See more at: http://notrickszone.com/2015/01/14/germanys-warming-happens-to-coincide-with-late-20th-century-implementation-of-digital-measurement/
Temperature Monitoring Station: AUGSBURG (BEST) http://berkeleyearth.lbl.gov/stations/14205
Worth a look.
For the record:
richardcfromnz | February 11, 2015 at 12:25 am |
Rooter
>”Homewood did of course not mention that.”
Homewood did of course not mention that because he was referring to GISS/GHCN, not BEST.
The BEST site move adjustments are 1971 (about 0.8) and 2006 (about 0.6) ish, total cumulative adjustment say 1.4. Only 2 adjustments made.
http://berkeleyearth.lbl.gov/stations/157455
GISS/GHCN raw and final:
http://data.giss.nasa.gov/tmp/gistemp/STATIONS/tmp_308860860004_1_0/station.txt
http://data.giss.nasa.gov/tmp/gistemp/STATIONS/tmp_308860860000_14_0/station.txt
GISS/GHCN 1951 total cumulative adjustment -1.7, about 0.3 greater than BEST
1967 cum adj -1.78, 1972 cum adj is -1.0 (missing data at 1971). The 1971 site move step is therefore 0.78, same as BEST.
Except if you look at the raw data in the 5 yrs immediately prior to 1971, the trajectory of the data was on the way down and actually matches the data on the other side of 1971 at 1971. So why was a -0.8 adjustment required by both GISS/GHCN and BEST when the data was little different immediately either side of 1971?
There was a bigger break in the opposite direction at about 1987 that GISS/GHCN adjust for but BEST does not (see # below). And another at 2004 that neither GISS/GHCN nor BEST adjust for.
1980 cum adj is -0.3, about 0.3 less than BEST
1985 cum adj is +0.11, opposite to BEST (# see above)
1990 cum adj is -1.15, about 0.5 more than BEST
1995 cum adj is -0.65, similar to BEST
2004 cum adj is -0.01, about 0.6 less than BEST
2006 cum adj is -0.01, about 0.6 less than BEST
Missing data precludes nominal 5 yr intervals.
GISS/GHCN do not make the -0.6 2006 site move adj that BEST makes. But they obviously make several more adjustments than BEST do.
Now do you see the problems here Rooter?
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-673472
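The “cum adj” figures above are just the two GISS series differenced. A minimal sketch of the bookkeeping (hypothetical numbers, not the actual Puerto Casado values):

import numpy as np

# Aligned annual values for one station (deg C); in practice these would
# be parsed from the GISS raw and final station pages linked above.
raw   = np.array([24.1, 23.9, 24.3, 23.8, 24.0])
final = np.array([22.4, 22.2, 22.6, 23.0, 23.2])

# The cumulative adjustment at each year is simply final minus raw;
# a step in this series locates an individual break adjustment.
adj = final - raw
print(adj)   # [-1.7 -1.7 -1.7 -0.8 -0.8] -> a 0.9 step between years 3 and 4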
For the record (also in ‘Temperature records’):
richardcfromnz | February 10, 2015 at 4:30 am | Reply
>”As we will see the “biggest fraud” of all time and this “criminal action” amounts to nothing.”
Could not be more wrong. See Paul Homewood’s post on New Zealand, in particular the GISS vs BEST comparison of Gisborne Aero adjustments starting here:
https://notalotofpeopleknowthat.wordpress.com/2015/02/09/cooling-the-past-in-new-zealand/#comment-37569
BEST make no adjustments over the period of GISS adjustments, 1963 – 2002:
GISBORNE AERODROME AWS
Breakpoint Adjusted Annual Average Comparison
http://berkeleyearth.lbl.gov/stations/157058
GISS make 7 adjustments over that period:
At 1963 the cumulative adjustment is 0.7
At 1968 the cumulative adjustment is 0.6
At 1972 the cumulative adjustment is 0.5
At 1975 the cumulative adjustment is 0.4
At 1980 the cumulative adjustment is 0.3
At 1982 the cumulative adjustment is 0.2
At 1986 the cumulative adjustment is 0.1
At 2001 the cumulative adjustment is 0.1
At 2002 the cumulative adjustment is 0.0
For example, in GISS monthly adj series (see graph below for raw monthly),
The GISS Gisborne Aero 1973 cumulative adjustment is 0.5
1973 monthly raw (top) vs adjusted (bottom)
19.4 18.5 16.2 14.6 12.7 10.0 8.6 10.5 12.3 14.2 17.2 17.2
18.9 18.0 15.7 14.1 12.2 9.5 8.1 10.0 11.8 13.7 16.7 16.8
0.5 difference for every month (December rounds to 0.4)
The 1974 – 1977 range of common cumulative adjustment is 0.4
1974 monthly raw (top) vs adjusted (bottom)
17.7 20.6 15.1 14.8 11.2 10.1 10.1 8.9 12.1 13.6 15.5 17.8
17.3 20.2 14.7 14.4 10.8 9.7 9.7 8.5 11.7 13.2 15.1 17.4
0.4 difference for every month
1977 monthly raw (top) vs adjusted (bottom)
18.4 18.9 17.8 14.5 10.9 10.1 9.4 10.4 10.2 13.4 14.9 17.5
18.0 18.5 17.4 14.1 10.5 9.7 9.0 10.0 9.8 13.0 14.5 17.2
0.4 difference for every month (December rounds to 0.3)
The 1978 cumulative adjustment is 0.3
1978 monthly raw (top) vs adjusted (bottom)
19.2 19.5 17.6 16.4 12.0 10.0 9.7 10.3 11.3 12.0 16.0 18.0
18.9 19.2 17.3 16.1 11.7 9.7 9.4 10.0 11.0 11.7 15.7 17.7
0.3 difference for every month
Apparently, according to GISS (but not BEST), there were 2 distinct 0.1 steps from 1978 to 1977 and from 1974 to 1973. Similarly for the other ranges of common cumulative adjustments.
There is no justification for these 2 steps (or the others) in view of the raw monthly data series (and no site moves): http://climate.unur.com/ghcn-v2/507/93292-zoomed.png
GISS has some explaining to do.
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-673008
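The constant-offset pattern is trivially checkable. A quick sketch of my own using the 1973 figures quoted above:

import numpy as np

raw_1973 = np.array([19.4, 18.5, 16.2, 14.6, 12.7, 10.0, 8.6, 10.5, 12.3, 14.2, 17.2, 17.2])
adj_1973 = np.array([18.9, 18.0, 15.7, 14.1, 12.2, 9.5, 8.1, 10.0, 11.8, 13.7, 16.7, 16.8])

print(raw_1973 - adj_1973)
# 0.5 for eleven months and 0.4 for December (presumably rounding in the
# published series): a flat offset, not a weather-dependent correction.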
>”The BEST site move adjustments are 1971 (about 0.8) and 2006 (about 0.6) ish”
Refers to PUERTO CASADO, Paraguay:
http://berkeleyearth.lbl.gov/stations/157455
I have watched the wee act that Mosher and Hausfather perform for some years. Mosher particularly is a master of the crushing put-down but tends to go a bit silent/evasive if he has not got an adequate rejoinder.
I suspect that neither of them are doing Good Stuff, however anyone wishes to define this.
When BEST was launched with great fanfare my BS meter flickered madly. I thought they would be worth keeping a very sceptical eye upon.
At least we know BEST’s rationale Alexander, even if the actual scalpel “algorithm” is a bit mysterious. Their “regional expectation” and “empirical break” approach is bogus, but at least we know what it is and why. And their website is very good. “Raw” is not necessarily raw though, and “Site move” is not necessarily a site move either.
GISS is another story. I’ve never seen the GISTEMP homogenization and adjustment methods laid out or scrutinized anywhere (not to say it hasn’t been done somewhere). There’s probably a paper (one of Hansen’s I’m guessing) but whatever it is, there’s not much knowledge of it. I’ll look it up eventually.
Problem is, GISS appear to have made all manner of adjustments, some match BEST but most don’t. Some are the GHCN adjustments, some are post-GHCN. And their post-GHCN Gisborne Aero is bizarre, for example.
BEST are the guys doing the PR; GISS, by contrast, are silent about GISTEMP.
Case of devil-you-know (a bit) vs devil-you-don’t (at all).
Continuing the Puerto Casado, Paraguay, 1971 “Station Move” break case study.
The nearest stations spanning the break AND the segments 20 yrs either side (because the first segment is 20 yrs; the second segment is longer) are:
Station Name, Distance (km), Earliest, Most Recent
PONTA PORA, 226.08, Jan 1949, Jul 2013
http://berkeleyearth.lbl.gov/stations/152857
BAHIA NEGRA, 230.19, Jan 1941, Jan 2013
http://berkeleyearth.lbl.gov/stations/157457
MARISCAL ESTIGARRIBIA, 280.89, Jan 1951, Oct 2013
http://berkeleyearth.lbl.gov/stations/157456
Next station satisfying the criteria is CORUMBA BRAZIL, 362.68 km away. The above 3 stations represent the nearest “regional expectation” for Puerto Casado.
Keep in mind that both BEST and GISS have made a large 0.8 adjustment at 1971 requiring considerable justification. It would be good if the “Station Move” could be confirmed, or otherwise, from the station history for example. And a statistical cross-break analysis of the 24 months either side as required by R&S93 (method presented upthread) would be good too. BEST don’t indulge in those tests so let’s look at their “regional expectation” data instead.
Ponta Pora has an “empirical break” of about 0.8 half way along the first Puerto Casado segment.
Bahia Negra is a dog’s breakfast. “Record Gap” 1953–1956 and a 0.5 “empirical break” at 1968 right next to the 1971 break, both of which are in the first Puerto Casado segment. And a 0.8 “empirical break” at 1973 right next to the 1971 break and a huge “Record Gap” 1978–1990, both of which are in the second Puerto Casado segment.
Mariscal Estigarribia has a 1.2 “empirical break” at 1970 in the first Puerto Casado segment right next to the 1971 break. And a 0.8 “empirical break” at 1982 in the second Puerto Casado segment.
Apart from that, the “regional expectation” for BEST’s 0.8 adjustment to Puerto Casado 1971 looks to be rock solid (Sarc).
I have no idea what the justification for the equivalent GISS adjustment is.
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-674475
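To make “regional expectation” concrete, here is a toy version of my own using the three neighbours and distances listed above. To be clear about the assumption: BEST’s real field is kriged from up to 300 stations within 2500 km (see the exchange below), not a three-station inverse-distance average.

import numpy as np

def regional_expectation(neighbor_anoms, distances_km):
    # Inverse-distance-weighted mean of neighbour anomaly series:
    # a simplified stand-in for BEST's kriged field, illustration only.
    w = 1.0 / np.asarray(distances_km, dtype=float)
    return np.average(neighbor_anoms, axis=0, weights=w)

# rows of neighbor_anoms: Ponta Pora, Bahia Negra, Mariscal Estigarribia
# distances_km = [226.08, 230.19, 280.89] (from the comment above)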
Re Puerto Casado 1971 break adjustment above.
Begs the question: were each of Ponta Pora, Bahia Negra, and Mariscal Estigarribia adjusted before or after they were all used as the primary “regional expectation” for Puerto Casado 1971?
Which came first, the chicken or the egg?
‘Are heatwaves in Australia becoming more frequent, hotter or lasting longer?’
By: Geoffrey H Sherrington
We test this hypothesis:
Heatwaves in Australia are becoming more frequent, hotter and are lasting longer because of climate change.
(The claim was made in a Climate Council report of Jan 2014. From other publications, it seems to be perceived wisdom among authorities from the Australian Bureau of Meteorology and CSIRO, who help to guide national policy.)
[…]
The result is a mixed bag. This method of analysis suggests that later heatwaves have been dominantly longer in Perth, because only 1 case appears before the half-way point. Conversely, early heatwaves are dominant in Hobart, with not a strong signal in Adelaide, Melbourne or Sydney.
The hypothesis that heatwaves are becoming longer is not supported by this analysis of these 5 important cities.
CONCLUSIONS
It is easy to raise objections to the methodology of this analysis.
It is not easy to explain why perceived wisdom supports the opening hypothesis of longer, hotter and more frequent heatwaves, when this simple exercise falsifies it in the first instance.
http://joannenova.com.au/2015/02/heatwaves-in-australia-not-longer-not-more-common-why-wont-bom-and-abc-say-that/
Get your head around this:
Steven Mosher commented on Berkeley Earth: raw versus adjusted temperature data.
in response to Ragnaar:
The pictures everyone is drawing, from the first sawtooth figure to this one, misrepresent what the algorithm does even though
1. That algorithm is described in the papers
2. The code exists.
Back on Sept 7 2007 (or so) when Hansen released his code, there was a whole team of skeptics who looked at the code, got it running (partially), and some even implemented their own versions in different languages – Jean S in Matlab, SteveMc in R. In other words we DID WORK to understand what the code was doing (see SteveMc on Hansen step 1 and step 2).
This is why releasing code is important.
A) so that people can SEE what you did. Not draw pictures of what they Think you did
B) so that people can build on, criticize, test, retest, what was done.
C) to free researchers from the task of holding class to explain what they did. Early on people argued that Hansen and Jones should NOT release code, because if they did, then skeptics would pester them with demands to EXPLAIN THE CODE. The code is the explanation. As I argued in 2007 and continue to argue today, the code is the best explanation.
That said I will give you a brief overview of what the algorithm does.
Step one.
A temperature field is created: T = C + W, where T = temperature, C = climate, and W = weather. Climate is a function of Lat, Alt and Time, and weather is the residual, which is kriged.
Next. A station is assigned a quality based on its agreement with the EXPECTED FIELD
“In addition to point outliers, climate records often vary for other reasons that can affect an individual record’s reliability at the level of long-term trends. For example, we also need to consider the possibility of gradual biases that lead to spurious trends. In this case we assess the overall “reliability” of the record by measuring each record’s average level of agreement with the expected field at the same location.”
Each station is given a rating based on its agreement with the expected field.
They are NOT compared to neighbors. So every chart you see like yours is wrong from the start, and everyone who shows this kind of chart proves to me that they haven’t read the paper or the code. The code is provided to PREVENT critics from wasting their time with strawmen. It is provided to ENCOURAGE STRONGER arguments against the method.
Now, after the weights for each station are determined they must be fed into the estimate for W! Recall T = C + W.
90%+ of the temperature for a location is determined by the latitude and altitude of the station. The remainder, the residual is W or the weather.
However, we know that W contains more than Weather. It contains Weather and station bias. So we seek to MINIMIZE the bias in the weather by applying the quality weights.
At this stage we recalculate the weights for kriging the weather.
Stations ARE NEVER ADJUSTED. PERIOD.
stations are not compared to their neighbors. They are compared to the field created by C+W. They are then given weights by the level of their agreement. Next, The WEATHER FIELD is recalculated.
And in the end you have T=C+W1 where W1 is the adjusted WEATHER.
This approach AS WE NOTE has several assumptions. Those assumptions actually tell people where the best counter arguments are.
“Implicit in the discussion of station reliability considerations are several assumptions. Firstly, the local weather field constructed from many station records is assumed to be a better estimate of the underlying temperature field than any individual record was. This assumption is generally characteristic of all averaging techniques; however, this approach cannot rule out the possibility of large scale systematic biases. Our reliability adjustment techniques can work well when one or a few records are noticeably inconsistent with their neighbors, but large scale biases affecting many stations could cause the local comparison methods to fail.”
What’s that mean? If you had a local weather field that had a lot of UHI bias in it, and a few rural records, the approach would downweight the rural. If you had a lot of microsite bias in the weather field and only a few reliable stations in an area, the good stations would be downweighted. In short, we explain for people where their best argument is, and that happens to be the best argument that someone like Anthony Watts gets, or the guy who works with him, Evan Jones.
Next assumption is that station quality is constant over time. It’s not, and this is where the scalpel comes in. We slice records, but slicing doesn’t change a record. Slicing does ONE THING. When a record is sliced we simply ask the question: over this segment, did the record quality change?
“Secondly, it is assumed that the reliability of a station is largely invariant over time. This will in general be false; however, the scalpel procedure discussed in the main text will help here. By breaking records into multiple pieces on the basis of metadata changes and/or empirical discontinuities, it creates the opportunity to assess the reliability of each fragment individually. A detailed comparison and contrast of our results with those obtained using other approaches that deal with inhomogeneous data will be presented elsewhere.”
Points.
1. Stations are NOT ADJUSTED.
2. Stations are given quality weights depending on their agreement with the field.
3. Quality ratings are calculated by segment. If a station moves we ask “does this change agreement with the field?”
4. The weather field, which is the residual after climate is removed, is ADJUSTED.
5. The final T= C+W is calculated.
After that is done, you can ask the question.
Suppose this station had the highest quality – what would it have looked like?
That is a prediction of what a perfect station would have looked like.
The field is not constructed from adjusted stations.
stations are not adjusted.
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-675203
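As I read Mosher’s description, the iteration looks something like the toy sketch below. This is my own paraphrase in Python: a plain weighted average stands in for kriging, and the exact agreement score is an assumption.

import numpy as np

def reweight_weather(weather_resids, n_iter=3):
    # weather_resids: array (n_stations, n_months) of station residuals
    # after the climate term C(lat, alt, time) has been removed.
    field = weather_resids.mean(axis=0)   # first guess at the weather field
    weights = np.ones(weather_resids.shape[0])
    for _ in range(n_iter):
        # Stations are never edited; each gets a quality weight from its
        # agreement with the expected field...
        misfit = np.mean((weather_resids - field) ** 2, axis=1)
        weights = 1.0 / (1.0 + misfit)
        # ...and the weather field W is then re-estimated with those
        # weights (a weighted average here; kriging in the real thing).
        field = np.average(weather_resids, axis=0, weights=weights)
    return field, weights   # final T = C + field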
Methinks Mosher speaks with forked tongue.
Mosher – we change the weather, not the station.
Not sure I really follow this either
[Me] – “Next station satisfying the criteria is CORUMBA BRAZIL, 362.68 km away. The above 3 stations represent the nearest “regional expectation” for Puerto Casado.”
[Mosher] – “Wrong.”
Because……….?
Zeke has said upthread that the entire segment means either side of an “empirical break” at a station are what is compared to the “regional expectation”. These segment means are graphed for every BEST station and show the relative segment differences when the series is reconstructed, i.e. the series is not reconstructed for the BEST process, but BEST has reconstructed each site “solely for those interested in data from that specific station”. Zeke:
[A] >”Technically the series is cut at breakpoints and every segment is treated as an individual station for the purposes of constructing the underlying temperature field. However, the general point is that it is the mean temperature of the segment, rather than its start or endpoints, that is relevant when combining it with other stations to estimate the temperature field.”
[B] >”We also produce “adjusted” records for each station, though these are not actually used for the Berkeley temperature product and are solely for those interested in data from that specific station. These records are combined by aligning the mean values of each subsegment of the station record, as shown in the example above.”
Zeke Hausfather | February 11, 2015 at 12:26 pm |
A slight correction: I should have said “These records are combined by aligning the mean values of each subsegment relative to the regional expectation (e.g. based on comparisons to nearby stations) of the station record, as shown in the example above”
So now we know to look at the quality of the segments of the comparator stations that make up the “regional expectation” that correspond to the target station segments either side of the break in order to ascertain whether the break is valid, or not.
In respect to Puerto Casado,
1) There are no other stations nearer than the 3 above to the (supposed) Puerto Casado 1971 break that satisfy the overlap criteria, i.e. have data. These are the “neighbouring” or “nearby” stations as Zeke terms them.
2) Given 1), the 3 “neighbour” stations above MUST represent the NEAREST climatology to Puerto Casado. But that’s still no guarantee that the microclimates are similar. To determine comparator-target compatibility, a statistical cross-break analysis like R&S93 upthread is required. This is the root of the NZCSC v NIWA controversy in NZ. NZCSC adhere to the methodology, NIWA doesn’t. BEST do not even apply such a test in any way.
3) Going farther away from the 3, e.g. CORUMBA BRAZIL, 362.68 km away and farther still, becomes less and less relevant and more and more removed from the microclimate of Puerto Casado UNLESS the test in 2) is carried out and satisfied for “remote” stations.
And again, is the comparator station data that corresponds to the cross-break segments of the target, a raw series or a reconstructed series as above (cite/quote the process documentation)?
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-675283
My reply to Brandon Shollenberger:
Brandon
>”BEST uses the nearest 300 stations within 2500km.”
Yes I realize this but for this case study it makes sense to start at the nearest (“nearby” – Zeke) and work out to look at the quality, or otherwise, of the “regional expectation” for the stations with highest weighting. Preferably well within 100km (2500km is absurd), but when that’s not available then it’s first nearest and so on.
>”That likely means far more than the three stations you list would be used. Individuals ones might not get as much weight in the calculations as the others since BEST de-weights stations by distance,”
Yes, as above.
>”but the total effect of them could easily be greater than the effect of the three stations you mention.”
If so, the local climatology is discarded, all local climate signals destroyed.
>”That said, it’s important to understand stations don’t need to cover 20 years on either side like your restriction implies.”
The case study is Puerto Casado 1971. As I’ve explained upthread, the earlier segment is 20 years (1951–1971), the later segment longer. I’ve used 20 yrs because Zeke has stated explicitly that the means of the target segments are applied to the regional expectation; the earlier segment is 20 yrs, the later longer, so 20 it is for the purposes of this case study. But when you look at the comparator station segments that correspond to the target segments (e.g. 1951–1971), the quality of the comparators is not good (to say the least) by this methodology.
>”Breakpoints can be estimated in the BEST homogenization process with far less than 40 years of overlap. There are probably nearby stations which were used you didn’t list.”
Well yes. And other methodologies far less too. R&S93 stipulates 12 and 24 months either side for the adjustment statistical accept/reject criteria.
Except again, Zeke has explicitly stated that it is the segment means rather than the start or end points of the segments (e.g. 24 months as above) that BEST applies. Zeke:
“…..the general point is that it is the mean temperature of the segment, rather than its start or endpoints, that is relevant when combining it with other stations to estimate the temperature field.”
If there is a methodology that statistically determines break adjustments with only 24 months of data (2 yr overlap) from a handful of local stations (i.e. “neighbouring”, “nearby”) and the odd remote one, then going to 10, 20 or 40 yr overlap with “remote” comparator stations up to 2500km away is bizarre.
In the NZ controversy, it is NIWA’s departure from the R&S93 method (along with non-UHI adjustment and annual data instead of monthly), i.e. preferring remote stations and say 10 yr overlaps to neighbouring stations and 2 yr overlap, that produces a trend in the NZ 7SS that is 3 times greater than by applying the R&S93 criteria of neighbouring stations and 2 yr overlap as followed by NZCSC in their ‘Statistical Audit’ of the 7SS. An audit which is now in the literature as De Freitas, Dedekind, and Brill (2014), as I’ve mentioned previously, despite the judge’s decision in NZCSET v NIWA.
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-675303
>”I’ve made some comments that are duplicated in ‘Temperature records’ starting here:
https://www.climateconversation.org.nz/open-threads/climate/climate-science/temperature-records/comment-page-1/#comment-1283169”
Nested comments have been removed since and the system moved the comment to page 2 of ‘Temperature records’. New link is:
https://www.climateconversation.org.nz/open-threads/climate/climate-science/temperature-records/comment-page-2/#comment-1283169
I’ve linked this thread and the BEST Puerto Casado case study to Paul Homewood’s original GISS Puerto Casado post here (with explanations and pointing to BEST-GISS comparisons):
https://notalotofpeopleknowthat.wordpress.com/2015/01/20/massive-tampering-with-temperatures-in-south-america/#comment-38198
Both BEST and GISS have to survive this one case study or their respective methods crash and burn. Then we move to the next case study which is BEST’s other empirical break at Puerto Casado (2006) that BEST adjusts for but GISS doesn’t i.e. someone survives, the other crashes and burns.
Karl Becker, February 9, 2015 4:25 pm
Paul,
Kudos to you for highlighting shoddy data analysis. I dug into this (just a little) and it is even worse than you thought!
I am familiar with NASA’s Goddard Institute for Space Studies (GISS) Surface Temperature Analysis (STA) data which is the data source referenced here. Your point that in South America, there were historically not a lot of reporting stations is very well taken. And, as with all the surface temperature data stations, measurements have a lot of variation due to local weather. Finally, you sometimes move stations or upgrade sensors, and that affects the measurements. So typically you would correct the data by (1) considering neighboring measurements or (2) applying a correction factor based on a time series comparison before and after the known move/upgrade.
So around 2008-2010 NASA made a change: “Originally, only documented cases [of sensor changes] were adjusted, however the current procedure used by NOAA/NCDC applies an automated system that uses systematic comparisons with neighboring stations to deal with undocumented instances of artificial changes.”
So no neighboring stations exist in South America, so how would you determine these errors? “However, it is likely that the largest contribution to the margin of error is given by the temporal and spatial data gaps. That particular margin was estimated as follows: All computations were first made replacing the observed data by complete model data. Then the calculations were repeated after discarding model data where the corresponding observations were missing. Comparisons of the two results were used to obtain an estimate for that margin of error.” [ibid]
In simplistic terms where there are no neighbouring stations to compare to, GISS uses the model data as a stand-in for those neighbors. And the model has built-in temperature escalation. That is extremely poor Data Analysis, and likely means the entire STA results set has an upwards bias.
The Data Scientist
https://notalotofpeopleknowthat.wordpress.com/2015/01/20/massive-tampering-with-temperatures-in-south-america/#comment-37521
# # #
I think this explains the post-GHCN GISS adjustments to Gisborne Aero in NZ (see upthread).
My reply to Steven Mosher:
Steven
>”1. you have ad hoc rules.”
Not so. For a case study of a specific break, in this case Puerto Casado 1971, the rules are as per Zeke’s process, i.e. the means of the respective segments are the target station’s data used to compare to the comparator stations. The 1951–1971 segment is 20 years, the 1971+ segment is longer, so what is wrong with 20 yrs? And Zeke explicitly uses the term “nearby”, so that’s obviously where to start.
>”2. you havent done any sensitivity analysis…..”
I presented a sensitivity analysis long ago upthread (R&S93) that is applied to the 24 months of data either side of the break. Zeke explicitly states that BEST don’t consider the “end points” of the segments. Whole homogenized temperature series are compiled by statistical analysis of the “end points” only and accepted, for example, by CRU for CRUTEM4 (e.g. NZ 7SS). Are you dismissing ALL end point analysis (i.e. all the supporting literature) in your preference for segment mean analysis?
>”3. you havent tested your method on other parts of the world”
That’s next. Both BEST and GISS have to survive this one case study (Puerto Casado 1971) or their respective methods crash and burn. Then we move to the next case study which is BEST’s other empirical break at Puerto Casado (2006) that BEST adjusts for but GISS doesn’t i.e. someone survives that, the other crashes and burns.
>”4. you havent tested your method against synthetic data to see how it performs.”
Real data will suffice in the meantime.
>”The NZ NWS had humans sit down and select stations.”
No, Salinger selected 7 stations. NIWA adopted the series, as did CRU.
>”They had humans make decisions about adjustments.”
No, the humans in NIWA’s Report On The Review of the 7SS made arbitrary decisions about adjustments that were contrary to the established methodology of R&S93. Consequently NIWA are now unable to cite the basis of their method.
>”A second group also had humans sit down and select stations and make adjustments”
Wrong. The NZCSC in their ‘Statistical Audit’ of NIWA’s 7SS used EXACTLY the same stations as NIWA (there are 7 established for the 7SS). Then the NZCSC applied the established methodology of R&S93 to EXACTLY the same breaks as NIWA, but the established method returned different adjustments from those NIWA made.
>”Then the groups used different methods.”
Exactly. This is the point I made above. NIWA departed arbitrarily from the established R&S93 method, preferring remote comparator stations to neighbouring ones, and overlaps in excess of 2 yrs, similar to BEST.
>”A) we use all the data.”
No you don’t. In NZ, not even close. Examples: in your Auckland series you stay with the UHI/sheltering-contaminated Albert Park as your primary series, not pulling in Mangere Treatment Station and Mangere Airport. Both NIWA and NZCSC drop Albert Park and move to MTS and MA. For Hamilton, NIWA has Ruakura Research in their 11SS but BEST doesn’t pull it in. Instead you pull in Auckland and Tauranga, both of which are coastal microclimates; Hamilton is hinterland. Both of these examples have already been described way upthread.
>”B) we use an “adjustment” approach that allows us to.
1. establish rules and use them for every case…..”
Yes, that’s what we’re investigating, starting with a case study of just one simple, single break. You have to survive this first, as does GISS, before we move on to the rest of Puerto Casado, which gets problematic when BEST adjusts but GISS doesn’t, and vice versa.
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-675331
[Me] – “2) Given 1), the 3 “neighbour” stations above MUST represent the NEAREST climatology to Puerto Casado.”
[Mosher] – “The climate is different than the weather.”
You’re seriously saying that 20 years either side of the break, a 40-year overlap, is weather, not climate?
Isn’t 30 years the convention for climate?
Now we have climate convention change.
BTW, you still haven’t answered this:
Which raises the question: were each of Pora Pora, Bahia Negra, and Mariscal Estigarribia adjusted before or after they were all used as the primary “regional expectation” for Puerto Casado 1971?
Which came first, the chicken or the egg?
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-675335
Me replying to Brandon Schollenberger:
Brandon
>”…BEST can find a breakpoint by examining a 10 year period even if that 10 year period is within a 40+ year period it doesn’t find a breakpoint in. BEST didn’t just compare 1951 on when looking for that breakpoint. It compared 1951 on, 1952 on, 1953 on, etc.”
Firstly, be careful not to conflate break detection with subsequent adjustment accept/reject criteria.
Secondly, 1971 is NOT an “empirical break”, it is a “Station Move”:
http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Figures/157455-TAVG-Alignment.png
A station move is usually determined from the station history. I don’t know how BEST identifies station moves such as this, but I don’t think it is from the station history, i.e. their primary detection is by automated break-point analysis. Why and how a break detected in this way can be attributed to a station move without recourse to the station history is a mystery to me. But I’d like to be enlightened.
Thirdly, the length of the period prior to the breakpoint is VERY relevant to the adjustment process in this case. Look at the reconstructed series above: the mean of the segment prior to 1971 (the target series) is the mean of the ENTIRE 20 yrs of the segment – NOT the 5 years that would be used by a 10 yr break-detection period. And I repeat, 2 yrs of monthly data either side of a break is sufficient for statistical adjustment accept/reject criteria.
>”I believe the minimum period BEST can use when searching for breakpoints is 10 years. If so, they only need 10 years of data around a point to decide there’s a breakpoint there. That means you’d need to include almost any stations which have data for the year of the breakpoint”
Fine. That’s to detect a break in the target series (and see below re the comparator data adjacent to the break). But that’s NOT how BEST make the adjustment. To do that they split off the ENTIRE segment (20 yrs in this case – NOT 5 yrs), take the mean, and that’s the target data for the “new” station. So the comparator data must correspond to the 20 yrs of target data 1951 – 1971. It’s the corresponding comparator data that I’ve identified upthread from the “nearby” stations. And it’s not good. Just for starters, Pora Pora has a break right in the middle of the corresponding segment.
And I hope you’ve taken a long hard look at the comparator data where local breaks occur right next to the 1971 target break but on either side i.e. conventional cross-break statistical analysis might turn up some interesting results. One of the possibilities being that the 1971 break is not actually a break because it was common to the neighbours.
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-675343
An error in my comments above: 2 yrs either side of a break is a 4 yr overlap, obviously, not 2 yrs.
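As an aside, the scan Brandon describes (“1951 on, 1952 on, 1953 on, etc.”) amounts to something like the toy detector below, with a 10-yr (120-month) minimum segment. This is only a stand-in to make the detection-versus-adjustment distinction concrete; BEST’s actual algorithm is considerably more sophisticated:

```python
import numpy as np

def scan_breakpoint(series, min_seg=120):
    """Scan every candidate breakpoint that leaves at least min_seg
    months (10 yrs) on each side, and return the split with the
    largest jump in segment means."""
    best_idx, best_step = None, 0.0
    for i in range(min_seg, len(series) - min_seg):
        step = np.nanmean(series[i:]) - np.nanmean(series[:i])
        if abs(step) > abs(best_step):
            best_idx, best_step = i, step
    return best_idx, best_step
```

Detection of this kind says nothing about what adjustment, if any, should follow, which is the distinction I keep pressing.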
Me replying to Brandon:
Brandon
>”You said you were checking to see if the breakpoint is valid or not.”
The whole process, break accept/reject and adjustment accept/reject. Both GISS and BEST make a 0.8 adjustment at 1971 on the basis that they have identified a break AND that it requires a 0.8 adjustment.
It’s highly questionable, given the target and comparator data, that a break exists. THEN it’s highly questionable that it requires a massive 0.8 adjustment.
And just because a break is actually detected doesn’t necessarily mean an adjustment MUST be made. BOM in Australia, for example, doesn’t adjust for breaks of less than 0.3. With that in mind, take a look at the GISS post-GHCN adjustments to Gisborne Aero NZ:
At 1963 the cumulative adjustment is 0.7
At 1968 the cumulative adjustment is 0.6
At 1972 the cumulative adjustment is 0.5
At 1975 the cumulative adjustment is 0.4
At 1980 the cumulative adjustment is 0.3
At 1982 the cumulative adjustment is 0.2
At 1986 the cumulative adjustment is 0.1
At 2001 the cumulative adjustment is 0.1
At 2002 the cumulative adjustment is 0.0
There is no valid reason for adjustments of this nature. And there is no resemblance to the BEST adjustments: http://berkeleyearth.lbl.gov/stations/157058
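The cumulative figures above imply individual increments of only 0.1 each (seven of them, with no change across 1986 – 2001). A toy filter makes the point; the 0.3 threshold is BOM’s practice as I understand it, and everything else here is illustrative:

```python
# Derive the individual steps from the cumulative adjustments listed
# above, then apply a BOM-style minimum-adjustment rule of 0.3 degC.
MIN_STEP = 0.3

cumulative = {1963: 0.7, 1968: 0.6, 1972: 0.5, 1975: 0.4,
              1980: 0.3, 1982: 0.2, 1986: 0.1, 2001: 0.1, 2002: 0.0}

years = sorted(cumulative)
steps = {y2: round(cumulative[y1] - cumulative[y2], 1)
         for y1, y2 in zip(years, years[1:])}

surviving = {yr: s for yr, s in steps.items() if abs(s) >= MIN_STEP}
print(steps)      # every individual step is 0.1 or 0.0
print(surviving)  # {} - not one of them passes a 0.3 threshold
```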
More here:
https://www.climateconversation.org.nz/open-threads/climate/climate-science/temperature-records/comment-page-2/#comment-1283206
Karl Becker explained this (possibly) at Paul Homewood’s:
“In simplistic terms where there are no neighbouring stations to compare to, GISS uses the model data as a stand-in for those neighbors. And the model has built-in temperature escalation.”
More here:
https://notalotofpeopleknowthat.wordpress.com/2015/01/20/massive-tampering-with-temperatures-in-south-america/#comment-37521
Clearly, if you look at the data 2 yrs either side of the 1971 “Station Move” (and even 5 yrs) there’s little or no support for a break:
http://berkeleyearth.lbl.gov/stations/157455
And definitely no support for a 0.8 adjustment.
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-675382
[Steven Mosher] – “stations are not compared to their neighbors. They are compared to the field created by C+W”
[Zeke Hausfather] – “These records are combined by aligning the mean values of each subsegment relative to the regional expectation (e.g. based on comparisons to nearby stations) of the station record, as shown in the example above”
Well, which is it?
Or is it time for a team huddle?
http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-675388
Well, I would like to see a sound explanation for this.
I am not sure how closely you have been following the various “Temperature Adjustments” posts, but you may have missed this. It was first posted on WUWT, then pointed to at Real Science, and followed up by Tom Nelson.
http://wattsupwiththat.com/2015/02/09/warming-stays-on-the-great-shelf/#comment-1856325
http://tomnelson.blogspot.co.uk/2015/02/noaa-settled-science-earth-at-5824f-in.html
This is not some “climate science nobody who doesn’t know what he is doing” putting stuff together, as they like to claim; this is NOAA hoist by their own published data, which shows that they have changed the 1997 global temperature (not just the USA) by over 2 degrees C (4 degrees F) in 17 years of adjustments.
How many other summarised years of NOAA analysis are out there on the internet that will continue the exposure of what has to be the worst “science” ever to have come out of NASA?
Mosher’s answer to this is “NOAA do estimates of the average temperature. these are NOT averages.
They are estimates based on subsets of data, using a method.
When the method and data change, the estimate will change.
The past isn’t cooled.”
RIGHT.
A C O, they had to get rid of the MWP and the 1940s, so natch 1997 too.
But it’s getting a bit audacious when the recent past is clipped so much. I’m guessing it won’t be long until 2014 gets the chop too:
http://www.ncdc.noaa.gov/cag/time-series/global/globe/land_ocean/ytd/12/1880-2014
You know that if 2015, 16, 17, 18, 19, 20 come in lower than 2014, then 2014 was far too warm.