No warming for up to 25 years, but now…
The indomitable, indefatigable, never-say-die UK Met Office (under the spell of the IPCC) predicts that warming is set to “continue”, even though there’s been no global warming to speak of for about 25 years. Wonderful. In fact, the entire UAH satellite dataset from December 1978 to November 2015 (37 years) shows global warming at a yawn-inducing rate of just 1.14°C per century, well within natural variability. Stupendous.
In full sycophantic voice, the Guardian trumpets the latest Met Office pronouncement of future warming (repeated uncritically by the Herald this morning) to fortify our delightful delusions of disaster.
Global temperatures will continue to soar over the next 12 months as rising levels of greenhouse gas emissions and El Niño combine to bring more record-breaking warmth to the planet.
The only thing soaring has been an over-active imagination. Surprisingly, the Met Office identifies just two factors driving the earth’s temperature: human emissions of CO2 and El Nino (ENSO). One has so little effect it could be imaginary; the other is powerful, wholly unpredictable and entirely natural. Our best efforts against them would be ruinous, yet would leave both untouched.
Repeated lie
According to the Met Office’s forecast for the next five years, 2016 is likely to be the warmest since records began.
Ideologically driven repetition of the lie that current temperatures are unprecedented does not improve the credibility of the lie. Numerous studies show global temperatures throughout the Holocene to have been higher than those of the present day.
Some global warming deniers have claimed that the current El Niño alone was responsible for making last year a record one, with the effects of carbon emissions being irrelevant. But Smith [Dr Doug Smith, a Met Office expert on long-term forecasting] rejects these claims.
That’s pretty rude. Dr Smith claims people with questions about the Met Office’s explanations are denying global warming. Curiously, I have questions about the next bit, so let’s see how it works. I’m fairly certain my questions neither depend on nor lead to a denial of global warming.
“We have had El Niños before,” he said. “The one in 1997-98 was particularly intense. Nevertheless, global temperatures were less then than they were in 2015 — and that is because background heating caused by increasing carbon dioxide levels in the atmosphere are [sic] higher today than they were [sic] in 1997-98.”
Fair enough. Now here are my questions. How much warming was caused since 1997 (18 years) by human emissions? What was the total increase in global temperatures during that time? I believe the correspondence between atmospheric carbon dioxide and temperature change is nowhere near as certain as Smith makes out.
Watch out for excuses
2017 is likely to see a dip in global temperatures. “We can be pretty sure there will be a drop that year,” added Smith.
After that, temperatures could start to rise again over the rest of the decade. “Whether one of these years — 2018, 2019, 2020 — overtakes 2016 in terms of temperature is very hard to predict at this stage,” said Smith. “We are looking quite far into the future, after all.”
Here come the excuses. He starts using vague fudges such as “likely”, “could” and “hard to predict”. Then an unexpected killer blow for the warmist arguments: “We are looking quite far into the future, after all.”
So he presumes to lecture us for creating dangerous temperatures over the next 85 years, but begs our forgiveness if he gets the temperatures wrong over just five years? I’d say his climate ignorance is exceeded only by his audacity. His long-term predictions must have greater margins of error than the short-term, but no forgiveness will be possible for getting it wrong—the enormous sacrifice he demands from us makes it impossible.
What could he say? “Sorry I ruined everything.”
UNFCOW
One reason for such uncertainties is a lack of precise knowledge about the heating of the oceans. “If you want to measure climate change you need to have precise information about the total energy of the planet and most of that is stored in the ocean,” said Smith.
The global mean surface temperature was the accepted metric of global warming when the UNFCCC was created. That’s how it was measured: atmospheric temperature at an altitude of two metres. Notice that it’s named the UN Framework Convention on Climate Change, not the UN Framework Convention on Ocean Warming (UNFCOW). Only after the surface temperature refused to rise for many years did the warmists shift the emphasis to ocean heat content. That was to save face, whatever they say now, not to describe the science more accurately.
A “lack of precise knowledge” means Dr Smith knows nothing about the ocean heat content. That is because we don’t have many thermometers in the ocean, especially at great depth, below 2000 metres. He’s making this stuff up.
First, the Met Office (following the IPCC) predicts that temperatures by 2100 (pdf, 1.6 MB) could be 6°C higher than 1986–2005, and it doesn’t rule out 7°C or 8°C higher. Of course that’s impossible, yet Smith concedes that their predictions for even the next five years could be wrong (sorry!). Then he finally admits that to “measure” climate change you must know “precisely” the planet’s total energy, most of which is in the ocean, yet he does not know how much that is. Which means his predictions for 2100 are fabricated. Concerning the ocean, he’s hopelessly out of his depth.
But what about the context
“Recently temperature rises on the land slowed and people said global warming had stopped. That was never true. The ocean heat content went up all the time.”
So it did, but he should quantify it. Last year, Christopher Monckton used Argo data to show ocean heat content was indeed rising, but only at about 0.23°C per century. This is being caused by the sun, not by our minor additions to a trace atmospheric gas. Lord Monckton adds the following pointed facts about the Argo project to put the numbers into a realistic context.
Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys takes just three measurements a month in 200,000 cubic kilometres of ocean — roughly a 100,000-square-mile box more than 316 km square and 2 km deep. Plainly, the results on the basis of a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of trying to take a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be a lot better than guesswork.
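For what it’s worth, Monckton’s box arithmetic checks out. A back-of-envelope sketch (the ocean area, profiling depth and float count below are round-number assumptions, not official Argo specifications):

```python
# Back-of-envelope check of the per-buoy sampling volume quoted above.
# All three constants are rough assumptions, not official Argo figures.
OCEAN_AREA_KM2 = 361e6    # approximate global ocean surface area
PROFILE_DEPTH_KM = 2.0    # Argo floats profile roughly the top 2000 m
N_BUOYS = 3600            # approximate size of the Argo array

sampled_volume_km3 = OCEAN_AREA_KM2 * PROFILE_DEPTH_KM  # ~722 million km^3
volume_per_buoy = sampled_volume_km3 / N_BUOYS          # ~200,000 km^3
box_side_km = (OCEAN_AREA_KM2 / N_BUOYS) ** 0.5         # ~317 km

print(f"Volume per buoy: {volume_per_buoy:,.0f} km^3")
print(f"Equivalent box: {box_side_km:.0f} km x {box_side_km:.0f} km x {PROFILE_DEPTH_KM:.0f} km deep")
```

It prints roughly 200,000 km^3 per buoy and a box a bit over 316 km on a side, matching the figures in the quote.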
This gives a good picture of the enormous scarcity of temperature readings in the ocean, but warmists place great faith in data they agree with. Finally, the Guardian tries for a knockout punch to have us all believing in the coming cataclysm.
Research catching up with ideology
The release of the Met Office study comes as another group of scientists revealed research that shows the last 30 years were probably the warmest Europe has experienced in more than two millennia. An international team used tree ring records and historical documents to reconstruct yearly temperatures going back 2,100 years and discovered there was no period as warm as the last 30 years.
It’s amazing how research is catching up with ideology. Several recent studies have removed the Medieval warm period, the Roman warm period and the Minoan warm period—although some evidence shows almost the entire Holocene (pdf, 1.3 MB) was warmer than today. Yet the IPCC, in its first report in 1990, showed them clearly (https://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_full_report.pdf, p.202). Hard to believe the data in papers from around the world supporting the temperatures in those periods were all incorrect.
It’s more likely the IPCC looked elsewhere for different data, since higher temperatures only 700 years ago make the claim of unprecedented temperatures seem false; if temperatures are not unprecedented they’re highly unlikely to be dangerous; and no doubt the IPCC tries very hard not to mislead.
Ah yes, the serially bogus “decadal” forecast issued by the Met Office each year. It’s now only for 5 years at a time. They have to issue a new one each year because the previous forecast is always ridiculously high and at a trajectory far greater than the GCMs.
They’ve excelled themselves this year and should be good for chuckles in a year’s time. They’ve tacked their forecast onto the peak of the 2015 El Nino warming:
Figure 3: Observed (black, from Met Office Hadley Centre, GISS and NCDC) and predicted (blue) global average annual surface temperature difference relative to 1981-2010. Previous predictions starting from November 1960, 1965, …, 2005 are shown in red, and 22 model simulations, from the Coupled Model Intercomparison Project phase 5 (CMIP5), that have not been initialised with observations are shown in green. In all cases, the shading represents the probable range, such that the observations are expected to lie within the shading 90% of the time. The most recent forecast (blue) starts from November 2015. All data are rolling 12-month mean values. The gap between the black curves and blue shading arises because the last observed value represents the period November 2014 to October 2015 whereas the first forecast period is November 2015 to October 2016.
http://www.metoffice.gov.uk/media/image/q/o/fig3_dp2015_fcst_global_t.png
From ‘Decadal forecast’
http://www.metoffice.gov.uk/media/image/q/o/fig3_dp2015_fcst_global_t.png
The previous ANNUAL 5-yr predictions are NOT all shown. Where is last year’s prediction? And the year before that?
Notice too that the observations are NOT just UKMO but a UKMO+GISS+NCDC composite, and only to October 2015. Why don’t they use their own data? Here is HadCRUT4 to the end of December 2015:
HadCRUT4
http://www.metoffice.gov.uk/media/image/d/2/hadcrut4_graph_small.jpg
The anomaly baselines differ, but this indicates that the beginning of 2016 sits just on the 0.5 anomaly in the composite prediction graph, i.e. just below the top bound of the blue uncertainty. That corresponds to the bottom bound of the blue uncertainty at the end of the prediction, October 2020.
The chances of 2020 temperature being greater than or equal to the peak 2015 El Nino level are hopelessly remote. In the intervening time the conditions will first of all return to ENSO-neutral, as will temperatures. This corresponds to about the 0.5 anomaly in HadCRUT4 and about 0.25 in the composite prediction graph.
Then, as generally (though not necessarily) happens, a La Nina will probably follow, with a temperature response that could be around 0.4 on HadCRUT4 and about 0.15 on the composite graph by the end of, say, 2017. That would be at or below the lower bound of the GREEN GCM range, let alone the blue 5-yr prediction range.
The area of the earth’s surface producing the 2015 spike was only in the Northern Hemisphere and confined to the Pacific, Asia, and Eastern Russia. Now, in early 2016, parts of Asia are freezing at record levels, i.e. global temperature will be at the bottom of the blue prediction band in very short time, probably in the first few months of 2016.
This has to be the Met Office’s stupidest 5-yr “decadal” prediction so far.
>”They have to issue a new one each year because the previous forecast is always ridiculously high and at a trajectory far greater than the GCMs.”
Note how the trajectory of the blue prediction has far greater slope than the green GCM trajectory:
http://www.metoffice.gov.uk/media/image/q/o/fig3_dp2015_fcst_global_t.png
How so? And why?
[Smith] >2017 is likely to see a dip in global temperatures. “We can be pretty sure there will be a drop that year,” added Smith. After that, temperatures could start to rise again over the rest of the decade. “Whether one of these years — 2018, 2019, 2020 — overtakes 2016 in terms of temperature is very hard to predict at this stage,” said Smith.
This contradicts their prediction graph. The central estimate for 2017 is at the same level as 2016 would be if they had plotted it. Then, maybe not for 2018, but the LOWER bounds for 2019 and 2020 are BOTH higher than the 2016 level.
Didn’t Smith look at his office graph before he opened his mouth?
And yes, this in respect of 5 years ahead is absurd: “We are looking quite far into the future, after all”. Is Smith unaware that the green GCM “projections” do not stop at 2020?
>”Dr Doug Smith, a Met Office expert on long-term forecasting”
Short-term not so expert:
UKMO Lowers 5-Year Global Temperature Forecast and Omits the Second 5 Years of the Decadal Forecast
Posted on January 6, 2013
https://bobtisdale.wordpress.com/2013/01/06/ukmo-lowers-5-year-global-temperature-forecast-and-omits-the-second-5-years-of-the-decadal-forecast/
Ooops – Met Office decadal model forecast for 2004-2014 falls flat [Smith et al (2007)]
Anthony Watts / November 21, 2013
http://wattsupwiththat.com/2013/11/21/ooops-met-office-decadal-model-forecast-for-2004-2014-falls-flat/
Laughing Stock Met Office…2007 “Peer-Reviewed” Global Temperature Forecast A Staggering Failure
By P Gosselin on 24. June 2014
http://notrickszone.com/2014/06/24/laughing-stock-met-office-2007-peer-reviewed-global-temperature-forecast-a-staggering-failure/#sthash.KW9tbjxB.dpbs
Then they got lucky with the El Nino:
Met Office New Decadal Forecast
January 30, 2014
https://notalotofpeopleknowthat.wordpress.com/2014/01/30/met-office-new-decadal-forecast/
Except the blue band is actually the upper and lower limits of a bunch of arbitrary squiggles spat out by the model:
MET- Office: New four year ‘decadal’ forecast spaghetti
January 30, 2014 by tallbloke
https://tallbloke.wordpress.com/2014/01/30/met-office-new-four-year-decadal-forecast-spaghetti/
>”UKMO Lowers 5-Year Global Temperature Forecast and Omits the Second 5 Years of the Decadal Forecast Posted on January 6, 2013″
That was when the Met Office surreptitiously published their radically lowered forecast on Christmas Eve 2012 when people were busy doing other stuff. A week into the new year and they were sprung.
1.14°C per century is hardly yawn-inducing. 2°C since pre-industrial times is considered by many to be the maximum ‘safe’ warming that can occur and we are already over 1°C.
In answer to your question, 2015 remains the warmest year once natural (ENSO + volcanic + solar) variations are corrected for:
https://tamino.wordpress.com/2016/01/29/correcting-for-more-than-just-el-nino/
You seem to be accusing the Met Office of lying without saying what was factually incorrect. Discerning a trend is different from making an exact prediction about a single instant in time. Monckton’s point is trite; 3600 data points are sufficient, and what is lacking is the long time series for comparison.
In other news, a fire started by lightning is “climate change”. From SMH:
‘Fire burns Tasmanian Wilderness World Heritage Area’
Professor Bowman said the latest fire was different.
When bushfires hit the Blue Mountains in October 2013 he was wary of linking them to man-made climate change due to the large historic variability in Australian fire seasons, but understanding of fire ecology had developed rapidly, and scientists were now more confident global warming was increasing fire risk.
“This is completely consistent with predictions. It was lit by lightning and it is incredibly dry and warm in western Tasmania by historic standards,” he said.
“I think I would be being unethical and unprofessional if I didn’t form the diagnosis and say what it is – climate change. Under the current rate of warming I think this ecosystem will be gone in 50 years.”
http://www.smh.com.au/environment/like-losing-the-thylacine-fire-burns-tasmanian-wilderness-world-
heritage-area-20160131-gmi2re.html
# # #
Much has changed since 2013 apparently.
Nuts, broken link. Try this:
http://www.smh.com.au/environment/like-losing-the-thylacine-fire-burns-tasmanian-wilderness-world-heritage-area-20160131-gmi2re.html
Simon >”2015 remains the warmest year…..”
But where? Gavin Schmidt and Tom Karl had some trouble with this:
‘Annual Global Analysis for 2015’ – January 2016
Gavin A. Schmidt
Director, NASA’s Goddard Institute for Space Studies
Thomas R. Karl
Director, NOAA’s National Centers for Environmental Information
http://www.nasa.gov/sites/default/files/atoms/files/noaa_nasa_global_analysis_2015.pdf
Except, scroll down to page 3:
USA (CONUS) 2nd warmest year
Africa, Europe 2nd warmest year
N. America 5th warmest year
Austria, France, Germany, Netherlands among five warmest years
Oceania 6th warmest year
Australia 5th warmest year
Argentina 2nd warmest year
Asia, S. America warmest year [Except Argentina above]
Spain, Finland warmest year [Except Europe as a whole was only 2nd warmest above]
Then scroll down to page 10:
•Middle Troposphere (37 yr record)
–UAH: 3rd warmest
–UW-UAH: 3rd warmest
–RSS: 4th warmest
–UW-RSS: 3rd warmest
–NESDIS STAR: 5th warmest
•Lower Troposphere (37 yr record)
–UAH: 3rd warmest
–RSS: 3rd warmest
•Radiosonde data (58 yr record)
–~5,000 ft(850 mb): 2nd warmest
–~10,000 ft(700 mb): 3rd warmest
–~18,000 ft(500 mb): warmest
–~30,000 ft(300 mb): 2nd warmest
–~40,000 ft(200 mb): 14th warmest
# # #
NZ was 27th warmest since 1909 according to NIWA. That leaves Asia (and eastern Russia, missed by Schmidt and Karl, and the NH Pacific). But now Asia is freezing, with places like Vietnam and tropical Taiwan experiencing record low temperatures.
So not only was the 2015 warmest-ever record NOT global, but it is now totally irrelevant in the regional area that produced it. Asia would gladly have last year’s temperatures right now.
Grant Foster’s HadCRUT4 El Nino “correction” is bogus:
https://tamino.files.wordpress.com/2016/01/cru.jpeg
The El Nino spike only occurred in CRUTEM4 in November and December of 2015:
http://woodfortrees.org/plot/crutem4vgl/from:1997
And yet Foster retains the 2-month spike as representative of the entire year – what a phony.
And according to GISS the spike was a Northern Hemisphere-only phenomenon anyway:
Annual Mean Temperature Change for Three Latitude Bands [GISTEMP]
http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.B.gif
In the Southern Latitudes the anomaly was actually DOWN on recent years i.e. there was no “record-shattering global warm temperatures in 2015” (NASA GISS) in the SH Extratropics (Southern Latitudes).
Simon >”Discerning a trend is different from making an exact prediction about a single instance of time”
The Met Office do neither ever since their disastrous 2007 effort (Smith et al upthread). Their “forecast” is just spaghetti i.e. a bunch of random squiggles. This was their 2014 graph:
https://tallbloke.files.wordpress.com/2014/01/meto2014-18.png
Obviously that’s not a good look, so now they just blob the lot – no trend or points. A kid with a crayon could do that.
Wrong link to the Met Office ‘Decadal Forecast’ in the first comment. Should be:
From ‘Decadal forecast’
http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/decadal-fc
Includes caveats:
Just a normal La Niña would return the climate to a flat trajectory within the next two years. A “very sudden return to La Niña” would kill the negligible 15 yr warming trend stone dead, i.e. produce cooling instead.
>”ten year global average warming rates are likely to return to late 20th century levels within the next two years”
This is just wishful thinking. Thing is, if the current flat ENSO-neutral trajectory continues to 2020, the entire man-made climate change conjecture would be killed stone dead.
>”the recent slowdown in surface warming is still an active research topic and trends over a longer (15 year) period will take longer to respond” [Met Office]
>”Just a normal La Niña would return the climate to a flat trajectory within the next two years”
>”Thing is, if the current flat ENSO-neutral trajectory continues to 2020 the entire man-made climate change conjecture would be killed stone dead.”
The trend in the HadCRUT4 5 yr mean of 15 years of data (i.e. smoothing ENSO spikes either way) is already flat from 2001 even BEFORE accounting for the return to ENSO-neutral conditions and a possible La Nina:
http://woodfortrees.org/plot/hadcrut4gl/from:2001/to:2016.00/mean:60/plot/hadcrut4gl/from:2001/to:2016.00/mean:60/trend
-0.000773352 per year
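For anyone who wants to reproduce that number off-line rather than trusting WoodForTrees, a minimal sketch of the same calculation, assuming a hypothetical two-column text file of monthly HadCRUT4 anomalies (decimal year, anomaly):

```python
import numpy as np

# Monthly HadCRUT4 anomalies: column 0 = decimal year, column 1 = anomaly (deg C).
# "hadcrut4_monthly.txt" is a hypothetical local file, not an official product.
t, anom = np.loadtxt("hadcrut4_monthly.txt", usecols=(0, 1), unpack=True)

# Restrict to 2001 onward and apply a 60-month running mean, mirroring the
# WoodForTrees "mean:60" smoothing used in the link above.
mask = (t >= 2001.0) & (t < 2016.0)
t, anom = t[mask], anom[mask]
smooth = np.convolve(anom, np.ones(60) / 60, mode="valid")
t_mid = t[30:30 + len(smooth)]  # approximate centring of each 60-month window

# Least-squares trend of the smoothed series, as WoodForTrees reports it.
slope = np.polyfit(t_mid, smooth, 1)[0]
print(f"Trend: {slope:+.9f} per year ({slope * 10:+.5f} per decade)")
```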
This is highly problematic for the warmies; they’re in for a rude shock over the next 2 years.
>”A ‘very sudden return to La Niña’ would kill the negligible 15 yr warming trend stone dead, i.e. produce cooling instead”
Wood For Trees does not have up-to-date HadCRUT4 (including the 2015 El Nino spike). Using the Met Office’s 10 yr criterion (see below), the unsmoothed CRUTEM4 trend is +0.0153066 per year (+0.153 per decade) with 2015 El Nino data included (the 15 yr trend is +0.127 per decade):
http://woodfortrees.org/plot/crutem4vgl/from:2006/plot/crutem4vgl/from:2006/trend
The Met Office says ”ten year global average warming rates are likely to return to late 20th century levels within the next two years”
1970 to 2000 is +0.26 per decade in CRUTEM4. By 2017, with ENSO-neutral conditions or a La Nina, how is a +0.15 ten year rate (or a +0.127 per decade 15 yr rate) going to increase to +0.26 per decade in just 2 yrs?
They seem to be anticipating another strong El Nino back-to-back with 2015/6. News to me.
[RT] >”Now here are my questions.
[1] How much warming was caused since 1997 (18 years) by human emissions?
[2] What was the total increase in global temperatures during that time?”
[Simon] >”In answer to your question, 2015 remains the warmest year…”
Simon. You haven’t answered either of RT’s questions. You’ve answered your strawman’s question.
The Met Office speaks with forked tongue:
a) ”ten year global average warming rates are likely to return to late 20th century levels within the next two years”
b) ”trends over a longer (15 year) period will take longer to respond”
If they predict that the ten year trend will change radically in only 2 years (a), the 15 year trend would change considerably in the same 2 years too (b). One does not change independently of the other over different timeframes unless the radical change occurs in the 5 additional years of (b) PRIOR to the ten year period (a). But (b) just adds 5 years to the front of (a), so any radical change in (a) is of necessity simultaneous in (b).
And again, wishful thinking in (a).
>”If they predict that the ten year trend will change radically in only 2 years (a), the 15 year trend would change considerably in the same 2 years too (b).”
There’s very little difference between 10 yr and 15 yr in CRUTEM4 as upthread:
+0.153 per decade – last 10 yr.
+0.127 per decade – last 15 yr.
+0.26 per decade – 1970 to 2000
They are seriously deluded if they think +0.153 will increase to +0.26 in only 2 years, contrary to ENSO, without considering how +0.127 would have to “respond” if it did.
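The arithmetic of trailing windows makes the point concrete. A toy sketch with entirely synthetic numbers (an assumed flat series plus a hypothetical two-year warming burst), showing that the 10 yr and 15 yr trends necessarily move together:

```python
import numpy as np

# Toy illustration: a series flat at 0.5 deg C for 15 years, then two
# hypothetical years warming at an assumed +0.3 deg C/yr appended.
years = np.arange(204) / 12.0          # 17 years of monthly time steps
anom = np.full(204, 0.5)
anom[180:] += 0.3 * (years[180:] - 15.0)

def trailing_trend(window_years):
    """Least-squares trend (deg C/decade) over the last `window_years` years."""
    n = window_years * 12
    return np.polyfit(years[-n:], anom[-n:], 1)[0] * 10

for w in (10, 15):
    print(f"{w}-yr trailing trend: {trailing_trend(w):+.3f} deg C/decade")
```

Both windows contain the same appended 24 months, so both trends rise at once; the 15 yr figure is simply the damped version of the 10 yr one.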
Simon says
“1.14°C per century is hardly yawn-inducing. 2°C since pre-industrial times is considered by many to be the maximum ‘safe’ warming that can occur and we are already over 1°C.”
Are we over 1°C? I thought the warming since pre-industrial times was around 0.8°C.
A higher rate of change over a shorter time frame is still consistent with that overall figure.
IRI/CPC Mid-Month Plume-Based ENSO Forecast Probabilities
Season La Niña Neutral El Niño
JFM 2016 ~0% ~0% 100%
FMA 2016 ~0% ~0% 100%
MAM 2016 ~0% 1% 99%
AMJ 2016 ~0% 32% 68%
MJJ 2016 10% 62% 28%
JJA 2016 32% 54% 14%
JAS 2016 47% 44% 9%
ASO 2016 53% 37% 10%
SON 2016 58% 32% 10%
http://iri.columbia.edu/our-expertise/climate/forecasts/enso/current/
58% chance of a La Niña by around October this year, yet in the face of this contra-indication the Met Office is implying, inexplicably, that the ten year trend in the global temperature metric (say CRUTEM4, upthread) will increase by a factor of 1.7 in the next 2 years, and that the 15 year trend will double thereafter.
I’m seeing a gaping credibility gap in the Met Office prediction.
The last La Nina was Weak-to-Moderate in 2011-12, following a Moderate El Nino. There was a Moderate La Nina in 2007-08 following a Weak El Nino, and a Moderate La Nina in 1999-00 following a Very Strong El Nino. The last Strong La Nina was 1988-89, following a Moderate El Nino. There has never been a Very Strong La Nina that I know of – yet. The current El Nino is Very Strong.
A Very Strong La Nina easily has the potential to wipe out the last 15 yr +0.127 per decade trend (which has still to increase a little before neutral conditions return), and a Strong to decrease it somewhat.
Even a Weak or Moderate La Nina will in no way increase the 15 yr trend; it might sustain it, but it certainly won’t double it.
>”Notice too that the observations are NOT just UKMO but a UKMO+GISS+NCDC composite, and only to October 2015 i.e. why don’t they [UKMO] use their own data?”
The following graph of 5 yr means illustrates why HadCRUT4 should NOT be combined with GISTEMP and NCDC as the Met Office has done:
http://woodfortrees.org/plot/gistemp/from:2001/mean:60/plot/gistemp/from:2001/mean:60/trend/plot/hadcrut4gl/from:2001/mean:60/plot/hadcrut4gl/from:2001/mean:60/trend
GISTEMP Least squares trend line; slope = 0.00560873 per year 2003.5 to 2013.5
HadCRUT4 Least squares trend line; slope = -0.000773352 per year 2003.5 to 2012.92
WoodForTrees HadCRUT4 data is not up to date, but it is obvious that these are radically different profiles (the different baselines are irrelevant here), and GISTEMP+NCDC will have the overwhelming influence in a composite profile.
This is basically the data without ENSO noise.
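The arithmetic behind that “overwhelming influence” claim is simple, because a least-squares trend is linear in the data. A sketch using the two WoodForTrees slopes quoted above (the NCDC slope is an assumption for illustration only):

```python
# The least-squares slope is linear in the data, so the trend of an unweighted
# composite is just the mean of the component trends. Slopes per year are the
# WoodForTrees fits quoted above; NCDC is assumed to track GISTEMP closely.
gistemp = 0.00560873
hadcrut4 = -0.000773352
ncdc = gistemp  # assumption for illustration only

composite = (gistemp + hadcrut4 + ncdc) / 3.0
print(f"Composite slope: {composite:+.6f} per year")  # ~+0.0035, dominated by
print(f"HadCRUT4 alone:  {hadcrut4:+.6f} per year")   # the two US datasets
```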
Re decadal forecasts in particular:
‘Initialization practices disqualify UN IPCC global circulation models from use for most climate change forecast purposes’ [read “re” initialization]
Guest essay by Michael G. Wallace Hydroclimatologist, Albuquerque, NM
[…]
A recent paper by Suckling and Smith (2013) covers some aspects of this new purposing and initializing of the GCMs. Notably, the article points to the practice of re-initialization of the GCMs’ boundary conditions with recent observations. The authors state in simple language that
[…]
The modeled Pacific Ocean surface signature patterns of temperature in both hemispheres, including the equatorial zones, decay rapidly and are poorly represented by the GCMs after only a year. Re-initialization appears to apply to those temperatures, if I’m not mistaken. Only then can the models lurch forward again for a few months of potentially plausible forecasting.
[…]
Now that the process of CMIP validation via initializations has been institutionalized, climate change scientists apparently believe that there is no further need for actual experiments, calibration reports, or any other customary transparency. Rather, so long as the money flows, the latest observations will continue to be fed into the CMIP machine at one end. The other end will continue to emit long term climate forecasts for all locations, modalities and scales. The deployment of these inaccurate results will continue then to grow in cost and scope around the world.
http://wattsupwiththat.com/2016/02/03/initialization-practices-disqualify-un-ipcc-global-circulation-models-from-use-for-most-climate-change-forecast-purposes/
# # #
>”The deployment of these inaccurate results will continue then to grow in cost and scope around the world.”
Not at the CSIRO it won’t:
Worked themselves out of a job.
Chortles at JoNova:
‘CSIRO wipes out climate division — 350 scientists to go — since it’s “beyond debate” who needs em?’
http://joannenova.com.au/2016/02/csiro-wipes-out-climate-division-350-scientists-to-go-since-its-beyond-debate-who-needs-em/#more-47613
#4 Graeme No.3 February 4, 2016 at 4:06 pm
“It seems this is something they didn’t forecast either.” [55 “Likes”]
# # #
Apparently it’s OK for engineers (some of whose work is modeling) and construction workers to have jobs that end when a project is finished, but climate modelers? No, theirs is a continual work-in-progress – there’s CMIP6 to be cranked out, then CMIP7, after that CMIP8…
The Department of Settled Science was never a sustainable academic model
From JoNova post:
Prof Will Steffen suddenly admits “we” don’t know the basic operation of the climate system:
Professor Will Steffen is an Emeritus Professor at ANU and a Climate Councillor at the Climate Council of Australia.
# # #
Don’t know the basics yet? So what exactly are we “deniers” denying, then?
We don’t know the basics but we are 100% certain that we need to “take action”, is the executive summary
‘Warmest January in satellite record leads off 2016’ [UAH]
Compared to seasonal norms, the warmest average temperature anomaly on Earth in January was over north central Russia, near the small town of Volochanka. January temperatures there averaged 7.20 C (almost 13 degrees F) warmer than seasonal norms. Compared to seasonal norms, the coolest average temperature on Earth in January was over the northern Pacific Ocean, where the average January 2016 temperature was 2.78 C (just over 5 degrees F) cooler than normal.
Warmest Januaries
2016 0.54 C
1998 0.49 C
Warmest NH Januaries
2016 0.70 C
2010 0.55 C
Warmest SH Januaries
1998 0.58 C
2013 0.51 C
2010 0.41 C
2016 0.39 C
Warmest Januaries in the tropics
1998 1.13 C
2016 0.85 C
http://wattsupwiththat.com/2016/02/04/warmest-january-in-satellite-record-leads-off-2016/
# # #
Still a Northern Hemisphere phenomenon, warmest in north central Russia. But coolest over the northern Pacific Ocean?
UK Met Office Research News
Earth’s Energy Imbalance: a fundamental measure of global climate change
January 2016 – International climate research scientists are calling for better monitoring of Earth’s energy imbalance, with improved ocean observations to play a key role
How do we measure the rate of global warming? Traditionally, we tend to think of Earth’s global surface temperature as the iconic indicator of climate change. However, in a new perspective piece in Nature Climate Change, scientists argue that there is a more fundamental measure of climate variability and the rate of global change: Earth’s energy imbalance.
As the article explains, all the energy that enters or leaves the climate system does so radiatively at the top of Earth’s atmosphere (Figure 1). Under an equilibrium climate, the solar radiation absorbed by the Earth is balanced by emitted longwave radiation. Increased greenhouse gases reduce the emitted longwave radiation and give rise to Earth’s energy imbalance, leading to excess solar energy accumulating in the climate system. This is the most fundamental driver of observed climate change, and the various climate impacts that we are familiar with – such as warmer surface temperatures, sea level rise and loss of land-based ice – are all symptoms of the energy imbalance.
[…]
Lead author of the paper and co-chair of CONCEPT-HEAT, Karina von Schuckmann, summarised the challenges ahead:
“Advancing our capability to monitor the Earth’s Energy Imbalance means increasing our knowledge on the status of global climate change – and the global ocean plays a crucial role. A concerted multi-disciplinary and international effort is needed to improve our ability to monitor this fundamental metric defining global warming.”
Kevin Trenberth, co-author and co-chair of CONCEPT-HEAT adds:
“Earth’s energy imbalance is a measure of how much climate change is underway and it is vital information for future projections of climate. Yet our ability to track the imbalance and determine its absolute value and changes over time is fraught with difficulties, and different approaches are not yet fully reconciled. We need a better observing and processing system, and the synthesis of the multiple techniques to obtain a much better climate information system.”
http://www.metoffice.gov.uk/research/news/2016/earths-energy-imbalance
An imperative to monitor Earth’s energy imbalance
K. von Schuckmann, M. D. Palmer, K. E. Trenberth, A. Cazenave, D. Chambers, N. Champollion, J. Hansen, S. A. Josey, N. Loeb, P.-P. Mathieu, B. Meyssignac & M. Wild
Nature Climate Change 6, 138–144 (2016) doi:10.1038/nclimate2876
Received 24 June 2015 Accepted 22 October 2015 Published online 27 January 2016
http://www.nature.com/nclimate/journal/v6/n2/full/nclimate2876.html [Paywall]
# # #
[Trenberth] >”Yet our ability to track the imbalance and determine its absolute value and changes over time is fraught with difficulties, and different approaches are not yet fully reconciled.”
Something else that is not yet fully reconciled is the TOA imbalance from man-made climate change theory (2.4+ W/m2) versus the TOA imbalance from observations (0.6 W/m2).
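For a sense of the magnitudes involved, here is a zero-dimensional energy-balance sketch with textbook round numbers (my own illustration, not the Met Office’s or the paper’s calculation):

```python
# Zero-dimensional radiative balance: absorbed solar vs emitted longwave.
SOLAR = 1361.0    # W/m^2, total solar irradiance (approximate)
ALBEDO = 0.30     # planetary albedo (approximate)
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = SOLAR * (1 - ALBEDO) / 4.0  # ~238 W/m^2, averaged over the sphere
t_eff = (absorbed / SIGMA) ** 0.25     # effective emission temperature, ~255 K
print(f"Absorbed solar: {absorbed:.1f} W/m^2; emission temperature: {t_eff:.1f} K")

# The claimed imbalance is a small residual between these large fluxes:
for label, imbalance in [("theory (forcing-based)", 2.4), ("observed", 0.6)]:
    print(f"TOA imbalance, {label}: {imbalance:.1f} W/m^2 "
          f"({100 * imbalance / absorbed:.1f}% of absorbed solar)")
```

Either imbalance figure is only a percent or so of the gross fluxes, which is why measuring it directly is so hard.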
>”How do we measure the rate of global warming? Traditionally, we tend to think of Earth’s global surface temperature as the iconic indicator of climate change. However, in a new perspective piece in Nature Climate Change, scientists argue that there is a more fundamental measure of climate variability and the rate of global change: Earth’s energy imbalance.”
A new perspective?
This is BS. The IPCC FAQs are clear that earth’s energy imbalance is the primary climate change criterion and metric. There is nothing “new” about this:
FAQ 2.1, Box 1: What is Radiative Forcing?
[A] – “The word radiative arises because these factors change the balance between incoming solar radiation and outgoing infrared radiation within the Earth’s atmosphere. This radiative balance [‘measured at the top of the atmosphere’] controls the Earth’s surface temperature”
And,
[B] – “When radiative forcing [‘measured at the top of the atmosphere’] from a factor or group of factors is evaluated as positive, the energy of the Earth-atmosphere system will ultimately increase, leading to a warming of the system. In contrast, for a negative radiative forcing, the energy will ultimately decrease, leading to a cooling of the system”
https://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-2-1.html
“Dr Church said it was true climate change was proven, but more detail was needed if the world was going to adapt”
http://www.smh.com.au/environment/misleading-inaccurate-and-in-breach-of-paris-john-church-criticises-csiro-cuts-20160205-gmmopl.html
in response to the CSIRO cuts.
How does one “prove” climate change?
Given that “climate change” isn’t even defined as a consistent concept, it would appear to be a little hard to “prove” it.
Anyway, proof is something generally reserved for mathematics. Science does not deal in proofs so much as in falsifiable conjectures.
GISTEMP monthly mean has gone over the peak in January 2016:
http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.C.gif
NH phenomenon anyway but downhill from here on for a couple of years in a normal rational world.
In Warmer World, Gavin Schmidt claimed all but 0.07°C of the 2015 El Nino spike for AGW. The monthly mean is already down about that much in January 2016. If Schmidt is right (doubtful), the monthly mean should not fall any further. We’ll know in a month’s time.
>”In Warmer World, Gavin Schmidt claimed all but 0.07C of the 2015 El Nino spike for AGW.”
Stefan Rahmstorf in the SMH:
Also an interesting take on what an El Nino is from the SMH (speculation unsupported by any evidence in the IPCC assessments)
http://www.smh.com.au/environment/climate-change/global-temperatures-leap-higher-in-january-smashing-records-20160215-gmuv8f.html
# # #
I await the next Climate Clown installment with eager anticipation.
Rahmstorf claims “more than 80 per cent” of the El Nino spike for AGW. Then 5 paragraphs later he says:
“As the El Nino event winds down over the coming months we can expect somewhat lower global temperatures again for a while, but the global warming trend will continue until we phase out fossil fuels,”
Is he saying to expect even lower than the 80% level? If so, he implies AGW cannot sustain the level he claimed previously. Why not?
El Nino temperature spikes return to neutral after ENSO-neutral conditions return. That means the entire spike is wiped out. Then if a La Nina follows, which is a strong possibility, temperatures go below neutral.
What then for AGW?
Too much fun.
STATISTICAL FORECASTING: How fast will future warming be?
Terence C. Mills. © Copyright 2016 The Global Warming Policy Foundation
Summary
The analysis and interpretation of temperature data is clearly of central importance to debates about anthropogenic global warming (AGW). Climatologists currently rely on large-scale general circulation models to project temperature trends over the coming years and decades. Economists used to rely on large-scale macroeconomic models for forecasting, but in the 1970s an increasing divergence between models and reality led practitioners to move away from such macro modelling in favour of relatively simple statistical time-series forecasting tools, which were proving to be more accurate.

In a possible parallel, recent years have seen growing interest in the application of statistical and econometric methods to climatology. This report provides an explanation of the fundamental building blocks of so-called ‘ARIMA’ models, which are widely used for forecasting economic and financial time series. It then shows how they, and various extensions, can be applied to climatological data. An emphasis throughout is that many different forms of a model might be fitted to the same data set, with each one implying different forecasts or uncertainty levels, so readers should understand the intuition behind the modelling methods. Model selection by the researcher needs to be based on objective grounds.

ARIMA models are fitted to three representative data sets: the HADCRUT4 global surface series, the RSS global lower troposphere series and the Central England Temperature (CET) series. A clear finding presents itself for the two global temperature series. Irrespective of the model fitted, forecasts do not contain any trend, with long-horizon forecasts being flat, albeit with rather large measures of imprecision even from models in which uncertainty is bounded. This is a consequence of two interacting features of the fitted models: the inability to isolate a significant drift or trend parameter and the large amount of overall noise in the observations themselves compared to the fitted ‘signals’. The CET exhibits season-specific trends, with evidence of long-term warming in the winter months but not in the summer.
[…]
Figure 5: HADCRUT4 and forecasts from fitted ARIMA(0,1,3) model. Monthly data, January 2011–December 2014, with forecasts out to December 2020 accompanied by 95% forecast intervals.
Figure 6: HADCRUT4 and forecasts from fitted segmented trend model. Monthly data, January 2011–December 2014, with forecasts out to December 2020 accompanied by 95% forecast intervals.
Figure 7: RSS and forecasts from fitted ARIMA(0,1,1) model. Monthly data, January 2011–December 2014, with forecasts out to December 2020 accompanied by 95% forecast intervals.
Figure 8: RSS and forecasts from fitted segmented trend model. Monthly data, January 2011–December 2014, with forecasts out to December 2020 accompanied by 95% forecast intervals.
Figure 9: CET and forecasts. Monthly data, January 2011–December 2014. Forecasts per ‘multiplicative ARIMA plus deterministic seasonal trends’ model out to December 2020, accompanied by 95% forecast intervals.
http://www.thegwpf.org/content/uploads/2016/02/Forecasting-3.pdf
# # #
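For readers curious what fitting such a model actually involves, a minimal sketch using the statsmodels library (the data file is a hypothetical stand-in; this is not Mills’s actual code):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Monthly global anomalies; "hadcrut4_monthly.txt" is a hypothetical local
# file (decimal year, anomaly), not something supplied with the report.
t, y = np.loadtxt("hadcrut4_monthly.txt", usecols=(0, 1), unpack=True)

# Fit the ARIMA(0,1,3) form Mills reports for HadCRUT4: first-difference the
# series, then model the differences as a 3-term moving average.
result = ARIMA(y, order=(0, 1, 3)).fit()

# Forecast five years ahead with 95% intervals. With no significant drift
# term, the long-horizon forecast of an ARIMA(0,1,q) model is flat, which is
# the report's central finding for the global series.
forecast = result.get_forecast(steps=60)
mean = forecast.predicted_mean
lo, hi = forecast.conf_int(alpha=0.05).T
print(f"5-yr-ahead forecast: {mean[-1]:+.3f} ({lo[-1]:+.3f} to {hi[-1]:+.3f})")
```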
Richard Betts doesn’t like this one little bit:
Terence C. Mills just elevated himself to warmist enemy #1.
‘Planet overheating? Not according to historical records’
Written by Ben Webster, The Australian on 23 February 2016.
The global average temperature is likely to remain unchanged by the end of the century, contrary to predictions by climate scientists that it could rise by more than 4C, according to a leading statistician.
British winters will be slightly warmer but there will be no change in summer, Terence Mills, Professor of Applied Statistics at Loughborough University, said in a paper published by the Global Warming Policy Foundation.
He found that the average temperature had fluctuated over the past 160 years, with long periods of cooling after decades of warming. Dr Mills said scientists who argued that global warming was an acute risk to the planet tended to focus on the period from 1975-98, when the temperature rose by about 0.5C.
He used simple statistical methods, normally used to predict economic trends, to forecast future temperatures. He took into account all the fluctuations in the temperature since 1850 and found no evidence to support the increase predicted by the Intergovernmental Panel on Climate Change (IPCC), a UN scientific body.
He found the average winter temperature in central England, which has the world’s longest temperature records going back to 1659, had increased by about 1C over 350 years. Based on that change, he forecast an additional increase of about 0.25C by 2100. He said the average temperature would continue to be “buffeted about by big shocks” caused by natural events, such as the El Nino weather phenomenon.
He said that his analysis, unlike computer models used by the IPCC to forecast climate change, did not include assumptions about the rate of warming caused by rising emissions.
“It’s extremely difficult to isolate a relationship between temperatures and carbon dioxide emissions,” he said.
The Global Warming Policy Foundation is a think tank founded by Lord Lawson of Blaby, a former chancellor, to challenge mainstream climate change theory. The foundation paid Professor Mills £3,000 to write the report, which it said was its standard fee.
http://www.climatechangedispatch.com/planet-overheating-not-according-to-historical-records.html
# # #
Now sit back and watch the sparks fly. This media attention must be anathema to warmies.
Too much fun.
Problem for the warmists. The rebuttal to Karl et al has the following author list:
John C. Fyfe,
Gerald A. Meehl,
Matthew H. England,
Michael E. Mann,
Benjamin D. Santer,
Gregory M. Flato,
Ed Hawkins,
Nathan P. Gillett,
Shang-Ping Xie,
Yu Kosaka and
Neil C. Swart
How is this even possible? See:
Nature: Making sense of the early 2000’s warming slowdown
Posted on February 24, 2016 | by Judith Curry
https://judithcurry.com/2016/02/24/nature-making-sense-of-the-early-2000s-warming-slowdown/#more-21204
It gets worse. JC quotes Ed Hawkins’ blog post:
Youza!
For the record, much chortling in the Guardian from Nuccitelli and Schmidt, Betts and Hawkins (the latter a co-author of Fyfe et al, just upthread and see below) re the Mills/GWPF statistical forecast model (see just upthread).
Gavin’s mirth (see Figures 5 & 6):
This in respect of the 2015 El Nino spike. Schmidt’s own GISTEMP monthly shows the peak has passed and current temps are well on the way back into the predicted range. The mirth will be short-lived, in other words.
Schmidt claimed all but 0.07°C of the El Nino spike for AGW (see upthread and the ‘2015 warmest’ post), so he’ll have problems of his own very soon. Expect some back-pedalling.
Nuccitelli says in respect to the Figure at bottom of page:
Except the “physics-based climate models” he refers to from Mann et al are NOT the actual global climate models. The Figure caption is “Global mean temperature series (red) along with five different Monte Carlo surrogates based on forced signal + ARMA noise realizations (gray) using CMIP5 all-forcing experiments. Photograph: Mann et al. (2016)”
Mann et al (2016) is this paper:
‘The Likelihood of Recent Record Warmth’
Michael E. Mann, Stefan Rahmstorf, Byron A. Steinman, Martin Tingley & Sonya K. Miller
http://www.nature.com/articles/srep19831
Although their statistical ARMA-noise models use “CMIP5 all-forcing”, that is NOT the actual global climate model profile. So for the Nuccitelli article’s purposes (and Mann et al’s), the GCMs are thrown under a bus. Also, the Mann et al statistical models will not forecast the 2015 El Nino spike either.
Another current and very topical paper, Fyfe et al (including Mann as co-author), does compare GCM trends to observation trends (see below) and it is obvious the models are running too warm. This is a major point of the Fyfe et al paper. Judith Curry features Fyfe et al here (well worth reading):
‘Nature: Making sense of the early 2000’s warming slowdown’ [Curry]
https://judithcurry.com/2016/02/24/nature-making-sense-of-the-early-2000s-warming-slowdown/
JC links to Ed Hawkins’ blog re Fyfe et al here:
‘Making sense of the early-2000s warming slowdown’ [Hawkins]
http://www.climate-lab-book.ac.uk/2016/making-sense/#comment-2008
Perfectly obvious from Figure 1 (d & f) that the GCM CMIP5 30 and 50 yr model trends are too warm.
Nuccitelli’s Guardian article will be a very useful “bring-up” over the next 4-5 years to 2020. Fyfe et al is already being cited in the comment thread – that must hurt.
>”For the record, much chortling in the Guardian from Nuccitelli and Schmidt, Betts and Hawkins ……….. re the Mills/GWPF statistical forecast model”
Forgot the link to the Nuccitelli/Guardian article:
‘Lord Lawson thinktank’s report ignores everything we know about climate science’
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2016/feb/29/think-tank-throws-out-centuries-of-physics-climate-scientists-laugh-conservative-media-fawns
‘Now Even Michael Mann Admits The ‘Pause’ In Global Warming Is Real; Throws Allies To Wolves’
By James Delingpole, 28 Feb 2016
The “Pause” in global warming is real – not an urban myth concocted by evil ‘deniers’ – a study [Fyfe et al – see link below] has found, signalling the development of a major schism within the climate alarmist camp.
Though the paper’s findings are not controversial – few serious scientists dispute the evidence of the temperature datasets showing that there has been little if any global warming for nearly 19 years – they represent a tremendous blow to the climate alarmist “consensus”, which has long sought to deny the “Pause’s” existence.
First, the study was published in Nature Climate Change a fervently alarmist journal which rarely if ever runs papers that cast doubt on the man-made-global-warming scare narrative.
Secondly, it directly contradicts a widely-reported study produced by the National Oceanic and Atmospheric Administration (NOAA) last year which attempted to deny the existence of the “Pause” (also known as the “hiatus”). This NOAA study was widely mocked, quickly debunked and is now the subject of a Congressional investigation by Rep. Lamar Smith (R-TX). What’s novel about this new study in Nature Climate Change, though, is that it’s not skeptics and Republicans doing the mocking and the debunking: it’s the kind of people who in the past were very much in the alarmist camp, including – bizarrely – none other than Michael “Hockey Stick” Mann, who co-authored the paper.
What we have here, in other words, is signs of a major rift within the climate alarmist camp with different factions adopting different tactics to cope with the failure of their collapsing narrative.
On one side are people like Thomas Karl and Thomas Petersen, the hapless NOAA scientists given the unenviable task of producing that risible paper last year which did its best to deny that the Pause was a thing.
On the other are what might be called the “rats deserting the sinking ship” faction who have produced this new paper for Nature Climate Change, in which finally they concede what skeptics have been saying for many years: that there has been no “global warming” since 1998.
This divergence in the alarmist camp is now going to create a dilemma for all those liberal media outlets – from the BBC to the Guardian to the LA Times – which reported on NOAA’s “death of the pause” study as if it were a reliable and credible thing.
Are they now going to report on the counter-narrative? Or are they going to ignore it and hope no one notices?
The man who would like more than anyone to know the answer to this question is David Whitehouse, Science Editor of the Global Warming Policy Foundation and a former science editor at the BBC (till the point when his skepticism became too much for his employer).
That’s because in 2007, he was one of the first scientists to draw attention to the mysterious slowdown in global warming.
As he recalls in the Spectator:
Whitehouse is too polite to name the alarmist shills and activist attack dogs who have fought so hard over the years to discredit anyone who has dared suggest the existence of a Pause. So I will. But in a separate article. It seems to me that these people are so disgusting, corrupt, nauseating and malign that they shouldn’t simply be tacked on to the end of a news story. They should be made to perform the internet equivalent of Cersei’s Walk of Shame; or, at the very least, to be put in the stocks and pelted with excrement.
In the meantime let us all draw comfort from the fact that a) the alarmists are finally being forced to concede that their skeptic adversaries are right and b) that they are starting to turn on one another. This is the beginning of the end for the alarmist “consensus”. And not before time.
http://www.breitbart.com/big-government/2016/02/28/study-the-pause-in-global-warming-is-real/
‘Making sense of the early-2000s warming slowdown’
John C. Fyfe, Gerald A. Meehl, Matthew H. England, Michael E. Mann, Benjamin D. Santer, Gregory M. Flato, Ed Hawkins, Nathan P. Gillett, Shang-Ping Xie, Yu Kosaka & Neil C. Swart
http://www.nature.com/nclimate/journal/v6/n3/pdf/nclimate2938.pdf
‘Hide The Hiatus: Global News Media Ignore Inconvenient Research Findings’
Date: 25/02/16 Global Warming Policy Forum
A new paper published yesterday in Nature Climate Change [Fyfe et al above] confirms that the so-called global warming hiatus between 1997 and 2014 was real and that claims that it was overstated or never existed are untrue:
Figure: ‘Warming Slowdown’. Annual mean and global mean surface temperature anomalies [vs GCM model mean]; source Nature Climate Change, 24 Feb 2016. Anomalies are from three updated observational datasets [refs 3–5] and the ensemble mean (black curve) and 10–90% range (darker grey shading) GMST of 124 simulations from 41 CMIP-5 models using RCP4.5 extensions from 2005 [ref. 28].
http://www.nature.com/nclimate/journal/v6/n3/carousel/nclimate2938-f1.jpg
Most of the world’s major news outlets that reported last year that the warming hiatus never existed have so far ignored the new findings. It will be interesting to see how long they will keep mum.
[See headlines]
BBC NEWS
The Guardian
Reuters
Los Angeles Times
The Sydney Morning Herald
The Wall Street Journal
http://www.thegwpf.com/hide-the-hiatus-global-news-media-ignore-inconvenient-truth/
>”On one side are people like Thomas Karl and Thomas Petersen,”
And Gavin Schmidt.
>”On the other are what might be called the “rats deserting the sinking ship” faction”
John C. Fyfe, Gerald A. Meehl, Matthew H. England, Michael E. Mann, Benjamin D. Santer, Gregory M. Flato, Ed Hawkins, Nathan P. Gillett, Shang-Ping Xie, Yu Kosaka & Neil C. Swart
Except the rats have only jumped onto some floating debris; they haven’t mustered the courage to make for dry land just yet.
>”On one side are people like Thomas Karl and Thomas Petersen,” [And Gavin Schmidt]
And Adam Scaife and Peter Stott of the UK Met Office (see link), and Grant Foster (in the same CCG thread at the link).
Quotes from an article called “Analysis: How 2015 became the hottest year on record” from CarbonBrief.org
https://www.climateconversation.org.nz/2016/01/hottest-year-ever-was-2015/#comment-1422089
And Stefan Rahmstorf:
https://www.climateconversation.org.nz/2016/01/hottest-year-ever-was-2015/#comment-1430823
So the “sinking ship” faction is at least:
Thomas Karl, Thomas Petersen, Gavin Schmidt, Adam Scaife, Peter Stott, Stefan Rahmstorf and Grant Foster.
And the “rats” faction is at least:
John C. Fyfe, Gerald A. Meehl, Matthew H. England, Michael E. Mann, Benjamin D. Santer, Gregory M. Flato, Ed Hawkins, Nathan P. Gillett, Shang-Ping Xie, Yu Kosaka & Neil C. Swart
This is fun.
>”So the “sinking ship” faction is at least: Thomas Karl, Thomas Petersen, Gavin Schmidt, Adam Scaife, Peter Stott, Stefan Rahmstorf and Grant Foster.”
The Karl et al (2015) author list is as follows:
‘Possible artifacts of data biases in the recent global surface warming hiatus’
Thomas R. Karl (1,*), Anthony Arguez (1), Boyin Huang (1), Jay H. Lawrimore (1), James R. McMahon (2), Matthew J. Menne (1), Thomas C. Peterson (1), Russell S. Vose (1), Huai-Min Zhang (1)
– Author Affiliations
1. National Oceanographic and Atmospheric Administration (NOAA), National Centers for Environmental Information (NCEI), Asheville, NC 28801, USA.
2. LMI, McLean, VA, USA.
http://science.sciencemag.org/content/348/6242/1469.full
I wonder how Schmidt and Rahmstorf will spin this.
GISTEMP February Land anomaly has shot up again after dropping in January but the Ocean anomaly is down. From this GISS page:
Global Temperature — More Figures
http://www.columbia.edu/~mhs119/Temperature/T_moreFigs/
Land
2015.125 1.54
2015.208 1.52
2015.292 0.88
2015.375 1.03
2015.458 1.08
2015.542 0.87
2015.625 1.01
2015.708 1.07
2015.792 1.45
2015.875 1.36
2015.958 1.79
2016.042 1.54
2016.125 2.37 [Feb]
http://www.columbia.edu/~mhs119/Temperature/T_moreFigs/Fig.A4L.txt
Ocean
2015.125 0.52
2015.208 0.57
2015.292 0.63
2015.375 0.68
2015.458 0.68
2015.542 0.71
2015.625 0.72
2015.708 0.76
2015.792 0.81
2015.875 0.80
2015.958 0.77
2016.042 0.80
2016.125 0.74 [Feb]
http://www.columbia.edu/~mhs119/Temperature/T_moreFigs/Fig.A4OERv4.txt
Schmidt claimed all but “0.07C” of the 2015 Land+Ocean spike for AGW and Rahmstorf “more than 80%”.
Problem is, the Feb 2016 Land anomaly is now 0.58°C higher than the Dec 2015 Land anomaly that made news headlines (2.37 – 1.79 = 0.58), and 1.36°C higher than only 6 months previously (2015.625: 1.01).
2.37 (Land) is far and away the most spectacular anomaly in the GISTEMP dataset, and more than 0.5°C above what Schmidt claimed for AGW. So what to do? Double down and claim Feb 2016 too?
I’ve yet to see a news headline on this.
‘Alarmism Cranked Up to Absurd Level’
Bob Tisdale / March 16, 2016
http://wattsupwiththat.com/2016/03/16/alarmism-cranked-up-to-absurd-level/
Some good graphs showing how the climate clowns are hyperventilating about a natural, and Northern Hemisphere, phenomenon. For example:
GISTEMP LOTI Northern Hemisphere and Southern Hemisphere
https://bobtisdale.files.wordpress.com/2016/03/figure-8.png
Model-Data Comparison of 360-Month Trends (Trailing) [GISS LOTI]
https://bobtisdale.files.wordpress.com/2016/03/figure-113.png
And for chuckles, Tamino’s “adjusted” series starting in 1950 (why not 1880?):
Grant Foster’s “adjusted” GISTEMP LOTI
https://bobtisdale.files.wordpress.com/2016/03/figure-4-tamino-giss-loti-adjusted-by-model.jpg
Oddly, the 2015/16 El Nino spike remains in its entirety. As Tisdale puts it:
What is amusing is that, irrespective of the imposed trend curve (basically just 2 linear trends – look closely), the 21st century ‘hiatus’ is clearly evident from the mid-2000s onwards. All Tamino’s “adjustment” does is move the pause along in time. This would be even clearer if Foster had actually removed the 2015/16 uptick. In other words, his trend line is missing a 3rd linear segment from about 2005 onward.
Clearly, from 2010 onwards, the bulk of the data is BELOW Foster’s linear trendline.
2010 is significant here because 2010 was where the Foster & Rahmstorf (2011) residual passed through the observations, i.e. they “removed” very little of that spike too, so the chances of Foster removing the 2015/16 spike were always going to be slim.
Foster and Rahmstorf (2011)
http://iopscience.iop.org/article/10.1088/1748-9326/6/4/044022/meta;jsessionid=EBB7865D97EA3ED94514592C4F888189.c5.iopscience.cld.iop.org
Figure 4. Adjusted data sets for all five sources, after removing the estimated influence of el Niño, volcanic eruptions and solar variations.
http://iopscience.iop.org/1748-9326/6/4/044022/downloadFigure/figure/erl408263f4
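At heart, the F&R-style adjustment is a multiple regression of temperature on ENSO, volcanic and solar indices plus a linear trend, with the fitted natural terms subtracted out. A bare-bones sketch of that idea (all input filenames are hypothetical stand-ins, and the real paper also fits lags for each factor):

```python
import numpy as np

# Monthly temperature plus three natural-factor indices aligned on the same
# dates. All four filenames are hypothetical stand-ins (MEI for ENSO, AOD for
# volcanic aerosols, TSI for solar); the real method also fits optimal lags.
t, temp = np.loadtxt("gistemp_monthly.txt", usecols=(0, 1), unpack=True)
mei = np.loadtxt("mei_monthly.txt", usecols=(1,))
aod = np.loadtxt("aod_monthly.txt", usecols=(1,))
tsi = np.loadtxt("tsi_monthly.txt", usecols=(1,))

# Design matrix: intercept, linear trend, and the three natural factors.
X = np.column_stack([np.ones_like(t), t - t[0], mei, aod, tsi])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

# "Adjusted" series: subtract only the fitted natural-factor contributions,
# leaving the linear trend plus residuals.
adjusted = temp - X[:, 2:] @ beta[2:]
print(f"Fitted trend: {beta[1] * 10:+.3f} deg C per decade")
```

Note the built-in assumption: whatever the natural factors fail to explain is, by construction, left in the “adjusted” series along with the imposed linear trend.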
Obviously, Foster’s NEW “model” is different from the OLD F&R2011 residual in respect of GISTEMP. Note how the 2010 El Nino spike is prominent (warmest) in the OLD residual (GISTEMP, blue line) but is not a feature of the NEW “model”.
Even by the publication date of F&R2011 it was obvious that Foster would need a NEW model, because adding the then-neutral data would make his residual look silly, i.e. the neutral data was flatlining and the pause/hiatus was back – embarrassing. Even a massive spike, as did occur, was not going to help; no, a NEW “model” was required.
So now Foster & Rahmstorf (2011) is thrown under a bus by Grant Foster himself, no less. Not much courage in his convictions.
Much more on the 2015/16 El Nino spike in “Hottest year ever was 2015”:
https://www.climateconversation.org.nz/2016/01/hottest-year-ever-was-2015/comment-page-1/#comment-1445980
Tamino’s post:
‘Surprise, but not Shock’ – Tamino (Grant Foster)
https://tamino.wordpress.com/2016/03/13/surprise-but-not-shock/
Advanced signal analysis is rather more revealing than Foster’s inept (and therefore non-contiguous) efforts.
For example:
‘Application of the Singular Spectrum Analysis Technique to Study the Recent Hiatus on the Global Surface Temperature Record’. Diego Macias, Adolf Stips, Elisa Garcia-Gorriz. Published: September 10, 2014
DOI: 10.1371/journal.pone.0107222
Provides this:
Analysis of Global Temperature Anomalies
https://images.sciencedaily.com/2014/09/140911092905_1_900x600.jpg
Compare the residual (red line) to Foster’s (latest) residual:
Foster’s March 2016 ‘adjusted’ GISTEMP
https://tamino.files.wordpress.com/2016/03/nasa_adj.jpeg
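For anyone curious what an SSA decomposition actually involves, here is a minimal sketch in Python/NumPy. It runs on a synthetic series (I’m not bundling a GISTEMP download here), and the window length and the grouping of components are the analyst’s choices, so treat it as an illustration of the technique rather than a reproduction of Macias et al:

    import numpy as np

    def ssa_components(x, L, rank=4):
        # Basic SSA: embed series x in an L-window Hankel (trajectory)
        # matrix, take the SVD, and rebuild the leading `rank` components
        # by averaging over anti-diagonals.
        N = len(x)
        K = N - L + 1
        X = np.column_stack([x[i:i + L] for i in range(K)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        comps = []
        for k in range(min(rank, len(s))):
            Xk = s[k] * np.outer(U[:, k], Vt[k])
            rec = np.zeros(N)
            cnt = np.zeros(N)
            for j in range(K):            # diagonal averaging
                rec[j:j + L] += Xk[:, j]
                cnt[j:j + L] += 1
            comps.append(rec / cnt)
        return comps

    # synthetic stand-in for a monthly anomaly series:
    # secular trend + a ~65-yr oscillation + noise
    rng = np.random.default_rng(0)
    t = np.arange(1880, 2016, 1 / 12.0)
    x = 0.006 * (t - 1880) + 0.12 * np.sin(2 * np.pi * (t - 1880) / 65) \
        + 0.1 * rng.standard_normal(t.size)

    comps = ssa_components(x, L=360)   # 30-yr window on monthly data
    secular = comps[0] + comps[1]      # leading pair ~ the smooth residual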
Foster, in his ignorance, will be out of options around 2020. His blind belief that the data trajectory is his linear residual will be undone in the intervening time: firstly by a return to neutral conditions after the El Nino; secondly by a very probable La Nina; thirdly by a return to neutral after the La Nina. All of which will be BELOW his residual trendline, just as the post-2010 data was BELOW his F&R2011 residual trajectory.
Foster dug himself into a hole. Now he’s tying himself in knots. Schadenfreude is fun.
My comment on Tamino’s latest residual at WUWT:
http://wattsupwiththat.com/2016/03/16/alarmism-cranked-up-to-absurd-level/comment-page-1/#comment-2167804
HadSST3 Northern Hemisphere vs Southern Hemisphere
From 1990, 5 yr mean, trends from 1995
http://woodfortrees.org/plot/hadsst3sh/mean:60/from:1990/plot/hadsst3nh/mean:60/from:1990/plot/hadsst3sh/from:1995/mean:60/trend/plot/hadsst3nh/from:1995/mean:60/trend
Huge divergence in NH SST from 2000 onwards but SH essentially flat.
Hence the illusion that 21st century warming is “global” and “shocking”.
Including GISTEMP LOTI with HadSST3 NH vs SH is instructive:
http://woodfortrees.org/plot/hadsst3nh/mean:60/from:1990/plot/hadsst3sh/mean:60/from:1990/plot/hadsst3nh/from:1995/mean:60/trend/plot/hadsst3sh/from:1995/mean:60/trend/plot/gistemp/from:1990/mean:60/offset:-0.2/plot/gistemp/from:1995/mean:60/trend/offset:-0.2
GISTEMP LOTI is basically just tracking NH SST. The illusion is complete.
Add HadCRUT4 and HadSST3 global means to the above graph and it becomes perfectly obvious that GMST is skewed by NH Land:
http://woodfortrees.org/plot/hadsst3nh/mean:60/from:1990/plot/hadsst3sh/mean:60/from:1990/plot/hadsst3nh/from:1995/mean:60/trend/plot/hadsst3sh/from:1995/mean:60/trend/plot/gistemp/from:1990/mean:60/offset:-0.2/plot/gistemp/from:1995/mean:60/trend/offset:-0.2/plot/hadcrut4gl/from:1990/mean:60/offset:-0.07/plot/hadsst3gl/from:1990/mean:60
Should be:
“Add HadCRUT4 and HadSST3 global means to the above graph and it becomes perfectly obvious that GMST is skewed by NH Land [this century]”
I don’t know why this is the case. I suppose that because NH SST warmed, NH land warmed also. And there’s more land in the NH than SH.
Conversely, SH SST didn’t warm so SH land didn’t warm either. And there’s less land in the SH than NH.
Goes to show that a globally averaged Land+Ocean metric is a complete misrepresentation of the earth’s surface temperature. The “global record” 2015 anomaly does not show up at all in the SH metric.
The globally averaged Land+Ocean GISS LOTI, HadCRUT4, and NCDC/NCEI metrics are just a useless illusion – except for ideological purposes.
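The WFT plots above are easy to replicate offline if you want to poke at the numbers. A rough Python sketch; the filenames are hypothetical and assume you’ve saved the monthly HadSST3 NH and SH series as two-column (decimal year, anomaly) text files:

    import numpy as np

    def load_series(path):
        data = np.loadtxt(path)           # col 0: decimal year, col 1: anomaly
        return data[:, 0], data[:, 1]

    def running_mean(y, w=60):
        # centred 60-month (5-yr) mean, as in the WFT "mean:60" curves
        return np.convolve(y, np.ones(w) / w, mode="valid")

    def trend_per_decade(t, y, start):
        m = t >= start
        return np.polyfit(t[m], y[m], 1)[0] * 10.0

    t_nh, nh = load_series("hadsst3_nh.txt")    # hypothetical filenames
    t_sh, sh = load_series("hadsst3_sh.txt")
    nh_smooth = running_mean(nh)                # the smoothed curves WFT plots
    print("NH from 1995: %+.3f C/decade" % trend_per_decade(t_nh, nh, 1995.0))
    print("SH from 1995: %+.3f C/decade" % trend_per_decade(t_sh, sh, 1995.0))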
Richard,
You seem to have trouble grasping the concept of mean or average. The existence of data points below the mean does not invalidate the central estimate.
Global climate models predict faster warming in northern boreal regions, which is exactly what is happening.
Simon
>”Richard, You seem to have trouble grasping the concept of mean or average. The existence of data points below the mean does not invalidate the central estimate.”
It does at the end of a series if the series isn’t linear in nature, i.e. where a linear trend is statistically inappropriate. For example, Wellington tide gauge data (PSMSL) is linear in nature over the entire series in the long term, so it is reasonable to extrapolate the data even though there is some intermediate fluctuation. So yes, you are correct in that case, Simon. But if the data is NOT linear in nature, you’re dead wrong.
Grant Foster and Stefan Rahmstorf were caught out in this way by data subsequent to 2010 in their 2011 paper; hence Foster’s NEW model of residuals, which differs from the OLD.
I’m assuming you are alluding to my critique of Foster & Rahmstorf (2011) and Tamino’s post because you don’t actually refer to anything in particular. Correct me if I misconstrue, it’s hard to know what you’re on about exactly if you don’t explain yourself.
Clearly temperature data is not linear in nature. The secular trend certainly is not linear over the observational era, and there is an oscillatory overlay which is basically a trendless sine wave. But ONLY in NH data. Comments upthread demonstrate that GMST is skewed by NH temperature over land. The same NH characteristics are NOT evident in the Southern Hemisphere.
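To make the point concrete, here is a quick check of whether a linear fit is appropriate, run on a synthetic series (a trend plus a trendless oscillation, roughly the NH structure described above; the numbers are invented). If the series really were linear-plus-noise, the end-of-series residuals would straddle zero; if they sit on one side, the linear “central estimate” is the wrong model:

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(1880, 2016, 1 / 12.0)
    y = 0.005 * (t - 1880) + 0.1 * np.sin(2 * np.pi * (t - 1880) / 65) \
        + 0.05 * rng.standard_normal(t.size)

    resid = y - np.polyval(np.polyfit(t, y, 1), t)
    last_decade = resid[t >= t[-1] - 10]
    print("fraction of final decade below the trendline: %.2f"
          % (last_decade < 0).mean())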
>”Global climate models predict faster warming in northern boreal regions, which is exactly what is happening.”
Wonderful, Simon. Except climate models also predict faster warming in both the Northern and Southern Hemispheres’ sea surface temperatures. But that is exactly what is NOT happening:
Sea Surface Temperature: Climate Models vs Observations, NH & SH (see graphs)
http://www.c3headlines.com/2011/12/the-utter-failure-of-ipcc-climate-models-prediction-of-ocean-global-warming-is-documented.html
Land is the minor component of GMAST. Sea surface is by far the predominant component, e.g. in GISTEMP and HadCRUT4. The GL-NH-SH SST split upthread (HadSST3 at Wood For Trees) clearly shows GL midway between NH and SH, and essentially no warming in the SH post 2000. All datasets, however, exhibit an abrupt shift at 1995. The post-2000 warming is in the NH only. Addition of Land to GMAST just skews the metric along the NH SST track (see graph below).
The most recent “record warm” spike has not been a detectable feature in the SH. See the GISTEMP SH Extratropics data: the 2015 anomaly was fractionally LESS than 1980. In other words, “global” warming post 2000 is a NH-only phenomenon. Not so in 1998.
Here’s the WFT graph again:
http://woodfortrees.org/plot/hadsst3nh/mean:60/from:1990/plot/hadsst3sh/mean:60/from:1990/plot/hadsst3nh/from:1995/mean:60/trend/plot/hadsst3sh/from:1995/mean:60/trend/plot/gistemp/from:1990/mean:60/offset:-0.2/plot/gistemp/from:1995/mean:60/trend/offset:-0.2/plot/hadcrut4gl/from:1990/mean:60/offset:-0.07/plot/hadsst3gl/from:1990/mean:60
Before you object to the offsets: I’ve aligned the GMAST LOTI datasets with SST at the start of the series simply to demonstrate that the profiles track NH SST after diverging at 2000. If the NH land component were not the factor that skews GMST, then GMST would track HadSST3 GL midway between SST NH and SST SH. But both GISTEMP and HadCRUT4 track HadSST3 NH.
It’s the same with the globally averaged OHC metric. You are effectively just looking at Indian Ocean OHC.
Same with satellite MSL. You are effectively just looking at localized skew. See this paper:
‘Is anthropogenic sea level fingerprint already detectable in the Pacific Ocean?’
H. Palanisamy, B. Meyssignac, A. Cazenave and T. Delcroix (2015)
http://iopscience.iop.org/article/10.1088/1748-9326/10/8/084024/pdf
The short answer is “no”. If a signal is ever to be detected we will have to wait decades. Even then, it is clear from long-running tide gauge trend analysis (see the 50 yr analysis at NOAA Tides and Currents) that rates of sea level rise were far greater back in the mid 20th century than they are today, i.e. for an anthro signal to emerge, it must emerge out of past variation too.
But the main point re the paper is that maybe more than a third of the Pacific as measured by satellite exhibits sea level FALL over 1993 – 2013 in Figure 1(a)/(b), much of it statistically significant. It is the relatively small areas of marked rise that skew the metric.
So now the question is: is the satellite MSL data for the Pacific essentially linear or not on a century scale if part is rising and the other part falling over 20 yrs?
In short, a global average is meaningless. And the man-made climate change conjecture is even more meaningless in regional breakdowns. And trends MUST be appropriate and relevant; linear can be either appropriate or erroneous.
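How relatively small areas of marked rise can skew an area-weighted average takes one line to show. A toy calculation (the numbers are invented for illustration, not taken from the paper):

    import numpy as np

    # invented regional breakdown: area fraction vs trend (mm/yr)
    area = np.array([0.35, 0.50, 0.15])    # falling, near-flat, strongly rising
    rate = np.array([-1.0,  0.5,  8.0])
    print("area-weighted mean: %+.2f mm/yr" % np.dot(area, rate))
    # +1.10 mm/yr of "global rise" even though 35% of the area is falling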
Steve Sherwood and Stefan Rahmstorf at Hot Topic:
This is dead wrong. The hotlink is to this Rahmstorf Tweet:
https://twitter.com/rahmstorf/status/698380997222510592
Which contains this graph:
https://pbs.twimg.com/media/CbElthfUAAA1DVS.jpg
Graph looks dodgy. Reason becomes clear by following the link in the Tweet to this paper:
‘The Likelihood of Recent Record Warmth’
Michael E. Mann, Stefan Rahmstorf, Byron A. Steinman, Martin Tingley & Sonya K. Miller (2015)
http://www.nature.com/articles/srep19831
Yes, you are looking at GISTEMP. No, you are not looking at CMIP5 model output. You are looking at “adjusted” model output, i.e. a residual.
The giveaway is this:
It is totally unnecessary to make recourse to climate models in order to eliminate “noise” from GMST, e.g. minor fluctuations, ENSO activity, MDV. But Mann et al left GMST as-is and instead “adjusted” the model mean. This is bogus.
The model mean is ENSO-neutral and MDV-neutral. Therefore the model mean should have been left as-is and GMST “adjusted”, i.e. a GMST residual, NOT a model-mean residual. The model mean already IS a residual. The MDV-neutral GMST residual (the “spline”) is defined by the nominally neutral dates from the Macias et al figure as posted at WUWT (see link upthread and below):
MDV-neutral spline in GMST: 1865 – 1895 – 1925 – 1955 – 1985 – 2015 – 2045
To which we compare the MDV-neutral model mean (NOT the “adjusted” model mean by Mann et al via Rahmstorf):
https://financialpostbusiness.files.wordpress.com/2014/06/fe0617_climate_c_mf.jpeg?quality=60&strip=all
From my WUWT comment:
The models do a good job but only up until 1955. After that the unadjusted CMIP5 MDV-neutral ENSO-neutral model mean SHOULD pass through MDV-neutral ENSO-neutral 1985 and 2015 observations (the GMST spline), clearly it doesn’t.
Sherwood, Mann, Rahmstorf, Steinman, Tingley & Miller are dead wrong.
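For what it’s worth, the comparison is mechanical: sample the smoothed series at the nominally neutral dates and lay the model mean alongside. A sketch with placeholder values (the real numbers would be read off the Macias et al residual and the CMIP5 mean, which I haven’t transcribed here):

    import numpy as np

    neutral_years = np.array([1865, 1895, 1925, 1955, 1985, 2015])
    # placeholder MDV/ENSO-neutral GMST anomalies (C) - illustrative only
    gmst_neutral  = np.array([-0.30, -0.28, -0.15, 0.00, 0.22, 0.50])
    # placeholder unadjusted CMIP5 model mean at the same dates
    model_neutral = np.array([-0.30, -0.27, -0.16, 0.00, 0.35, 0.80])

    for yr, g, m in zip(neutral_years, gmst_neutral, model_neutral):
        print(yr, "model - obs = %+.2f" % (m - g))
    # the claim above: the gap only opens after 1955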
‘Climate Models Are NOT Simulating Earth’s Climate – Part 2’
Bob Tisdale / February 12, 2016
http://wattsupwiththat.com/2016/02/12/climate-models-are-not-simulating-earths-climate-part-2/
Extratropical Southern Hemisphere SST Model-Data Comparison
https://bobtisdale.files.wordpress.com/2016/02/figure-7-extratrop-s-hem.png
South Pacific SST Model-Data Comparison
https://bobtisdale.files.wordpress.com/2016/02/figure-18-s-pacific.png
Southern Ocean SST Model-Data Comparison
https://bobtisdale.files.wordpress.com/2016/02/figure-21-southern.png
Southern Ocean Trends:
+0.043 C/decade Model Warming
-0.061 C/decade Data Cooling
ALL absolute model temperatures are too warm by 0.8, 0.6, and 1.0 C respectively.
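Those two Southern Ocean trends compound, and the divergence is straight arithmetic:

    model = +0.043   # C/decade, CMIP5 Southern Ocean warming
    data  = -0.061   # C/decade, observed cooling
    gap = model - data
    print("divergence: %.3f C/decade = %.2f C/century" % (gap, gap * 10))
    # 0.104 C/decade, i.e. more than 1 C of model-data divergence per century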
‘The Sun in February 2016 and the latest heat records’
By Frank Bosse and Fritz Vahrenholt / March 20, 2016
http://wattsupwiththat.com/2016/03/20/the-sun-in-february-2016-and-the-latest-heat-records/
1) The 2015 cooling down to 500m in the IPWR (Figure 6) is by far the most cooling in the ARGO era.
2) The heat that was in the IPWR is now in the troposphere.
3) “Heat trapping greenhouse gasses” will not “trap” the heat in the troposphere, from there it will dissipate to space in accordance with the Kelvin-Planck statement of the Second Law of Thermodynamics.
4) The IPWR will no longer support tropospheric temperature since it has cooled radically, i.e. after tropospheric warming in the NH (where the spike occurred) there will be tropospheric cooling. Possibly radical La Nina cooling, but not from the IPWR, which warms in a La Nina.
5) It is all a natural process of energy accumulation and dissipation.
6) Attempts to conflate this process with AGW are thermodynamically illiterate. The tropospheric “warming”, however radical it may be, is actually part of a dissipatory COOLING process.
‘Concern as hot February shatters global records’
The story is the same around the world: the scourge of global warming is back again with a vengeance
Robin McKie, Published 20/03/2016
http://www.independent.ie/irish-news/news/concern-as-hot-february-shatters-global-records-34555901.html
Wrong for starters. No Southern Hemisphere warming spike:
GISTEMP: Annual Mean Temperature Change for Three Latitude Bands
http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.B.gif
There was a spike in SH SST but that’s on the way back down:
http://woodfortrees.org/plot/hadsst3sh/from:1990
The SH SST trend from 1997 is negligible even with only the warm side of the El Nino data in, i.e. the “hiatus” is still there in the SH SST data:
http://woodfortrees.org/plot/hadsst3sh/from:1990/plot/hadsst3sh/from:1997/trend/plot/hadcrut4sh/from:1990/trend
It gets worse:
Where did the “catch-up” heat in the air come from? The sea, which cooled.
Where will the “catch-up” heat in the air go to? To space, the air will cool.
I wonder what these idiots will be saying when neutral conditions return at the end of the El Nino – La Nina sequence?
‘The Hunt For Global Warming: Southern Hemisphere Summary’
Posted on April 13, 2015 by Euan Mearns
In recent months I’ve had a series of posts looking at the temperature histories of a number of land areas in the Southern Hemisphere [1, 2, 3, 4, 5]. This was in response to a post by Roger Andrews where he presented an analysis of about 300 climate stations from the Southern Hemisphere that, when combined, showed substantially less warming than the reconstructions presented by various groups (BEST, GISS, HadCRUT) [6]. I found this to be both intriguing and important and wanted to see if I could replicate Roger’s result.
Figure 1: 174 Stations Southern Hemisphere, Average Temperature Anomalies, GHCN V2
1880 – 2011 +0.23 C, +0.18 C/century
Figure 1 Note that this chart has an expanded Y-axis scale and the grid lines are at 0.1˚C intervals. A regression through all the data using station average as the base indicates warming of +0.18˚C per century, i.e. close to zero. The black trend lines are parallel to the regression and show there are rising tops and bottoms in an overall slowly rising trend. The alternative view is a flat trend from 1880 to the mid-1970s with a step change to warmer temperatures across the mid-1970s cold snap.
http://www.euanmearns.com/wp-content/uploads/2015/04/Shemis-station-aver.png
# The average time-temperature anomaly series for 174 climate stations from New Zealand, Central Australia [1], Southern Africa [2,3], Patagonia [4], Paraguay and Antarctica [5] are presented in Figure 1. A simple regression through the data with no weighting indicates warming of +0.23˚C since 1880 equivalent to +0.18˚C per century. This is substantially less than S hemisphere land temperature reconstructions reported by BEST, GISS and HadCRUT.
# Comparing with Roger Andrews’ reconstruction the difference is less than 0.1˚C. I have managed to replicate Roger’s result.
# My default method is to use each station’s average temperature as a base for calculating anomalies. All anomalies have also been calculated using a fixed base period of 1963 to 1992. Doing so makes no material difference. Warming is reduced to +0.21˚C since 1880 using the fixed base.
# Area weighting the results produces a trend of +0.19˚C warming since 1880. Area weighting lends substantial weight to Antarctica (only 14 records), where the main data series begin in 1954, and possible methodological problems are identified with lending 55% weight to only 14 stations whose records begin at the midpoint of the time series.
# The mid-1970s stand out as an anomalous cool period seen in records throughout Central Australia and Southern Africa. It was also cool, although not anomalously so, in S America, Antarctica and New Zealand at this time.
# The structure of the data is one of cyclical warming to circa 1914 followed by cyclical cooling to the mid-1970s followed by cyclical warming to the present day. There is no evidence for warming linked to the 1998 el nino in these data. Nor is there evidence for a pause in warming in the Southern Hemisphere since 1998.
# The data presented here are not intended to provide full cover of the Southern Hemisphere. Coverage will be extended at some point. But there is probably sufficient cover to be representative of the southern hemisphere land and surrounding marine conditions. There is scant evidence for significant warming in these data suggesting global warming is confined mainly to the Northern Hemisphere.
Continues >>>>>>>
http://euanmearns.com/the-hunt-for-global-warming-southern-hemisphere-summary/
# # #
This is an exhaustive study including Data, Normalisation, Area Weighting, and the following:
Results
Comparison With Other Reconstructions: CRUTEM4, BEST, GISS, UAH
Appendix A New Zealand
[15 stations from 1880, no homogenization as per NIWA’s VCSN]
Appendix B Paraguay and surrounding areas
Gee, +0.18 C/century for Southern Hemisphere land temperature since 1880. Scary.
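Mearns’ two anomaly baselines (station average vs fixed 1963 to 1992) are worth understanding, since the choice barely moves the answer. A minimal sketch of both methods in Python (the function names and array layout are mine, not his code):

    import numpy as np

    def station_anomalies(temps, years, base=None):
        # temps: annual means for one station (NaN for gaps).
        # base=None: the station's own average (Mearns' default method).
        # base=(y0, y1): a fixed reference period, e.g. (1963, 1992).
        if base is None:
            ref = np.nanmean(temps)
        else:
            sel = (years >= base[0]) & (years <= base[1])
            ref = np.nanmean(temps[sel])
        return temps - ref

    def combine(anom_list):
        # unweighted average across stations, ignoring missing years
        return np.nanmean(np.vstack(anom_list), axis=0)

    # e.g. combine([station_anomalies(t, yrs) for t in station_temps])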
From the Mearns study:
CRUTEM4 vs GHCN V2 (Mearns)
http://www.euanmearns.com/wp-content/uploads/2015/04/crutem4-EM.png
BEST vs GHCN V2 (Mearns)
http://www.euanmearns.com/wp-content/uploads/2015/04/crutem4-EM.png
GISS vs GHCN V2 (Mearns)
http://www.euanmearns.com/wp-content/uploads/2015/04/GISS-EM.png
UAH vs GHCN V2 (Mearns)
http://www.euanmearns.com/wp-content/uploads/2015/04/uah-em.png
UAH vs GHCN V2 (Andrews)
http://oi61.tinypic.com/287g483.jpg
Average temperature anomalies, 15 stations from 1880, New Zealand, GHCN V2
http://www.euanmearns.com/wp-content/uploads/2015/04/NZdTtime.png
# # #
UAH vs GHCN V2 (Andrews) is spot-on: +0.2 C divided by 2.9 decades = +0.069 C/decade.
CRU is distorted
GISS is grossly distorted
BEST is worst (junk).
The GMST Illusion
‘February’s global temperature spike is a wake-up call’
by Steve Sherwood, of UNSW Australia and Stefan Rahmstorf, of the Potsdam Institute for Climate Impact Research, currently working in Australia,
“What is going on? Are we facing a climate emergency?”
“Fortunately, this is temporary: the El Niño is beginning to subside.”
http://hot-topic.co.nz/februarys-global-temperature-spike-is-a-wake-up-call/#more-14425
# # #
There is no reason for alarm anywhere in the Southern Hemisphere, no temperature spike was registered except briefly in SST.
And an El Nino is a natural COOLING process i.e. the Northern Hemisphere spike is transitory.
COLUMN: How much clarity do we have on transition to La Niña? – Braun
CHICAGO | By Karen Braun
Markets | Fri Mar 18, 2016
The potential for the massive El Niño to transition into La Niña later in the year is one of the hottest topics in commodities markets right now.
These fluctuations of sea surface temperatures in the tropical Pacific Ocean can have drastically different impacts on global weather depending on which phase is present – El Niño, the warm phase, or La Niña, the cool phase. They are one of the few clues to seasonal weather patterns several months or even years in advance.
The short question-and-answer session would look like this: Are we headed for La Niña toward the end of 2016? Looks that way. Will it be a big one? Not sure.
The answers may help dictate whether drought is likely in South America, winter will be cold in the United States or abundant rains will return to Southeast Asia, among other things.
Although strength is yet to be determined, the progression of certain atmospheric and oceanic variables will provide clues on the possible entry to La Niña. Particular insight will be offered by the timing of these events, as it might be the difference between strong La Niña and nothing at all.
It seems unlikely that we will stay in El Niño, though. Of the 14 El Niño events since 1950, excluding this year, only one of them remained El Niño into the following year, while three others lingered in neutral to weak El Niño territory. So the odds favor La Niña, but in meteorology, odds alone are not enough.
A La Niña environment has already begun to develop. Cooler waters are building beneath the surface in the Pacific Ocean and El Niño-supporting trade winds have lessened. But sea surface temperatures, or SSTs, in the defining region of the Pacific remain very warm, so we are still amid a strong El Niño event.
It is helpful to look for historical instances in which El Niño turned into La Niña through the course of a year. This has occurred only a handful of times since 1982, but there are enough similarities among these analogs that we can use them to inform this year’s likely outcome.
Selected analog years suggest that huge dropoffs of SST anomalies into negative, La Niña-defining territory are likely to take place between April and July. These analogs also suggest that when the SST anomalies cross into negative territory later than June, a weaker La Niña event is likely to follow (tmsnrt.rs/1UkgzEC).
Although the journey to La Niña has already begun, there are still some variables that need to fall into place in order to lock in this forecast solidly for the end of 2016.
CHECKED BOXES
El Niño decay is unlikely to proceed without the shutdown of its two key mechanisms: strongly reversed trade winds in the western Pacific Ocean and net warmth below the ocean’s surface. Check, and check.
One of the first anti-El Niño signals was the abrupt strengthening of western Pacific trade winds back to near-normal levels last November. It is probably no mistake then that El Niño reached maximum strength that month (tmsnrt.rs/1UkhtB0).
But this has not weakened El Niño as much as it would seem. Localized but strong westerly bursts of wind over the central and eastern Pacific – the Niño region – have propped up SSTs in recent weeks, though the wind bursts have been less frequent since early March.
Enter the ocean temperature anomalies. Water temperatures just below the surface across the entire Pacific Ocean have turned net cool, and this massive, cold blob is now lurking below the surface waiting for its chance to turn up. The colder the anomaly becomes, the bigger the potential for La Niña becomes (tmsnrt.rs/1Lt5VZC) (tmsnrt.rs/1UkhRzs).
The cold blob can surface in the Niño region with the help of the trade winds. The winds must continue to strengthen, and the pace at which they do so will determine just how soon we might enter La Niña.
If the cold blob can keep losing heat at the same pace as recently, other similar years seem to suggest that Niño 3.4 SST anomalies are likely to cross into negative territory one to two months after western trade winds begin their big upward push. This was demonstrated well in 1998 and 2010, both of which resulted in very strong La Niña events.
The trade winds have been somewhat mixed in strength this month, though, so we may have to wait until April at the earliest to see a large strengthening in the winds. That means that we could see SST anomalies turn negative as early as June, but there are other factors that could hold this back.
UNCHECKED BOXES
Although the cold blob is growing and the trade winds are strengthening, pressure tendencies – the Southern Oscillation portion of ENSO – still highly favor El Niño.
The very warm ocean waters over the past year have led to the persistence of lower air pressure near Tahiti relative to Darwin, Australia. This ultimately helps reverse the trade winds and sustain El Niño, and is reflected by a highly negative Southern Oscillation Index, or SOI.
But the juice could be running out. Over the past few weeks, the cold blob has surfaced in areas of the Pacific Ocean outside the immediate influence of Niño-region trade winds. The latest SST data shows an expanding cold pool that, if it should close in on Tahiti, may soon and quickly reverse the trend in SOI to favor La Niña (tmsnrt.rs/1Lt66E9).
SOI tends to fluctuate a lot, even in the midst of a strong El Niño or La Niña event, but there is a clear separation between the stronger La Niña years and the weaker ones. In the stronger years (1988, 1998, 2010), considerably higher SOI values are already in place by July, while the weaker years tend to bounce around for a bit longer (1983, 1995, 2005, 2007) (tmsnrt.rs/1Lt627j).
Another ENSO-supporting variable is the outgoing longwave radiation, or OLR, which is a proxy for thunderstorm activity in the central equatorial Pacific Ocean. Because warmer waters favor atmospheric convection, El Niño is often associated with negative OLR anomalies. This implies an increased presence of towering storm clouds that block radiation from escaping back into space.
Last month’s OLR anomalies were record negative for the month of February, but like SOI, we would expect these anomalies to start turning positive at some point in the next couple of months in order to assist with the wind-down of El Niño (tmsnrt.rs/1UkhtAY).
If we do not see positively persisting SOI and OLR anomalies by July or August, then the potential impending La Niña may be weaker and/or delayed, all other factors aside.
But history would suggest that a late-forming La Niña – after about August – does not usually go on to become a strong La Niña, even a few months down the road. So if we do not see convincingly negative SSTs by September then we may be headed for either weak La Niña, or possibly “La Nada.”
http://www.reuters.com/article/weather-el-nino-braun-idINKCN0WK0KM
# # #
Futures traders in the commodities markets seem to be rather more advanced than the climate science alarm community, to which they don’t appear to be listening.
The GMST Illusion
‘Current record-shattering temperatures are shocking even to climate scientists’
Posted on 21 March 2016 by Dana Nuccitelli
“Stunning,” “wow,” “shocker,” “bombshell,” “astronomical,” “insane,” “unprecedented” – these are some of the words climate scientists have used to describe the record-shattering global surface temperatures in February 2016.
[Graph] NASA GISS global monthly (red) and 12-month average (blue) surface temperatures as compared to pre-industrial temperatures. Illustration: Dana Nuccitelli
It’s difficult to see any ‘pause’ or slowdown in the global warming over the past 50 years.
To put the current temperatures into context, prior to last October, monthly global surface temperatures had not been more than 0.96°C hotter than the 1951–1980 average, according to Nasa. The past 5 months have been 1.06°C, 1.03°C, 1.10°C, 1.14°C, and 1.35°C hotter than that average, absolutely destroying previous records. Estimates from Noaa are in broad agreement with those from Nasa.
Right now, the Earth’s average surface temperature is hotter than it’s been in thousands of years; potentially even longer.
Continues>>>>>>
http://www.skepticalscience.com/current-record-shattering-temps-shocking-to-climate-scientists.html
# # #
“Stupid,” “whoppers,” “suckers,” “baloney,” “astounding,” “insane,” “uneducated” – these are some of the words BS-alert readers have used to describe the record-shattering global media temperature headline silliness since February 2016.
‘Pause in global temperatures ended but carbon dioxide not the cause’
Written by Jennifer Marohasy, Online Opinion on 21 March 2016
[…] So, while the global satellite temperature data indicates that February 2016 is the hottest month on record, this pertains to a record that only goes back to 1979. If we consider the much longer surface temperature record for many individual locations across Australia and other parts of the world, February 2016 is not that hot.
But this is an exceedingly contentious claim, rejected by a “consensus” of climate scientists who rely exclusively on homogenized temperature series. That is, the early temperature records are almost all adjusted down through the homogenization process, so the present appears warmer relative to the past. It is not contested that the official surface temperature records are adjusted, or that the early records are generally cooled. This is justified mostly on the basis of “world’s best practice” and the expectation that temperature series should show global warming from at least 1910. I’ve explained the inconsistencies in the adjustments made in the homogenization of the original observed maximum temperatures at Darwin in a technical paper originally accepted for publication in the International Journal of Climatology (see postscript for more information).
Back in 1876, the weather station at Darwin was the responsibility of Charles Todd, Australia’s Postmaster General and Superintendent of Telegraphs. His priority until 1872 had been the construction of an overland telegraphic line from Adelaide to Darwin along which he established 14 regional weather stations. Todd’s first passion was meteorology and astronomy, both of which he used in his weather forecasting.
Air pressure measurements were important to the weather forecasts being issued by Todd, with these measurements standardized based on local temperature readings. It was thus important that local temperatures were accurately recorded. While these records stood for over 100 years, beginning in 1996 the Australian Bureau of Meteorology started “adjusting” all the old temperature records used in the calculation of official temperature trends, including the maximum temperature series for Darwin.
What the homogenization process tends to do to the temperature record is not only create a global warming trend where none previously existed, but also remove the natural cooling and warming cycles so evident in the raw observational data. For example, in the unadjusted maximum temperatures as recorded at Darwin, the hottest year is 1907. Temperatures then appeared to cool to 1942, when there is a spike in temperatures. Note that 1941/1942, like 1997/98 and 2015/2016, were El Nino years. These were also years of minimum lunar declination.
Unlike modern meteorologists, Todd understood that climate change on Earth is driven by extraterrestrial phenomena. But he would likely have cautioned against single-cause explanations, recognizing that there are multiple and overlapping periodicities evident in the history of the Earth’s climate. There are natural cycles that span tens of thousands of years, affected by changes in the Earth’s tilt, and much shorter cycles affected by changes in solar activity. Early 20th century astronomers and weather forecasters, particularly Inigo Owen Jones, were interested in the planets. They noted decades in advance that 1940/41 would be a year of Jupiter, Saturn and Uranus conjunction.
Todd would have outlawed the practice of homogenization. Scientists of that era considered the integrity of observational records sacrosanct.
At an online thread I recently read the following comment about homogenization:
Don’t you love the word homogenise? When I was working in the dairy industry we used to have a homogeniser. This was a device for forcing the fat back into the milk. What it did was use a very high pressure to compress and punish the fat until it became part of the milk. No fat was allowed to remain on the top of the milk it all had to be to same consistency… Force the data under pressure to conform to what is required. Torture the data if necessary until it complies…
Clearly the Bureau’s remodeling of historical temperature data is unnatural and unscientific. In erasing the natural climate cycles and generating a global warming trend, the capacity of modern climate scientists to forecast spikes in global warming is greatly diminished, as is their capacity to forecast droughts and floods.
Because of the homogenization of the surface temperature record in the compilation of national and global climate statistics, those skeptical of anthropogenic global warming have long preferred the UAH satellite record, even though this record only begins in 1979.
The UAH global temperature record for the lower troposphere, which once showed no trend for 18 years, now shows a surge in warming. This warming, however, is neither catastrophic nor outside the bounds of natural variability. And it certainly hasn’t been caused by carbon dioxide.
http://www.climatechangedispatch.com/pause-in-global-temperatures-ended-but-carbon-dioxide-not-the-cause.html
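The mechanical effect Marohasy describes, cooling the early record to steepen the fitted trend, takes a few lines to demonstrate. A toy series with invented numbers:

    import numpy as np

    rng = np.random.default_rng(2)
    years = np.arange(1900, 2016)
    # cyclical series with NO underlying trend
    raw = 0.15 * np.sin(2 * np.pi * (years - 1900) / 60) \
          + 0.05 * rng.standard_normal(years.size)
    adj = raw.copy()
    adj[years < 1940] -= 0.4          # "homogenize": cool the early record

    for name, y in (("raw", raw), ("adjusted", adj)):
        print(name, "%+.2f C/century" % (np.polyfit(years, y, 1)[0] * 100))
    # the adjusted series acquires a warming trend the raw data never had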
The GMST Illusion
The “Greenhouse Effect” Hypothesis’ Sleight of Hand
Written by Carl Brehmer. Published on March 19, 2016
ABSTRACT: In any meaningful thermodynamic discussion it is necessary at the outset to clearly define the parameters of the “system” under study. One of those parameters is whether or not the system is “closed”.
The typical explanation of the “greenhouse effect” hypothesis pulls a sleight of hand in this regard, in that it starts out describing the thermodynamics of one “system” with an emphasis on the first law of TD, but then switches twice midstream to different “systems” altogether and asserts that the thermodynamics of the second and third systems mirror the thermodynamics of the first system.
If you are already confused you are not alone. The “greenhouse effect” hypothesis is itself very confused. Let’s explore.
First system
The first system is the Earth/atmosphere ensemble, whose “boundary” is space and whose two thermal energy sources are 1) incoming sunlight and 2) internal nuclear decay.
This is, indeed, a “closed” system in that for all practical purposes there is no movement of matter into or out of the system that would carry thermal energy with it into or out of the system. (For the sake of this discussion we will neglect the energy added to the system by meteors and lost from the system by the outer atmosphere as it continually loses molecules blown out into space by the solar wind.)
In this system the thermal energy that crosses the “boundary” between the system and its surroundings is 100% electromagnetic. There is absorbed sunlight (insolation – albedo) in and long-wave IR radiation out.
According to the first law of thermodynamics for this entire Earth/atmosphere ensemble system to be in thermal equilibrium the total energy contained within the sunlight absorbed must equal the total energy contained within the outgoing long-wave radiation.
The “greenhouse effect” hypothesis asserts that an increase in the concentration of “greenhouse gases” inhibits the ability of the entire Earth/atmosphere ensemble “system” to emit long-wave IR radiation out into space. In the language of radiation thermodynamics, “greenhouse gases”: make the entire Earth/atmosphere ensemble “system” less “emissive”.
This, in turn, forces the entire Earth/atmosphere ensemble system to retain enough thermal energy to raise its collective temperature high enough to again emit the requisite amount of long-wave IR radiation out into space to equal the absorbed incoming sunlight. (I am not saying that I agree that “greenhouse gases” make the Earth/atmosphere ensemble less emissive, rather I am simply attempting to state the claim of the “greenhouse effect” hypothesis.)
Note: This over-arching, big-picture definition of the “greenhouse effect” hypothesis is not concerned with the distribution of thermal energy within the entire Earth/atmosphere ensemble system, only that the total amount of outgoing long-wave radiation equals the total amount of absorbed sunlight per the First Law of Thermodynamics. Now comes the sleight of hand.
Second system
The “greenhouse effect” hypothesis then switches mid-stream to a completely different thermodynamic system—a one-meter thick layer of air about 1.5 meters off of the ground. This is where weather stations are sited, whose temperatures are averaged along with sea surface temperatures to create the “global mean temperature”.
This “global mean temperature” ignores the temperature of the entire solid Earth, the temperature of 99.98% of the oceans (the amount of ocean not contained within the thin layer of water whose temperature is being called the sea surface temperature) and the temperature of 99.9998% of the atmosphere (the amount of atmosphere not contained within the one meter thick layer of air whose temperature is being monitored.)
Thus, the “greenhouse effect” hypothesis assumes that the entire Earth/atmosphere ensemble system is retaining excess thermal energy based on the temperature of the hottest 0.02% of the ocean combined with the hottest 0.0002% of the atmosphere.
Were they to base their calculations on the average temperature of the entire troposphere, which contains 85% of the atmosphere’s mass, no retention of thermal energy would be suspected since the average temperature of the entire troposphere is well below zero °C.
What are the characteristics of this second thermodynamic system? Let’s just look at the land based portion of this “second system”. Its “boundaries” are imaginary dividing lines below and above a one-meter thick layer of air about 1.5 meters off of the ground.
1) Convection: This second system is completely “open” in that air is continually moving into and out of the system carrying thermal energy into and thermal energy out of the system.
2) Conduction: Because of the high R-factor of air very little thermal energy is moving into and out of the system via conduction (quite frankly the air within that limited layer does not stay in it long enough for significant conduction to occur.)
3) Radiation: Only a small amount of thermal energy enters this second system via IR radiation, since most of the “net radiation heat loss” from the ground upward via up-going long-wave radiation passes straight through this system because of its “transmissivity”.
4) Latent heat: There isn’t any liquid water in this system (except when there is fog) so little or no thermal energy is lost due to evaporation, but this system does gain thermal energy at night when some of the water vapor present in this layer condenses into dew.
Let me elaborate on #3, the Radiation portion of this second system’s thermodynamics. In a study that I did using surface radiation readings at a SURFRAD site in Desert Rock, Nevada in the summer of 2012, on cloudless days the up-going “net radiation heat loss rate” from the ground was 119 W/m2.
Up-going long-wave radiation – down-going long-wave radiation:
418 W/m2 – 299 W/m2 = 119 W/m2
Since the calculated “transmissivity” of the air in this very arid climate was 0.26, 109 W/m2 passed straight through the system in question (the one-meter thick layer of air about 1.5 meters off the ground) without thermally interacting with the system.
Total up-going IR radiation x transmissivity
418 W/m2 x 0.26 = 109 W/m2
Thus, the amount of “heat” that was being transferred from the ground into this second system on the hottest, cloud free days within this desert climate was only 10 W/m2.
Up-going radiation heat loss rate – amount of radiation exiting through the “atmospheric window”
119 W/m2 – 109 W/m2 = 10 W/m2
While 109 W/m2 of up-going IR radiation was passing straight through this ground-level layer of air, leaving only 10 W/m2 of “heat” to be absorbed, this same ground-level air was emitting 299 W/m2. Since the air above this “second system” also had a transmissivity of at least 0.26, we can calculate the up-going “radiation heat loss rate” from this “second system”: it was 77.4 W/m2.
Total up-going IR radiation from the “second system” x transmissivity
299 W/m2 x 0.26 = 77.4 W/m2
As you can see from these simple calculations, drawn from standard radiation thermodynamics using surface radiation readings gathered by NOAA, this second system (whose temperatures are being averaged to create the “global mean temperature”) loses over 7 times as much “heat” via IR radiation upwards as it gains via IR radiation from the ground! 10 W/m2 of “heat” is transferred via IR radiation into this “second system” through the lower boundary of the system, while 77.4 W/m2 of “heat” is simultaneously being lost via IR radiation through the upper boundary of the system.
Contrary to this physical reality, the “greenhouse effect” hypothesis asserts that IR radiation is somehow causing “heat to be trapped” within the system under study, thus forcing its temperature to increase.
Out of curiosity I applied these same mathematical formulae to the very humid air in Mississippi during the same time period. The results are as follows:
1) Transmissivity of the air = 0.12
2) Net radiation heat loss rate from the ground = 53W/m2
3) Amount of “heat” transferred from the ground to the air via IR radiation = 4.6W/m2
4) Amount of “heat” transferred upwards from the “second system” via IR radiation = 42W/m2
As you can see, the increased humidity in Mississippi increased the ratio of radiative heat gain to radiative heat loss in this second system from ~1:7.7 in Nevada to ~1:9 in Mississippi. This is not inconsistent with radiation thermodynamics, since increasing the emissivity of matter enhances its ability to cool via IR radiation and, indeed, water vapor increases the emissivity of air.
Before we move on do not think that these findings conflict with the Kiehl and Trenberth’s generally accepted Earth Energy Budget.
[see K&T Earth Energy Budget diagram]
Applying the same mathematical formulae as above to the KT Earth Energy Budget we see that globally the ongoing average amount of “heat” (net radiation heat loss rate) that is being transferred from the ground to the atmosphere via IR radiation is only 26 W/m2, while simultaneously the ongoing average amount of “heat” that is being transferred out into space by the atmosphere via IR radiation is 195 W/m2.
This is a 1:7.5 ratio of atmospheric “heat” gain via IR radiation from the ground to atmospheric “heat” loss via IR radiation out into space. In other words, for every 1 W/m2 of heat gained via IR radiation from the ground, the atmosphere disgorges 7.5 W/m2 via IR radiation out into space. From where does this extra energy come? From “thermals”, from “latent heat transfer” and from sunlight absorbed directly by the atmosphere on its way in.
The idea, therefore, that “greenhouse gases” are somehow causing the atmosphere to “trap heat” simply doesn’t jibe with the actual thermodynamics of the atmosphere. Stated directly, the “greenhouse effect” hypothesis is false. Thus the believers in the “greenhouse effect” hypothesis had to create a third imaginary system.
Third system
In the third system the entire atmosphere is replaced by a single thin pane of magic material that is 100% transparent to down-going sunlight while simultaneously being 100% opaque to up-going longwave radiation. In other words at some wavelength the pane of material magically changes from being a “white” body to being a “black” body.
Beyond that, this pane of magic material is, unlike the atmosphere, separated from the ground by a layer of vacuum thus making the thermodynamic relationship between the ground and the atmosphere 100% electromagnetic. Gone are convection, i.e., the movement of thermal energy into and out of the system via mass transfer, conduction and latent heat transfer.
The majority of the mathematical formulas being taught within institutions of “higher learning” that presume to explain and quantify the “greenhouse effect” hypothesis apply to this third imaginary (non-existent) system, and thus have no connection whatever to the real-world thermodynamics of the atmosphere, which is what science is supposed to study.
Continues>>>>>>>>
http://principia-scientific.org/greenhouse-effect-hypothesis-slight-hand/
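Brehmer’s Desert Rock arithmetic, and the K&T ratio, can both be checked in a few lines using only the figures quoted in the article (the function name is mine):

    def layer_budget(up_lw, down_lw, tau):
        # Brehmer's chain of arithmetic (all W/m2); tau = transmissivity.
        net_loss_ground = up_lw - down_lw     # ground's net radiative loss
        window = up_lw * tau                  # passes straight through the layer
        heat_in = net_loss_ground - window    # absorbed by the layer
        heat_out = down_lw * tau              # the layer's own upward loss
        return heat_in, heat_out

    heat_in, heat_out = layer_budget(418, 299, 0.26)   # Desert Rock, NV
    print("in %.1f, out %.1f, ratio 1:%.1f"
          % (heat_in, heat_out, heat_out / heat_in))
    # ~10 in, ~78 out: close to the quoted ~10 and ~77.4 after rounding

    # K&T global budget: 26 W/m2 gained from the ground via IR,
    # 195 W/m2 radiated to space by the atmosphere
    print("K&T ratio 1:%.1f" % (195 / 26))             # 1:7.5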
Brehmer from ‘Second system’ in respect to Desert Rock Nevada:
And in respect to the K&T budget and diagram:
Dang, missed an end quote tag just prior to “Ok, let’s think of this in view…….”
That’s my comment from then on, not Brehmer’s.
Inattention, and a browser without edit for some reason (Opera). I was watching ‘Last Call at the Oasis’ on Maori Television and compiling a comment simultaneously. ‘Last Call’ is the 2011 water-crisis alarm doco featuring Erin Brockovich and …………Peter Gleick.
Some good stuff on water pollution and bottled water but the “climate change” alarm looks silly since rain in Australia and California.
The GMST Illusion at WUWT
“It’s on land. It’s in the oceans. It’s in the upper atmosphere. It’s in the lower atmosphere.”
But it’s not in GISTEMP Southern Latitudes to end of 2015.
[see graph]
1980 SH Extratropics anomaly was fractionally warmer than 2015:
1980 46
2015 41
Needless to say, I find the anomaly neither “astronomical,” “staggering” nor “strange” from a SH perspective. The heat-in-transit from sea to space has been concentrated in the NH [rather than evenly distributed north – south].
And “heat trapping greenhouse gasses” will “trap” NONE of this passing heat. That’s because El Nino heat is special heat that “heat trapping greenhouse gasses” can’t “trap” /Sarc.
This is an abrupt localized oceanic surface COOLING process, not surface warming like some “shocked” climate scientists (think Rahmstorf et al) see when they look at GMST. They’ve been suckered by the GMST illusion.
http://wattsupwiththat.com/2016/03/22/more-alarmist-nonsense-with-the-release-of-the-redundant-noaa-global-temperature-data-for-february-2016/comment-page-1/#comment-2172327
The GMST Illusion, WMO version (also upthread).
‘UN Claims There’s An ‘Alarming Rate’ Of Global Warming By Ignoring The 15-Year ‘Hiatus’ And El Nino’
Written by Michael Bastasch, Daily Caller on 22 March 2016.
http://www.climatechangedispatch.com/un-claims-there-s-an-alarming-rate-of-global-warming-by-ignoring-the-15-year-hiatus-and-el-nino.html
And,
# # #
Amazing how these alarmists misconstrue localized heat-in-transit and surface COOLING – for “global” “warming”.
For the record:
“Climatologists such as Stefan Rahmstorf, from Germany’s Potsdam Institute of Climate Impact Research, say the big El Nino event in the Pacific is topping up the background warming from climate change by about 0.2 degrees.”
http://www.smh.com.au/environment/climate-change/spike-in-global-temperature-fuels-climate-change-fears-20160317-gnl7do.html
# # #
Impressive that Rahmstorf can narrow it down that much, given GISTEMP NH & SH monthly to February 2016:
https://bobtisdale.files.wordpress.com/2016/03/figure-8.png
For the record (from the same SMH article):
Stephen Sherwood, an atmospheric scientist at UNSW-based ARC Centre of Excellence for Climate System Science, said the recent surge in warming indicates the slowdown in surface temperature increases of the past 10-15 years is over.
“We knew that was never going to last,” Professor Sherwood said, referring to what had been dubbed a “warming hiatus”. “We’re back on track to where the models were predicting.”
http://www.smh.com.au/environment/climate-change/spike-in-global-temperature-fuels-climate-change-fears-20160317-gnl7do.html
# # #
Heh. What will Sherwood say when the spike drops right back out of the model range to neutral after the El Nino?
Not to mention the very real possibility of a La Nina (Gaia forbid it’s a strong one).
But then, Sherwood is at the “ARC Centre of Excellence for Climate System Science” so who am I to doubt his excellence?
Joanne Nova on Rahmstorf, Sherwood, UNSW, and Hannam/SMH, aided by Bob Tisdale (abbrev.):
‘It’s “special” science where one Hot Month is the signal, and years of The Pause is just noise.’
[See graph] Mystical Sign Watch in Global Temperatures
https://s3.amazonaws.com/jo.nova/graph/temp/global/2016/global-temperatures-mystical-sign-2016.gif
“Spike in global temperature fuels climate change fears” – SMH
Prof Rahmstorf seems a bit confused about what’s “noise” and what’s “signal”
So sayeth the man [Sherwood] who “predicted” one hot month in two decades and finally got lucky? Now, despite the models being wrong for 18 years, in a flicker they are “right”. Three years from now, if the pause still exists, will he give up his job and pay back the salary?
http://joannenova.com.au/2016/03/its-special-science-where-one-hot-month-is-the-signal-and-years-of-the-pause-is-just-noise/
# # #
I was waiting for this.
The “spike” is only actually seen in the NH temperature series:
GISTEMP LOTI NH & SH
https://bobtisdale.files.wordpress.com/2016/03/figure-8.png
How exactly is this a “surge in warming” (Sherwood)?
At Hot Topic, no-one has yet dared to respond to this comment:
“It’s just nonsensical, denialist drivel” apparently. “Utter pseudo scientific drivel”, “[in]comprehensible”, and “[un]intelligible” too.
How can these people possibly critique IPCC Assessment Reports, peer-reviewed scientific papers, or the man-made climate change conjecture in respect of the IPCC’s primary criteria? Let alone address simple questions like those above. They don’t even know the source of this El Nino heat in the troposphere (Renowden and Taylor), or agree on where it shows up in the metrics (Kiwiiano and SimonP), i.e. they can’t deduce from graphs either.
Any more than about 2 (inconvenient) sentences and their eyes glaze over (excepting Andy here).
Hannam/SMH graph:
Source NASA: Departure from the norm – 2016 so far
http://www.smh.com.au/cqstatic/gnraz9/web_temp_anomalies_729x410.jpg
Except, let’s look at the NASA NH-SH data compared to the GL mean (GISTEMP anomalies below are in hundredths of a degree C):
Northern Hemisphere
2015 87 Jul
2015 98 Aug
2015 112 Sept
2015 122 Oct
2015 135 Nov
2015 144 Dec
2016 153 Jan
2016 190 Feb
http://data.giss.nasa.gov/gistemp/tabledata_v3/NH.Ts+dSST.txt
Southern Hemisphere
2015 59 Jul
2015 57 Aug
2015 52 Sept
2015 89 Oct
2015 70 Nov
2015 76 Dec
2016 76 Jan
2016 81 Feb
http://data.giss.nasa.gov/gistemp/tabledata_v3/SH.Ts+dSST.txt
Global Mean
2016 135 February
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
NH February was +55 above the global mean, +37 above NH January, and by far the highest anomaly in the NH series.
SH February was -54 below the global mean, and didn’t even eclipse SH October 2015.
It’s a tale of two hemispheres. GMST is a totally meaningless illusion.
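The hemispheric split takes three subtractions to verify (values as per the GISTEMP tables above, in hundredths of a degree C):

    nh_feb, nh_jan = 190, 153      # GISTEMP NH anomalies, 0.01 C
    sh_feb, gl_feb = 81, 135       # GISTEMP SH and global mean

    print(nh_feb - gl_feb)         # +55
    print(nh_feb - nh_jan)         # +37
    print(sh_feb - gl_feb)         # -54
    print((nh_feb + sh_feb) / 2)   # 135.5: the "global" number is just the
                                   # midpoint of two very different stories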
Then there’s the Zonal annual means data:
SH Extratropics Zonal Annual Mean 24S – 90S
1980 46
2015 41
http://data.giss.nasa.gov/gistemp/tabledata_v3/ZonAnn.Ts+dSST.txt
Think of that plotted as per SMH graph.