Note these phrases in the article: “According to NASA’s methods,” and “we’d be remiss if we didn’t mention the fact that the IPCC messed up when it picked a target year for when melting of Himalayan glaciers reaches critical levels.”
The first comment is there because their methods are under review and being found wanting. Notice my commentary on the other open thread. Additionally, NASA and NOAA are being drawn into the Climategate issue with their temperature calculation methodology. See the graphs below, which compare GHCN’s current temps versus the raw data. These graphs are from here: http://noconsensus.wordpress.com/2010/01/19/long-record-ghcn-analysis/
The second comment would have been a huge issue before Climategate, but finding that one of the major claims in the latest IPCC climate report is undeniably false, and worse, derived from a non-science source (not merely a non-peer-reviewed source, as reported in this article), has morphed into a “So what’s new?” issue. There are additional problems with that IPCC report that are essentially only being discussed in the blogs and not in the media.
The whole climate change landscape has changed since Climategate. Nearly every catastrophic claim is now being questioned by skeptical scientists, when it is not being laughed at.
As to the warmest day, week, month, year, decade, century, etc.: ho hum. Thank heavens the temps continue to rise as we recover from the Little Ice Age (LIA). Rising temps should continue till the next LIA, or worse, occurs.
The post hoc ergo propter hoc solution is obvious. Start World War III. Look at all the cold the World Wars have created or saved.
This is for those who still believe, and Dale and 2slugs. Comment was lifted from one of Watts’s articles here: http://wattsupwiththat.com/2010/01/23/sanity-check-2008-2009-were-the-coolest-years-since-1998-in-the-usa/
“… So add it up. Nearly everything that we think may have a significant influence on global temperature was in full positive mode during the 20th Century: the sun, ocean cycles and greenhouse gases. So what lies ahead for the 21st Century? We will have 2 cool phases of the PDO and only one warm one. The sun, apparently, is settling down and may be headed to a minimum. CO2 will continue to increase, but due to its logarithmic influence on climate, the increase will have less and less impact. The net result will be global cooling. Any other conclusion simply ignores the facts!
…. (paragraph removed as it pertained to the original article)
We lingered near the top of the curve for several years and are now going down. In the past, El Ninos were step jumps in a gradually increasing slope of global temperatures. The current El Nino is manifesting as a temporary slowing of the cooling trend. Big difference.”
I included this comment as a response to 2slugs in the other thread. “In the past, El Ninos were step jumps in a gradually increasing slope of global temperatures.” So Bob Tisdale is not the only one who believes in the El Nino step impacts.
CoRev,
All gumint methods are suspect. It ain’t only entitlements spending which is worthless; shut NASA down as well as DoD.
CoRev,
I am inspired by a guy who predicts the output of the bazillion H-bombs making up good ole Sol. Over the next 100 years, even!!! Does he recommend a type of Kool-Aid?
rdan,
As CoRev said, the NASA data you put up is under fire.
The scientists at NASA’s GISS are widely considered to be the world’s leading researchers into atmospheric and climate changes. And their Surface Temperature (GISTemp) analysis system is undoubtedly the premier source for global surface temperature anomaly reports. In creating its widely disseminated maps and charts, the program merges station readings collected from the Scientific Committee on Antarctic Research (SCAR) with GHCN and USHCN data from NOAA.
It then puts the merged data through a few “adjustments” of its own.
First, it further “homogenizes” stations, supposedly adjusting for UHI by (according to NASA) changing “the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations.” Of course, the reduced number of stations will have the same effect on GISS’s UHI correction as it did on NOAA’s discontinuity homogenization – the creation of artificial warming.
Furthermore, in his communications with me, Smith cited boatloads of problems and errors he found in the Fortran code written to accomplish this task, ranging from hot airport stations being mismarked as “rural” to the “correction” having the wrong sign (+/-) and therefore increasing when it meant to decrease, or vice versa. http://www.americanthinker.com/2010/01/climategate_cru_was_but_the_ti.html
There is a good chance this was all a con.
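To make the adjustment in that quote concrete, here is a minimal R sketch of the trend-matching idea it describes; the function name and inputs are hypothetical, and this is emphatically not the actual GISTemp code:

```r
# Sketch of the UHI adjustment described above (illustrative only, not GISS code).
# urban, rural: numeric anomaly vectors for the same set of years.
adjust_uhi <- function(urban, rural, years) {
  fit_urban <- lm(urban ~ years)      # long-term trend of the non-rural station
  fit_rural <- lm(rural ~ years)      # long-term trend of its rural neighbors
  variations <- residuals(fit_urban)  # short-term variations about the urban trend
  fitted(fit_rural) + variations      # rural trend + urban short-term variations
}
```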
CoRev,
Everyone agrees the Himalayan glacier thing was a big screw-up. That hardly proves they are wrong about the central issue. The Climategate stuff was about HadleyCRU data. The chart that rdan put up is GISS data. And I see that you’re back to posting amateurish charts from third parties. Are you going to stand by and defend those charts?
Take it up with Jeff. If they’re so amateurish, have him correct them. A simple dialog works very well.
Predicting the next 100 yrs? No, just showing the trend using historical data. Data, BTW, which runs longer than the global temp data and is somewhat less noisy, albeit not necessarily more accurate. Solar data has gone from the Mark 1 eyeball to today’s more sophisticated electronic/magnetic measurement tools.
Sammy, the wheels have come off, but the oxen haven’t been unhitched. Funding should begin to dry up soon, and the noise level, except for the laughter, should lessen. If the Copenhagen funding results are any indication, the funding issue has arrived. The attack on Pachauri’s conflicts of interest is another indication that support is diminishing. Which finally brings me to Big Al Gore. Anyone heard from him lately?
Are a bazillion H-bombs randomly exploding in the sun capable of exhibiting trends? A trend would suppose an underlying causation.
What I would like to know is: does your boy, whom you quote, recommend a better drink than my Stone Brewing Co. Arrogant Bastard ale?
Why don’t you and sammy exhibit some understanding and explain why the FORTRAN was used?
CoRev,
I give coberly credit, as he has been pretty silent since this all began to unravel.
It is the Warmists who should be the most angry about the series of Climategates: they came to their conclusions on the basis of purposely bogeyed-up data. So it is NOT THEIR FAULT. The cons took advantage of their instinctive fear of mankind destroying the world (that has been exhibited throughout history). The cons made a lot of money, but their followers have been made to be public dupes who erroneously cast a lot of insults at their more skeptical blogmates.
ILSM, FORTRAN was used because the code base is that old. It was accepted as the better scientific-formula programming language in the 70s. As a programming language it has since been replaced, but there are still pockets of old code where it is still used.
Sammy, there is a long history of societies buying into false beliefs. Eugenics comes first to mind. The problem is that it takes the better part of a generation for such a belief to completely fade, and as we have seen, the zealots hang on. Follow the money still applies.
Jay,
Is that an early endorsement of Sarah Palin?
Imagine how the particulates raised by the firebombings of Dresden and Tokyo and London helped reflect heat back into space and the suspended sulfate haze helped cool the stratosphere. It just might work!
Perhaps the belief that the greenhouse effect does not exist, that we can simply keep adding carbon, methane, and other warming gases to the atmosphere without paying a cost for it, and the decision to believe that scientists behave like teenage girls chasing a popular theory because we don’t like the answers they are getting to the questions they ask, is also an example of a society buying into “false beliefs.” Eugenics isn’t a false belief; when we do it to farm animals and pets it is called breeding. When humans do it to themselves we get Tay-Sachs, sickle cell, and a host of other unintentional genetic ailments.
What is better than FORTRAN for what it does?
Looks like we are due for an ABC correction. LOL!
ILSM, many in the climate community use R, a “language and environment for statistical computation and graphics” (1998). Since climatology is heavily derived from stats, you can see why.
DH, I don’t know of anyone seriously looking at the climate who thinks “the greenhouse effect does not exist.” So starting from a false premise takes us down a false path.
I’m not sure what you are saying here: “without paying a cost for it,” and in your list of warming gases you forgot H2O.
What is this supposed to be about? “and the decision to believe that scientists behave like teenage girls chasing a popular theory” I don’t know anyone who believes this either, but there are many who believe many scientists write their papers (and grant requests) around CC since there is more grant money following that line of science than many others.
CoRev,
The reason many in the “climate community” use “R” is because many in the climate community are eager amateurs and “R” is free and subroutines are ubiquitous on the web. “R” is very versatile and is kind of “wiki”-like in that a world of users make their programs and subroutines freely available. But power users do not use “R”. For that you’ve got to use something like MATLAB or SAS…but those are some serious dollars, and your average amateur sitting at his home PC doesn’t have the wherewithal to fork over those kinds of dollars. But places like GISS/NASA do.
I agree!
What is really neat is that when I did a stats course in ’79 I did my very small regressions on punch cards through a mainframe. In 2001 I took a research course on my own PC using a souped-up add-on to Excel and could play with much larger regressions. Maybe someday I will find a new add-on and do some regressions to see if it is faster now.
Slugs,
I’m a SAS user. The amateurs on this site ought to remember this when they challenge me on quantitative issues.
CoRev,
“ILSM, FORTRAN was used because the code base is that old. It was accepted as the better scientific-formula programming language in the 70s. As a programming language it has since been replaced, but there are still pockets of old code where it is still used.”
Fortran is still being used today for scientific computing when you are writing programs using raw code (rather than using R, SAS, or Matlab). There is a lot of free high quality code on the net written in Fortran.
Cantie, I don’t disagree, but it is seldom taught in the college labs, and ILSM’s thought was from when it dominated scientific programming. It is far from that today.
My own background includes supporting the Apollo missions. Yup, pretty old, but I am intimately familiar with the early days of computing. Y’know, back when use of machine language, assembler code and Fortran was common.
I am old enough to have met ADM Grace Hopper, and received one of her nanosecond handouts.
CoRev,
I think you might be surprised at how much of that old code is still around, and being used. Moreover, I can see how someone who was working on a global warming model who started kicking around on the internet looking for algorithms would have found Fortran code to solve the problems they were working on. Here are a few of the sites I found doing the same thing:
http://gams.nist.gov/serve.cgi
http://plato.asu.edu/guide.html
http://www.netlib.org/lapack/
“IPCC scientist admits Glaciergate was about influencing governments”
The scientist behind the bogus claim in a Nobel Prize-winning UN report that Himalayan glaciers will have melted by 2035 last night admitted it was included purely to put political pressure on world leaders.
http://www.americanthinker.com/blog/2010/01/ipcc_scientist_admits_glacierg.html
“China to rich nations: Hand out climate money now”
Brazil, China, India and South Africa called Sunday for developed countries to quickly begin handing over the $10 billion pledged in Copenhagen to poor countries to help them deal with the effects of climate change.
http://seattletimes.nwsource.com/html/businesstechnology/2010879173_apclimateindia.html
As Al Gore would say/shout: “They played on our fears!”
Now this may be disheartening and might also explain the 70s Ice Age craze. We are approaching the 1,500-year mark, ~1,480 yrs since the LIA. I’m not big on Wiki, but provide this as a talking point.
Bond events are North Atlantic climate fluctuations occurring every ≈1,470 ± 500 years throughout the Holocene. Eight such events have been identified, primarily from fluctuations in ice-rafted debris. Bond events may be the interglacial relatives of the glacial Dansgaard-Oeschger events, with a magnitude of perhaps 15-20% of the glacial-interglacial temperature change.
The theory of 1,500-year climate cycles in the Holocene was postulated by Gerard C. Bond of the Lamont-Doherty Earth Observatory at Columbia University, mainly based on petrologic tracers of drift ice in the North Atlantic.[1][2]
The existence of climatic changes, possibly on a quasi-1,500-year cycle, is well established for the last glacial period from ice cores. Less well established is the continuation of these cycles into the Holocene. Bond et al. (1997) argue for a cyclicity close to 1,470 ± 500 years in the North Atlantic region, and that their results imply a variation in Holocene climate in this region. In their view, many if not most of the Dansgaard-Oeschger events of the last ice age conform to a 1,500-year pattern, as do some climate events of later eras, like the Little Ice Age, the 8.2-kiloyear event, and the start of the Younger Dryas.
The North Atlantic ice-rafting events happen to correlate with most weak events of the Asian monsoon over the past 9,000 years,[3][4] as well as with most aridification events in the Middle East.[5] Also, there is widespread evidence that a ≈1,500-year climate oscillation caused changes in vegetation communities across all of North America.
CoRev,
Took your advice and took it up with Jeff. He’s clueless. The good news is that there was an effort to detrend and account for seasonality. The approach was rather ad hoc, but at least there was an effort. The bad news is that Jeff doesn’t understand that an AR(1) model is unacceptable when the AR(1) coefficient is very close to 1.00 (most of his were more than 0.96). So taking the lag adjusted data and using it to fit a “trend” line is meaningless. He’s just fitting a random walk. He admits that an AR(1) model probably isn’t the best (wow!, talk about understatement), but his excuse was that a lot of other people use it as well. On that I agree…you see it used a lot. But convenience is not an excuse for bad analysis.
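For readers following along at home, a minimal R sketch of the diagnostic being described, on simulated data (not 2slugs’s actual code): check the lag-1 coefficient, and if it is close to 1, difference the series before fitting any trend.

```r
set.seed(1)
x <- cumsum(rnorm(360))  # simulated random walk standing in for 30 years of monthly data

r1 <- acf(x, lag.max = 1, plot = FALSE)$acf[2]  # lag-1 autocorrelation
print(r1)                                       # close to 1: levels are near a unit root

dx <- diff(x)                                   # first-difference the series
print(acf(dx, lag.max = 1, plot = FALSE)$acf[2])  # now far from 1; trends on dx are safer
```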
sammy,
That’s a completely dishonest representation of what the guy said. But I’ve come to expect as much from the American Stinker. The AT article is a lie because it tries to imply that the scientist knowingly published false information in order to influence politicians. That would indeed be reprehensible, but that’s not what happened. There is NO evidence that anyone at the IPCC knew that the Himalayan glacier melt statement was wrong. Should they have known? Yes. Was it unprofessional? Yes. Should the guy responsible for the screw-up be fired? Yes. But was it deliberate? No. There is nothing wrong with using Himalayan glacier melt data in order to apply political pressure. If the data happened to have been true, then it would have been irresponsible NOT to have used the Himalayan melt as a political tool.
The American Thinker does a lot of dishonest blogging, so you really shouldn’t cite them.
http://news.yahoo.com/s/ap/20100124/ap_on_re_mi_ea/ml_israel_palestinians
Israel is determined to keep the US at war with Islam forever and it knows just how to do it. And Unkie Sam is too simpleminded to understand and too weak kneed to do anything about it. He lets shrewish wife Israel tell him what to do all the time. Sad and funny. Poor poor stupid Unkie Sam.
I do have a wonderful idea re warming skeptics. Ship them all off to some low lying atolls in the Pacific or in the Indian Ocean and make them stay there for the next 25 years. I would almost guarantee that would be the last we’d hear of them. LOL.
2slugs,
“That’s a completely dishonest representation of what the guy said.”
Hardly. Did you read the link?
“Dr Murari Lal also said he was well aware the statement, in the 2007 report by the Intergovernmental Panel on Climate Change (IPCC), did not rest on peer-reviewed scientific research.”
“In an interview with The Mail on Sunday, Dr Lal, the co-ordinating lead author of the report’s chapter on Asia, said: ‘It related to several countries in this region and their water sources. We thought that if we can highlight it, it will impact policy-makers and politicians and encourage them to take some concrete action. It had importance for the region, so we thought we should put it in.’”
http://www.dailymail.co.uk/news/article-1245636/Glacier-scientists-says-knew-data-verified.html#ixzz0dUx6pwXe
2slugs, note the quotation marks.
sammy,
Yes, I can read. Can you? The issue isn’t whether or not they thought the Himalayan glacier melt stuff would influence political support. Of course they did. And there’s nothing wrong with that. The ONLY issue is whether or not they knew that the statement was false. He said that he knew the claim was not supported by peer reviewed research. That’s a reason for removing him for incompetence. The claim never should have made it into the report. That is not an admission that the IPCC was being dishonest. The distinction here really isn’t that hard to grasp.
“That is not an admission that the IPCC was being dishonest.”
2slugs,
You are impossible. The Thinker and the Daily Mail NEVER SAID THE IPCC WAS BEING DISHONEST. That is your strawman, I guess.
Again, a quote (from the opening paragraph no less):
“In a stunning admission, the scientist responsible for publishing the part of the 2007 IPCC report on global warming in Asia says he knew the evidence for the disappearing Himalayan Glacier was suspect but allowed it into the report in order to put pressure on governments to take action.”
That’s all they are saying. Do I need to link to the definition of “suspect”?
Sammy, 2slugs is on a roll this weekend. Nothing negative re: AGW can be true because, unh, ehh, well, ’cause he thinks so. He’s done it again, y’know. Changed the subject and took you down an alternative path.
BTW, 2slugs, done that contact thing with Jeff and Bob? Done the actual work to show that you are more correct than Jeff? Thought not.
2slugs, dishonest blogging versus dishonest IPCC scientific reports? Another minor issue with AR4 is: “The problem is that the IPCC cited a study on severe weather event frequency that wasn’t complete yet. When it was complete in 2008, it came to an entirely different conclusion about linkage to global warming.” They did the same thing with the original Mann Hockey Stick report.
But you just keep on believing the “science” of AGW.
2slugs, thank you! So, it’s not the best: “He admits that an AR(1) model probably isn’t the best (wow, talk about understatement), but his excuse was that a lot of other people use it as well.” Most of those other people are THE CLIMATOLOGISTS.
2slugs, here’s the explanation for the use of AR(1): “The reason the AR(1) gets used is it closely approximates weather noise (the tail of its transfer function is 1/f).”
After rereading Jeff’s response, I fail to see the reason for the vehemence of your statement: “He admits that an AR(1) model probably isn’t the best (wow!, talk about understatement), but his excuse was that a lot of other people use it as well. On that I agree…you see it used a lot. But convenience is not an excuse for bad analysis.”
As he said: “This was code written by Ryan O where the residuals are used in the ACF function. The AR 1,0,0 correction is a not uncommonly used value for monthly temperature anomaly in climate. I don’t believe it’s perfect but I also don’t think it’s that bad.”
So through these eyes we get two technicians discussing the best approach for “teasing” out that ole signal. Oh, and then graphing. You have failed to make a solid case that your view/approach is superior to theirs, which is common practice among many climatologists.
I have said many times that climatologists are not good statisticians. For consistency’s sake I would not recommend changing common practices today, UNLESS we are doing a complete recalculation. Then all processes, and why they were selected for use, would be thoroughly documented.
The usual way one tests different techniques is to propose an alternative approach and then compare the efficiency of the two methods in extracting the signal from the noise. If 2slugs wants to suggest a better approach I’m all for using it, but the fact that he hasn’t come up with anything better speaks volumes.
Jeff ID has explained that the purpose of his exercise was a QC operation to replicate CRU and GISTemp using his own code. After replication, presumably the next step would be to test the effect of other assumptions. I’m afraid it was a bit of a LOL moment seeing 2slugs criticize a replication of other methods for following the methodology used in the other analyses, then come to this board gloating about his intellectual superiority.
I found this link from a comment at tAV. Again, 2slug, I did not perform the analysis, Nic did. I did write that when AR1 gets near 1 the assumptions for the DOF correction break down – yet you come here and claim I don’t understand that??
AR1 is not nearly as bad an estimate as you claim for monthly temp data, and the seasonal correction – which you started by saying we did wrong – is absolutely NOT ad hoc as you assert. It’s how you calculate anomaly in climatology.
You came by and accused us of not detrending before calculation and of not taking seasonality into account, and claimed that we should use a 0,1,0 model – which would be stupid. Then you come back here and claim victory.
I freely and openly admit Nic’s analysis of trend significance isn’t perfect; who cares. He wasn’t making any huge claims about significance anyway. He used filtered data for his analysis, which all of the regular Air Vent readers INCLUDING NIC understand is a no-no for trend significance analysis. THIS WOULD HAVE BEEN YOUR BEST CRITICISM IF YOU UNDERSTOOD WHAT YOU WERE TALKING ABOUT!
You used a shotgun criticism like a person with no understanding of what’s been done. Like standing in a room and shouting error!! to everything you see in order to try to find something wrong. Then when your points were answered, you ignore them and disappear to claim victory somewhere else.
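For anyone unfamiliar with the term Jeff keeps using, here is a hedged sketch of a standard monthly-anomaly calculation; the function is illustrative, not his code. Subtracting each calendar month’s long-term mean is what removes the seasonal cycle.

```r
# Minimal sketch of a monthly temperature anomaly (illustrative, not Jeff's code):
# subtract each calendar month's long-term mean to remove seasonality.
monthly_anomaly <- function(temps, months) {
  climatology <- tapply(temps, months, mean, na.rm = TRUE)  # per-month baseline
  temps - climatology[as.character(months)]                 # deviation from that baseline
}

# Example on a built-in monthly series:
anom <- monthly_anomaly(as.numeric(co2), cycle(co2))
```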
Where’s 2slugs when we need him to respond? I hope this clarifies that what 2slugs was doing was no more than random sniping from the sidelines, hoping to hit something and, I suspect, to impress those here.
I have also been in contact with Lucia Liljegren, who blogs at http://rankexploits.com/musings/ She has been analyzing the IPCC model outputs (using several statistical approaches) compared with observed readings.
Her advice was to ask 2slugs just what he meant: “Yes. But what does he mean by ‘unacceptable’? The traditional (1+rho)/(1-rho) correction underestimates the uncertainty intervals, but we can quantify how much. Nychka suggests a correction for small sample size, but it seems to overcorrect. …”
When he answers, bear this in mind: “For large r1, the 0.68/sqrt(N) fix-up overcorrects the original problem. That is: where the original uncertainty intervals were too small, they are now too big.
In fact, if you were to guide yourself by this 0.68/sqrt(N) correction, you would conclude that the classic AR(1) [(1+rho)/(1-rho)] correction is ‘worthless’ when r1 = 0.96 and you have 360 months of data or less. The reason you would consider it ‘worthless’ is that, using this approximate formula, you would conclude the uncertainty intervals should be *infinite*. However, this is not the case.”
As I said earlier, I think 2slugs was trying to add a level of precision when it was not necessary, and was actually trying to implement a statistical tool better formulated for random data. He forgot that temperature data, although noisy, is far from random and its limits are quite tightly bounded. So infinite uncertainty? Hardly!
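For reference, the traditional correction being debated can be sketched in a couple of lines of R; the function name is mine, and the formula is the common textbook approximation rather than Lucia’s exact code:

```r
# The traditional AR(1) adjustment under discussion: lag-1 correlation r1
# shrinks the effective sample size, inflating the trend's uncertainty.
effective_n <- function(n, r1) n * (1 - r1) / (1 + r1)

effective_n(360, 0.2)   # mild autocorrelation: ~240 effective observations
effective_n(360, 0.96)  # near-unit root: ~7 effective observations, intervals balloon
```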
Perhaps now we know why 2slugs has been so leery of visiting the source blogs. This is what Jeff Id added at tAV:
“#81, I just left a reply at the link you gave. Slug comes across as reasonable here then goes off breathlessly declaring victory on the other site like he’s won a boxing championship. So, let’s have some fun with slug. Errors he’s made since stopping by:
1. “A quick visual check of the raw data suggests that it is nonstationary, so right out of the box that means you cannot use an AR(1) model” – Actually the calculation uses detrended data. Since we are comparing long term to short term trends this is a reasonable method and can be used.
2. Jeff did the post. – The post was by Nic L.
3. “So the initial model probably should have been ARIMA(0,1,0) rather than ARIMA(1,0,0).” – Failed to recognize that anomaly has seasonality removed.
4. “The raw data was seasonal, which suggests that a seasonal ARIMA model would have been appropriate. Again, a misspecification error.” – Same as 3.
5. “The slope coefficient in the AR(1) model does not represent a trend. The slope coefficient in an AR(1) model tells you how much of the previous observation’s random error term (i.e., residual) is carried over into the next period’s observation.” – Failed to recognize that the data he was looking at was temperature anomaly data, not an AR lag term.
6. “Good to know that there was some attempt to remove seasonality, although the method is a bit ad hoc.” – Failed to recognize temperature anomaly is a climate standard. I’m sure climate science would be interested in hearing Slug’s improved method for taking seasonality into account.
7. “Finally, you are taking the results from the AR(1) model and then estimating a trend line.” – Fails again to recognize that the trend line is for the temperature data.
So 7 errors in two posts; then he goes over to the other blog and says:
8. “The approach was rather ad hoc” – again failing to recognize that the data he’s looking at is anomaly data, which is a standard in climate science.
9. “The bad news is that Jeff doesn’t understand that an AR(1) model is unacceptable when the AR(1) coefficient is very close to 1.00 (most of his were more than 0.96).” – Again, I told him the DOF estimate breaks down when we get near 1. How is that not understanding? And he’s failed to recognize yet again that I didn’t write this post.
10. “his excuse was that a lot of other people use it as well” – It’s not an excuse, it’s just common in climate science and happens to be part of the plot routine used by Nic.
Finally, if he had wanted to criticize this post, all he had to do was read. ‘The mean is slightly smoothed, hence the high Lag-1 serial correlation.’ — this was left right there in the post for everyone to read. When a statistician sees this and the high AR coefficient, he knows that the best you can do with these intervals is a rough estimate. Using filtered data is a no-no for significance analysis; again, that’s not the point of the post. Nic was very open about these facts. Perhaps he should have put in a better explanation for Slug, but guys like Carrick (who doesn’t mind a good argument) probably didn’t have any trouble figuring out this post and wouldn’t have made any of the ten errors above. I didn’t have any trouble loosely interpreting the confidence intervals. But then again, I didn’t drop by a blog, skim the post, and then start making one accusation of error after another with no respect for the facts or computer code laid out in front of me.”
I think the message is clear. If 2slugs wants to add value at these other blogs, then it looks like the doors are open, but coming here and sniping adds nothing […]
That may be how you do things in climatology, but it’s not how you do things in other fields. Believe it or not, it really does matter whether the residuals are taking you along a random walk or not. The data you used was levels based. Levels data in general and temperature data in particular tends to show a lot of serial correlation. The very first thing you do in any time series analysis is to see if the data needs to be differenced. The fact that the results of the AR(1) showed coefficients very close to 1.00 is very strong evidence for taking first differences. If you fail to do that, then there’s no point in doing much else because the AR(1) model is useless. You are simply creating spurious trends.
At a minimum, post some correlograms. Show us how long it takes for the correlations to decay. Show us the “Q” tests.
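The diagnostics 2slugs is asking for take only a few lines of R; the series x below is a built-in placeholder, not the actual anomaly data:

```r
# Correlogram and portmanteau "Q" test, sketched on a built-in monthly series.
x <- as.numeric(co2)

acf(x, lag.max = 48)                       # correlogram: how fast do correlations decay?
Box.test(x, lag = 24, type = "Ljung-Box")  # "Q" test for serial correlation

acf(diff(x), lag.max = 48)                 # re-check after first differencing
```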
2slugs, give it up! If you want to do something positive, take up Geoff Sherrington’s offer over at tAV to help him analyze some OZ temp data that is beyond his stats capability. He can make the data available. That would be beneficial to all.
“Finally, if he had wanted to criticize this post, all he had to do was read. The mean is slightly smoothed, hence the high Lag-1 serial correlation. — this was left right there in the post for everyone to read. When a statistician sees this and the high AR coefficient, he knows that the best you can do with these intervals is a rough estimate. Using filtered data is a no-no for significance analysis; again, that’s not the point of the post.”
Herein lies the problem. When you see a high AR coefficient this is telling you that you CAN do better. That’s the point of having an ARIMA model and not just an AR(1) model. The abuse here is that the point of an AR(1) model is not to smooth data… that may be how climatologists use AR(1) models, but that’s not the point. The point of an ARIMA model is to purge serial correlation.
If Lucia Liljegren thinks climate data is noisy, then she ought to work with Army data. The temperature data that I’ve seen is reasonably well behaved. It’s not like you’re dealing with standard errors that are measured in thousands of percents. Even the whizzes at MIT are shocked when they see the variability in Army data.
The problem with temperature data is not noise; it’s serial correlation. That’s what they should be worried about. Here’s a real Dick and Jane explanation. Take an AR(1) model like this: Y(t) = [a*Y(t-1)] + e(t) where Y(t) is the current observation, Y(t-1) is the lagged observation, e(t) is the current period random term and “a” is the AR(1) coefficient. If a = 1.00, then ALL of last period’s observation…including the random error from last period (i.e., e(t-1)) is carried over to Y(t). This is a random walk. If you use it to “filter” your data, all you are doing is piling up error terms. At this point “noisy” data is the least of your problems.
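His point is easy to see by simulation; a minimal R sketch (my illustration, not 2slugs’s code) running the same AR(1) recursion with a = 0.5 versus a = 1.0:

```r
# Same AR(1) recursion, two values of the coefficient a.
set.seed(42)
e <- rnorm(360)

ar_half <- filter(e, filter = 0.5, method = "recursive")  # stationary AR(1), a = 0.5
walk    <- cumsum(e)                                      # a = 1: errors pile up (random walk)

# The a = 1 series wanders arbitrarily far, so any straight line fit to it is spurious.
matplot(cbind(ar_half, walk), type = "l", lty = 1,
        xlab = "month", ylab = "simulated series")
```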
“Herein lies the problem” – and Jeff Id pointed it out.
And all you needed to do was ask, and you would have found out that this was a well understood point at tAV. That’s exactly why I put the single best criticism here for all to read, so that people understand that there were no shenanigans, no lack of understanding, no sleight of hand and no extreme claims being made from a significance value which was KNOWN to be questionable.
The value is part of a standard routine which plots trendline and significance. It’s more work to remove the calculation than work with it, so Nic just pointed out the high value and moved on.
“If Lucia Liljegren thinks climate data is noisy, then she ought to work with Army data.”
Oddly enough, I do some work with Army data, though probably not the same Army data 2slugs works with. That said, even if I didn’t, I don’t see how your suggestion that I should work with Army data would support your position about what JeffId has done.
I’m not entirely sure what lies at the heart of your complaint about what Jeff did. Do you think applying least squares to temperature time series will result in biased estimates of the mean trend for some reason? If yes, why? Are you worried his uncertainty intervals are too small because he used AR(1) with a lag correlation coefficient of 0.96, and used the traditional (1+r)/(1-r) correction? (If the underlying noise process is AR1, they will be too small. That’s what I told CoRev when he asked me. But decreeing them “worthless” would seem something of an overstatement.) Do you just want to see the correlogram so you can see if the lagged correlations decay more or less rapidly than one would expect based on the AR1 assumptions?
I’m pretty sure Jeff Id would show these things if you asked directly instead of simply decreeing things worthless and then veiling your specific concerns in mystery. He may well have looked at them, but everyone prioritizes choices when deciding which information to include in a post/ paper/ thesis etc.
I’m sure when you work with people in the army, you hope they will express their specific concerns with enough detail to permit a response. That way, you can address the concerns and either go back and incorporate their concerns or show them you have already looked at that.
Anyway, bye. Probably won’t be coming back. As I told CoRev, my Dad’s in and out of the hospital, so blog commenting is light for now.
CoRev– There is nothing wrong with Fortran per se. People can write good or bad code in all sorts of languages. People have been decreeing it dead since the 80s, but it doesn’t die because it’s a useful flexible language and especially suitable for certain types of computations (e.g. computational fluid dynamics.)
Old codes become spaghetti when they are written by a series of contributors, no one ever specifically sets up a project to clean the code up, and each modification is done under time constraints with no extra budget to clean up the code when done. I know a lot of you guys don’t like Fortran, but the problem isn’t Fortran per se.
This saga does not want to end. NicL, the author of the Air Vent (tAV) article, has responded to 2slugs’s comments there. This is his response:
“NicL said, January 26, 2010 at 11:57 am
Sorry I was off-watch when the debate about AR(1) coefficients and ARIMA blew up. As Jeff says, the near-unity AR(1) coefficients stated in the plots are entirely due to the fairly short period smoothing applied to make the charts of monthly data easier to interpret – something that, as Jeff says, I pointed out in the post. If I had used raw monthly anomaly data, the typical AR(1) coefficient would have been more like 0.2.
There seem to be few grounds for using ARIMA [Integrated] models in preference to ARMA ones for temperature data. Whilst many economic and financial time series may be non-stationary (or at least it is impossible to reject a null hypothesis of a unit root), so an integrated model may well be appropriate, surface temperatures appear to have moved in a limited range for billions of years and there are physical grounds for expecting that to be the case. Last year I performed unit root tests on various ground station temperature series and a null unit root hypothesis was rejected each time I tested it.
What there is more of a case for using for climate data is a Fractionally Integrated (long memory) ARFIMA model. There is evidence for long memory in many climate series. When I looked at this last year I found that a simple (0,d,0) model with the long memory parameter d of about 0.2 fitted the data reasonably well, although there was a bit of AR(1) autocorrelation on top of the long memory effect. R function fracdiff can be used for investigating long memory effects.
For monthly temperature anomaly series with the sort of trends shown in the charts I gave, it makes virtually no difference whether or not the data is detrended before calculating AR(1) coefficients.”
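As a pointer for anyone who wants to try NicL’s suggestion, a minimal sketch using the fracdiff package on simulated data (my illustration, not his code; the package must be installed first):

```r
# Long-memory check via fracdiff (install.packages("fracdiff") first).
# x stands in for a monthly anomaly series.
library(fracdiff)

set.seed(7)
x <- fracdiff.sim(1200, d = 0.2)$series  # simulate an ARFIMA(0, 0.2, 0) process

fit <- fracdiff(x, nar = 1)  # estimate d with an AR(1) term on top of the long memory
fit$d                        # should come back near 0.2
fit$ar                       # the residual AR(1) autocorrelation
```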
So, is the bottom line here that transferring accepted practices from one discipline to another is not always the best solution? After this dialog I would say yes.
Another lesson is that when dealing with folks with more detailed knowledge, don’t snipe from the sidelines about having a better analysis alternative without at least doing the basic work to prove your point.
2slugs has raised interest in his approach, and has been invited to participate in future efforts. I sincerely hope he takes advantage of the offer, as his experience/special skill set may improve the overall findings.
Note these phrases in the article: “According to NASA’s methods,” and “we’d be remiss if we didn’t mention the fact that the IPCC messed up when it picked a target year for when melting of Himalayan glaciers reaches critical levels.”
The first comment is there because their methods are under review and being found wanting. Notice my commentary on the other open thread. Additionally, NASA and NOAA are being drawn into the Climate gate issue with their temperature calclulation methodology.
The second comment would have been a huge issue before Climate gate, but finding that one of the major claims in the latest IPCC climate report is undeniably false, and worse derived from a non-science source (not non-peer reviewed) source. There are additional problems with the report that are essentially only being discussed in the blogs and not in the media.
The whole climate change landscape has changed since Climate gate. Nearly every catastrophic claim is now being questioned when they are not being laughed at.
As to the warmest Day, week, month, year, decade, century, etc. ho hum. Thank heavens the temps continue to rise as we recover from the Little Ice Age (LIA). It should continue til the next LIA or worse.
The post hoc ergo propter hoc solution is obvious. Start World War III. Look at all the cold World Wars have created or saved.
Note these phrases in the article: “According to NASA’s methods,” and “we’d be remiss if we didn’t mention the fact that the IPCC messed up when it picked a target year for when melting of Himalayan glaciers reaches critical levels.”
The first comment is there because their methods are under review and being found wanting. Notice my commentary on the other open thread. Additionally, NASA and NOAA are being drawn into the Climate gate issue with their temperature calclulation methodology. See the graphs below which compare GHCN’s current temps versus the raw data. These graphs are from here: http://noconsensus.wordpress.com/2010/01/19/long-record-ghcn-analysis/
The second comment would have been a huge issue before Climate gate, but finding that one of the major claims in the latest IPCC climate report is undeniably false, and worse derived from a non-science source (not non-peer reviewed source as reported in this article) has morphed into a “So what’s new?” issue. There are additional problems with that IPCC report that are essentially only being discussed in the blogs and not in the media.
The whole climate change landscape has changed since Climate gate. Nearly every catastrophic claim is now being questioned by skeptical scientists when they are not being laughed at.
As to the warmest day, week, month, year, decade, century, etc. ho hum. Thank heavens the temps continue to rise as we recover from the Little Ice Age (LIA). Rising temps should continue til the next LIA or worse occurs.
This is for those who still beleive, and Dale and 2slugs. Comment was lifted from one of Watts articles here: http://wattsupwiththat.com/2010/01/23/sanity-check-2008-2009-were-the-coolest-years-since-1998-in-the-usa/
“…
So add it up. Nearly every thing that we think may have a significant influence on global temperature was in full positive mode during the 20th Century: the sun, ocean cycles and greenhouse gases.
So what lies ahead for the 21st Century? We will have 2 cool phases of the PDO and only one warm one. The sun, apparently, is settling down and may be headed to a minimum. CO2 will continue to increase, but due to its logarithmic influence on climate, the increase will have less and less impact. The net result will be global cooling. Any other conclusion simply ignores the facts!
…. (paragraph removed as it pertained to the original article)
We lingered near the top of the curve for several years and are now going down. In the past, El Ninos where step jumps in a gradually increasing slope of global temperatures. The current El Nino is manifesting as a temporary slowing of the cooling trend. Big difference.”
I included this comment as a response to 2slugs in the other thread. “In the past, El Ninos where step jumps in a gradually increasing slope of global temperatures.” So, not only bob Tisdale believes in the el Nino step impacts.
CoRev,
All gumint methods are suspect.
It ain’t only entitlements spending which are worthless shut NASA down as well as DoD.
CoRev,
I am inspired by a guy who predicts the output of the bazillion H bombs making up good ole Sol. Over the next 100 years even!!!
Does he recommend a type of Kool AID?
rdan,
As CoRev said, the NASA data you put up is under fire.
The scientists at NASA’s GISS are widely considered to be the world’s leading researchers into atmospheric and climate changes. And their Surface Temperature (GISTemp) analysis system is undoubtedly the premiere source for global surface temperature anomaly reports.
In creating its widely disseminated maps and charts, the program merges station readings collected from the Scientific Committee on Antarctic Research (SCAR) with GHCN and USHCN data from NOAA.
It then puts the merged data through a few “adjustments” of its own.
First, it further “homogenizes” stations, supposedly adjusting for UHI by (according to NASA) changing “the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations.” Of course, the reduced number of stations will have the same effect on GISS’s UHI correction as it did on NOAA’s discontinuity homogenization – the creation of artificial warming.
Furthermore, in his communications with me, Smith cited boatloads of problems and errors he found in the Fortran code written to accomplish this task, ranging from hot airport stations being mismarked as “rural” to the “correction” having the wrong sign (+/-) and therefore increasing when it meant to decrease or vice-versa.http://www.americanthinker.com/2010/01/climategate_cru_was_but_the_ti.html
There is a good chance this was all a con.
CoRev,
Everyone agrees the Himalayan glacier thing was a big screw up. That hardly proves they are wrong about the central issue.
The Climategate stuff was about HadleyCRU data. The chart that rdan put up is GISS data.
And I see that you’re back to posting amateurish charts from third parties. Are you going to stand by and defend those charts?
Take it up with Jeff. If they’re so amateurish have him correct them. A simple dialog works very well.
Predicting the next 100 Yrs? No, just showing the trend using historical data. Data, BTW, which is longer than the global temp data and some what less noisy, albeit not necessarily more accurate. Solar data has gone from using the mark 1 eyeball to today’s more sophisticated electronic/magnetic measurement tools.
Sammy, the wheels have come off, but the oxen haven’t been unhitched. Funding should begin to dry up soon, and the noise level, except for the laughter should lessen. If the Copenhagen funding results are any indication, it looks like the funding issue has arrived. The attack on Pachauri’s conflicts of interest is another indication that the support is diminishing. Which, finally brings me to Big Al Gore. Anyone heard from him lately?
Are a bazillion H bombs randomly exploding in the sun capable of exhibiting trends, a trend would suppose an underlying causation.
What I would like to know is does your boy, whom you quote, recommend a better drink than my Stone Brewing Co. Arrogant Bastard ale?
Why don’t you and sammy exhibit some understanding and explain why the FORTRAN was used?
CoRev,
I give coberly credit, as he has been pretty silent since this all began to unravel.
It is the Warmists who should be the most angry about the series of Climategates: they came to their conclusions on the basis of purposely bogeyed-up data. So it is NOT THEIR FAULT.
The cons took advantage of their instinctive fear of mankind destroying the world (that has been exhibited throughout history). The cons made a lot of money, but their followers have been made to be public dupes who erroneously cast a lot of insults at their more skeptical blogmates.
ILSM, FORTRAN was used as the coding is that old. It was accepted as the better scientific formula programming language in the 70s. As a programming language it has been replaced, but there are still pockets of old code where it is still used.
Sammy, there is a long history of societies buying into false beliefs. Eugenics comes first to mind. The problem is it takes the better part of a generation for it to completely fade, and as we have seen the zealots hang on. Follow the money still applies.
Jay,
Is that an early endorsement of Sarah Palin?
Imagine how the particulates raised by the firebombings of Dresden and Tokyo and London helped reflect heat back into space and the suspended sulfate haze helped cool the stratosphere. It just might work!
Perhaps the belief that the greenhouse effect does not exist, that we can simply keep adding carbon, methane, and other warming gases to the atmosphere without paying a cost for it, and the decision to believe that scientists behave like teenage girls chasing a popular theory because we don’t like the answers they are getting to the questions they ask is also an example of a society buying into “false beliefs.”
Eugenics isn’t a false belief; when we do it to farm animals and pets it is called breeding. When humans do it to themselves we get Tay Sachs, Sickle Cell, and a host of other unintentional genetic ailments.
What is better than FORTRAN for what it does?
Looks like we are due for an ABC correction. LOL!
ILSM, many in the climate community use R. ” 1998. Language and environment for statistical computation and graphics. ” Since Climatology is heavily derived from stats you can see why.
DH, I don’t know of anyone seriously looking at the climate that thinks: “Perhaps the belief that the greenhouse effect does not exist,“. So starting from a false premise takes us down a false path.
I’m not sure what you are saying here: “without paying a cost for it,” and in your list of warming gases you forgot H2O.
What is this supposed to be about? “and the decision to believe that scientists behave like teenage girls chasing a popular theory” I don’t know anyone who believes this either, but there are many who believe many scientists write their papers (and grant requests) around CC since there is more grant money following that line of science than many others.
CoRev,
The reason many in the “climate community” use “R” is because many in the climate community are eager amateurs and “R” is free and subroutines are ubiquitous on the web. “R” is very versatile and is kind of “wiki” like in that a world of users make their programs and subroutines freely available. But power users do not use “R”. For that you’ve got to use something like MATLAB or SAS….but those are some serious dollars and your average amateur sitting at his home PC doesn’t have the wherewithal to fork over those kinds of dollars. But places like GISS/NASA do.
I agree!
What is really neat, is that when I did a stats course in ’79 I did my very small regressions on punch cards through a mainframe.
In 2001 I took a research course on my own PC using a souped up add on to excel and could play with much larger regressions………………..
Maybe someday I will find a new add on and do some regressions to see if it is faster now.
Slugs,
I’m a SAS user. The amateurs on this site ought to remember this when then challenge me on quantitative issues.
CoRev,
ILSM, FORTRAN was used as the coding is that old. It was accepted as the better scientific formula programming language in the 70s. As a programming language it has been replaced, but there are still pockets of old code where it is still used.
Fortran is still being used today for scientific computing when you are writing programs using raw code (rather than using R, SAS, or Matlab). There is a lot of free high quality code on the net written in Fortran.
Cantie, I don’t diosagree, but it is sleldom taught in the college labs, and ILSM’s thought was from when it dominated scientific programming. It is far from that today.
My onw background includes supporting the Apollo missions. Yuop, pretty old, but I am intimately familiar with the early days of computing. Y’ano when use of machine language, assembler code and Fortran were common.
I am old enough to have met ADM Grace Hopper, and received one of her nanosecond handouts.
CoRev,
I think you might be surpised at how much of of that old code is still around, and being used. Moreover, I can see how someone who was working on a global warming model who started kicking around on the internet looking for algorithms would have found fortran code to solve the problems they were working on. Here are a few of the sites I found doing the same thing:
http://gams.nist.gov/serve.cgi
http://plato.asu.edu/guide.html
http://www.netlib.org/lapack/
“IPCC scientist admits Glaciergate was about influencing governments”
The scientist behind the bogus claim in a Nobel Prize-winning UN report that Himalayan glaciers will have melted by 2035 last night admitted it was included purely to put political pressure on world leaders.
http://www.americanthinker.com/blog/2010/01/ipcc_scientist_admits_glacierg.html
“China to rich nations: Hand out climate money now”
Brazil, China, India and South Africa called Sunday for developed countries to quickly begin handing over the $10 billion pledged in Copenhagen to poor countries to help them deal with the effects of climate change.
http://seattletimes.nwsource.com/html/businesstechnology/2010879173_apclimateindia.html
As Al Gore would say/shout: “They played on our fears!”
Now this may be disheartening and might al;so explain the 70s Ice Age craze. We are approaching the 1500 year mark, ~1480 Yrs since the LIA. I’m not big on Wiki, but provide this as a talking point.
Bond events are North Atlantic climate fluctuations occurring every ≈1,470 ± 500 years throughout the Holocene. Eight such events have been identified, primarily from fluctuations in ice-rafted debris. Bond events may be the interglacial relatives of the glacial Dansgaard-Oeschger events, with a magnitude of perhaps 15-20% of the glacial-interglacial temperature change.
The theory of 1,500-year climate cycles in the Holocene was postulated by Gerard C. Bond of the Lamont-Doherty Earth Observatory at Columbia University, mainly based on petrologic tracers of drift ice in the North Atlantic.[1][2]
The existence of climatic changes, possibly on a quasi-1,500 year cycle, is well established for the last glacial period from ice cores. Less well established is the continuation of these cycles into the holocene. Bond et al. (1997) argue for a cyclicity close to 1470 ± 500 years in the North Atlantic region, and that their results imply a variation in Holocene climate in this region. In their view, many if not most of the Dansgaard-Oeschger events of the last ice age, conform to a 1,500-year pattern, as do some climate events of later eras, like the Little Ice Age, the 8.2 kiloyear event, and the start of the Younger Dryas.
The North Atlantic ice-rafting events happen to correlate with most weak events of the Asian monsoon over the past 9,000 years,[3][4] as well as with most aridification events in the Middle East.
Now this may be disheartening and might also explain the 70s Ice Age craze. We are approaching the 1500 year mark, ~1480 Yrs since the LIA. I’m not big on Wiki, but provide this as a talking point.
Bond events are North Atlantic climate fluctuations occurring every ≈1,470 ± 500 years throughout the Holocene. Eight such events have been identified, primarily from fluctuations in ice-rafted debris. Bond events may be the interglacial relatives of the glacial Dansgaard-Oeschger events, with a magnitude of perhaps 15-20% of the glacial-interglacial temperature change.
The theory of 1,500-year climate cycles in the Holocene was postulated by Gerard C. Bond of the Lamont-Doherty Earth Observatory at Columbia University, mainly based on petrologic tracers of drift ice in the North Atlantic.[1][2]
The existence of climatic changes, possibly on a quasi-1,500 year cycle, is well established for the last glacial period from ice cores. Less well established is the continuation of these cycles into the holocene. Bond et al. (1997) argue for a cyclicity close to 1470 ± 500 years in the North Atlantic region, and that their results imply a variation in Holocene climate in this region. In their view, many if not most of the Dansgaard-Oeschger events of the last ice age, conform to a 1,500-year pattern, as do some climate events of later eras, like the Little Ice Age, the 8.2 kiloyear event, and the start of the Younger Dryas.
The North Atlantic ice-rafting events happen to correlate with most weak events of the Asian monsoon over the past 9,000 years,[3][4] as well as with most aridification events in the Middle East.[5] Also, there is widespread evidence that a ≈1,500 yr climate oscillation caused changes in vegetation communities across all of North America.
CoRev,
Took your advice and took it up with Jeff. He’s clueless. The good news is that there was an effort to detrend and account for seasonality. The approach was rather ad hoc, but at least there was an effort. The bad news is that Jeff doesn’t understand that an AR(1) model is unacceptable when the AR(1) coefficient is very close to 1.00 (most of his were more than 0.96). So taking the lag adjusted data and using it to fit a “trend” line is meaningless. He’s just fitting a random walk. He admits that an AR(1) model probably isn’t the best (wow!, talk about understatement), but his excuse was that a lot of other people use it as well. On that I agree…you see it used a lot. But convenience is not an excuse for bad analysis.
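To make the random-walk point concrete, here is a minimal R sketch (illustrative only, not code or data from any of the posts discussed): fitting an OLS trend line to a pure random walk routinely produces a "significant" slope even though there is no trend by construction.

set.seed(42)
n <- 500
rw <- cumsum(rnorm(n))       # a pure random walk: AR(1) with coefficient exactly 1.00
x <- seq_len(n)
fit <- lm(rw ~ x)            # naive OLS "trend" through the walk
summary(fit)$coefficients    # the slope frequently looks significant despite no real trend
acf(residuals(fit))          # and the residuals remain heavily autocorrelated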
sammy,
That’s a completely dishonest representation of what the guy said. But I’ve come to expect as much from the American Stinker. The AT article is a lie because it tries to imply that scientists knowingly published false information in order to influence politicians. That would indeed be reprehensible, but that’s not what happened. There is NO evidence that anyone at the IPCC knew that the Himalayan glacier melt statement was wrong. Should they have known? Yes. Was it unprofessional? Yes. Should the guy responsible for the screw-up be fired? Yes. But was it deliberate? No. There is nothing wrong with using Himalayan glacier melt data in order to apply political pressure. If the data had been true, then it would have been irresponsible NOT to have used the Himalayan melt as a political tool.
The American Thinker does a lot of dishonest blogging, so you really shouldn’t cite them.
http://news.yahoo.com/s/ap/20100124/ap_on_re_mi_ea/ml_israel_palestinians
Israel is determined to keep the US at war with Islam forever and it knows just how to do it. And Unkie Sam is too simpleminded to understand and too weak kneed to do anything about it. He lets shrewish wife Israel tell him what to do all the time. Sad and funny. Poor poor stupid Unkie Sam.
I do have a wonderful idea re warming skeptics. Ship them all off to some low lying atolls in the Pacific or in the Indian Ocean and make them stay there for the next 25 years. I would almost guarantee that would be the last we’d hear of them. LOL.
2slugs,
That’s a completely dishonest representation of what the guy said.
Hardly. Did you read the link?
“Dr Murari Lal also said he was well aware the statement, in the 2007 report by the Intergovernmental Panel on Climate Change (IPCC), did not rest on peer-reviewed scientific research.”
“In an interview with The Mail on Sunday, Dr Lal, the co-ordinating lead author of the report’s chapter on Asia, said: ‘It related to several countries in this region and their water sources. We thought that if we can highlight it, it will impact policy-makers and politicians and encourage them to take some concrete action.
‘It had importance for the region, so we thought we should put it in.’
http://www.dailymail.co.uk/news/article-1245636/Glacier-scientists-says-knew-data-verified.html#ixzz0dUx6pwXe
2slugs, note the quotation marks.
sammy,
Yes, I can read. Can you? The issue isn’t whether or not they thought the Himalayan glacier melt stuff would influence political support. Of course they did. And there’s nothing wrong with that. The ONLY issue is whether or not they knew that the statement was false. He said that he knew the claim was not supported by peer-reviewed research. That’s a reason for removing him for incompetence. The claim never should have made it into the report. That is not an admission that the IPCC was being dishonest. The distinction here really isn’t that hard to grasp.
2slugbaits,
“The ONLY issue is whether or not they knew that the statement was false.”
You are impossible. The Thinker and the Daily Mail NEVER SAID THE IPCC WAS BEING DISHONEST. That is your strawman.
Again, a quote, from the opening paragraph no less:
“In a stunning admission, the scientist responsible for publishing the part of the 2007 IPCC report on global warming in Asia says he knew the evidence for the disappearing Himalayan Glacier was suspect but allowed it into the report in order to put pressure on governments to take action.”
That’s all they are saying. Do I need to link to the definition of “suspect”?
Sammy, 2slugs is on a roll this weekend. Nothing negative re: AGW can be true because, uh, well, he thinks so. He’s done it again, y’know. Changed the subject and took you down an alternative path.
BTW, 2slugs, done that contact thing with Jeff and Bob? Done the actual work to show that you are more correct than Jeff? Thought not.
CoRev,
He has tried all his plays: divert, obscure, jargon, and strawman. He needs to try thinking for a change.
2slugs, dishonest blogging versus dishonest IPCC scientific reports? Another minor issue with AR4: “The problem is that the IPCC cited a study on severe weather event frequency that wasn’t complete yet. When it was complete in 2008, it came to an entirely different conclusion about linkage to global warming.” They did the same thing with the original Mann Hockey Stick report.
But, you just keep on believing the “science” of AGW.
Tuvalu not sinking.
http://nzclimatescience.net/index.php?Itemid=32&id=14&option=com_content&task=view
2slugs, thank you! So the model is merely “not the best”: “He admits that an AR(1) model probably isn’t the best (wow!, talk about understatement), but his excuse was that a lot of other people use it as well.” Most of those other people are THE CLIMATOLOGISTS.
So are you going to help him add some precision?
2slugs, here’s the explanation for the use of AR(1): “The reason the AR(1) gets used is it closely approximates weather noise (the tail of its transfer function is 1/f).”
After rereading Jeff’s response, I fail to see the reason for the vehemence of your statement: “He admits that an AR(1) model probably isn’t the best (wow!, talk about understatement), but his excuse was that a lot of other people use it as well. On that I agree…you see it used a lot. But convenience is not an excuse for bad analysis.”
As he said: “This was code written by Ryan O where the residuals are used in the ACF function. The AR 1,0,0 correction is a not uncommonly used value for monthly temperature anomaly in climate. I don’t believe it’s perfect but I also don’t think it’s that bad.”
So through these eyes we get two technicians discussing the best approach for “teasing” out that ole signal. Oh, and then graphing. You have failed to make a solid case that your view/approach is superior to theirs, which is common practice among many climatologists.
I have said many times that climatologists are not good statisticians. For consistency’s sake I would not recommend changing common practices today, UNLESS we are doing a complete recalculation. Then all processes, and why they were selected for use, would be thoroughly documented.
The usual way one tests different techniques is to propose an alternative approach and then compare the efficiency of the two methods in extracting the signal from the noise. If 2slugs wants to suggest a better approach, I’m all for using it, but the fact that he hasn’t come up with anything better speaks volumes.
Jeff ID has explained that the purpose of his exercise was a QC operation to replicate CRU and GISTemp using his own code. After replication, presumably the next step would be to test the effect of other assumptions. I’m afraid it was a bit of a LOL moment to see 2slugs criticize a replication for following the methodology of the analyses being replicated, and then come to this board gloating about his intellectual superiority.
Wow. XD
I found this link from a comment at tAV. Again, 2slug, I did not perform the analysis; Nic did. I did write that when AR1 gets near 1 the assumptions for the DOF correction break down – yet you come here and claim I don’t understand that??
AR1 is not nearly as bad an estimate as you claim for monthly temp data, and the seasonal correction (which you started by saying we did wrong) is absolutely NOT ad hoc, as you assert. It’s how you calculate anomaly in climatology.
You came by and accused us of not detrending before calculation and of not taking seasonality into account, and claimed that we should use a 0,1,0 model – which would be stupid. Then you come back here and claim victory.
I freely and openly admit Nic’s analysis of trend significance isn’t perfect; who cares. He wasn’t making any huge claims about significance anyway. He used filtered data for his analysis, which all of the regular Air Vent readers INCLUDING NIC understand is a no-no for trend significance analysis. THIS WOULD HAVE BEEN YOUR BEST CRITICISM IF YOU UNDERSTOOD WHAT YOU WERE TALKING ABOUT!
You used a shotgun criticism like a person with no understanding of what’s been done – like standing in a room and shouting “error!!” at everything you see in order to try to find something wrong. Then, when your points were answered, you ignored them and disappeared to claim victory somewhere else.
Pretty pathetic Dr. Slug….
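For readers unfamiliar with the anomaly calculation Jeff refers to, here is a minimal R sketch of the standard approach, subtracting each calendar month’s long-run mean; the series below is fabricated purely for illustration.

temps <- ts(10 + sin(2 * pi * (1:240) / 12) + rnorm(240, sd = 0.5), frequency = 12)  # fake monthly data
clim <- tapply(temps, cycle(temps), mean)   # climatology: long-run mean for each calendar month
anomaly <- temps - clim[cycle(temps)]       # subtracting it removes the seasonal cycle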
Where’s 2slugs when we need him to respond? I hope this clarifies that what 2slugs was doing was no more than random sniping from the sidelines, hoping to hit something and, I suspect, to impress those here.
I have also been in contact with Lucia Liljegren, who blogs at http://rankexploits.com/musings/
She has been analyzing the IPCC model outputs (using several statistical approaches) compared with observed readings.
Her advice was to ask 2slugs just what he meant. As she put it: “Yes. But what does he mean by ‘unacceptable’? The traditional (1+rho)/(1-rho) correction underestimates the uncertainty intervals, but we can quantify how much. Nychka suggests a correction for small sample size, but it seems to overcorrect.
…
Perhaps now we know why 2slugs has been so leery of visiting the source blogs. This is what Jeff Id added at tAV: “
#81,
I just left a reply at the link you gave. Slug comes across as reasonable here, then goes off breathlessly declaring victory on the other site like he’s won a boxing championship. So, let’s have some fun with slug. Errors he’s made since stopping by:
1 “A quick visual check of the raw data suggests that it is nonstationary, so right out of the box that means you cannot use an AR(1) model”
Actually the calculation uses detrended data. Since we are comparing long term to short term trends this is a reasonable method and can be used.
2 Jeff did the post. — The post was by Nic L
3 “So the initial model probably should have been ARIMA(0,1,0) rather than ARIMA(1,0,0).” Failed to recognize that anomaly has seasonality removed
4 “The raw data was seasonal, which suggests that a seasonal ARIMA model would have been appropriate. Again, a misspecification error.” – same as 3
5 “The slope coefficient in the AR(1) model does not represent a trend. The slope coefficient in an AR(1) model tells you how much of the previous observations random error term (i.e., residual) is carried over into the next period’s observation.” – Failed to recognize that the data he was looking at was temperature anomaly data not AR lag term.
6 “Good to know that there was some attempt to remove seasonality, although the method is a bit ad hoc.” – Failed to recognize temperature anomaly is a climate standard. I’m sure climate science would be interested in hearing Slug’s improved method for taking seasonality into account.
7 “Finally, you are taking the results from the AR(1) model and then estimating a trend line.” – Fails again to recognize that the trend line is for the temperature data.
So, 7 errors in two posts; then he goes over to the other blog and says:
8 “The approach was rather ad hoc” – again failing to recognize that the data he’s looking at is anomaly data which is a standard in climate science.
9 “The bad news is that Jeff doesn’t understand that an AR(1) model is unacceptable when the AR(1) coefficient is very close to 1.00 (most of his were more than 0.96).” – Again, I told him the DOF estimate breaks down when we get near 1. How is that not understanding? And he has failed to recognize yet again that I didn’t write this post.
10 “his excuse was that a lot of other people use it as well” – It’s not an excuse, it’s just common in climate science and happens to be part of the plot routine used by Nic.
Finally, if he had wanted to criticize this post, all he had to do was read.
“The mean is slightly smoothed, hence the high Lag-1 serial correlation.” – this was left right there in the post for everyone to read. When a statistician sees this and the high AR coefficient, he knows that the best you can do with these intervals is a rough estimate. Using filtered data is a no-no for significance analysis; again, that’s not the point of the post.
Nic was very open about these facts; perhaps he should have put in a better explanation for Slug, but guys like Carrick (who doesn’t mind a good argument) probably didn’t have any trouble figuring out this post and wouldn’t have made any of the ten errors above. I didn’t have any trouble loosely interpreting the confidence intervals. But then again, I didn’t drop by a blog, skim the post, and then start making one accusation of error after another with no respect for the facts or computer code laid out in front of me.”
I think the message is clear. If 2slugs wants to add value at these other blogs, then it looks like the doors are open, but coming here and sniping adds nothing […]
2slugs has been called out (finally).
Jeff Id,
That may be how you do things in climatology, but it’s not how you do things in other fields. Believe it or not, it really does matter whether the residuals are taking you along a random walk or not. The data you used was levels based. Levels data in general, and temperature data in particular, tend to show a lot of serial correlation. The very first thing you do in any time series analysis is to see if the data needs to be differenced. The fact that the AR(1) results showed coefficients very close to 1.00 is very strong evidence for taking first differences. If you fail to do that, then there’s no point in doing much else, because the AR(1) model is useless. You are simply creating spurious trends.
At a minimum, post some correlograms. Show us how long it takes for the correlations to decay. Show us the “Q” tests.
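For what it is worth, those diagnostics are one-liners in base R. Assuming the monthly anomaly series sits in a vector called anomaly (a hypothetical name), something like:

acf(anomaly, lag.max = 48)                        # correlogram: how fast do the lagged correlations decay?
pacf(anomaly, lag.max = 48)                       # partial correlogram
Box.test(anomaly, lag = 24, type = "Ljung-Box")   # the portmanteau "Q" test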
2slugs, give it up! If you want to do something positive, take up Geoff Sherrington’s offer over at tAV to help him analyze some OZ temp data that is beyond his stats capability. He can make the data available. That would be beneficial to all.
CoRev,
Finally, if he had wanted to criticize this post, all he had to do was read.
“The mean is slightly smoothed, hence the high Lag-1 serial correlation.” – this was left right there in the post for everyone to read. When a statistician sees this and the high AR coefficient, he knows that the best you can do with these intervals is a rough estimate. Using filtered data is a no-no for significance analysis; again, that’s not the point of the post.
Herein lies the problem. When you see a high AR coefficient, this is telling you that you CAN do better. That’s the point of having an ARIMA model and not just an AR(1) model. The abuse here is that the point of an AR(1) model is not to smooth data… that may be how climatologists use AR(1) models, but that’s not the point. The point of an ARIMA model is to purge serial correlation.
CoRev,
Also posted over at Jeff Id’s place.
If Lucia Liljegren thinks climate data is noisy, then she ought to work with Army data. The temperature data that I’ve seen is reasonably well behaved. It’s not like you’re dealing with standard errors that are measured in thousands of percent. Even the whizzes at MIT are shocked when they see the variability in Army data.
The problem with temperature data is not noise; it’s serial correlation. That’s what they should be worried about. Here’s a real Dick and Jane explanation. Take an AR(1) model like this: Y(t) = [a*Y(t-1)] + e(t) where Y(t) is the current observation, Y(t-1) is the lagged observation, e(t) is the current period random term and “a” is the AR(1) coefficient. If a = 1.00, then ALL of last period’s observation…including the random error from last period (i.e., e(t-1)) is carried over to Y(t). This is a random walk. If you use it to “filter” your data, all you are doing is piling up error terms. At this point “noisy” data is the least of your problems.
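A short simulation (again illustrative only) shows the difference between a stationary AR(1) and the a = 1.00 random-walk case described above:

set.seed(1)
simulate_ar1 <- function(a, n = 1000) {
  y <- numeric(n)
  for (t in 2:n) y[t] <- a * y[t - 1] + rnorm(1)   # Y(t) = a*Y(t-1) + e(t)
  y
}
var(simulate_ar1(0.5))   # settles near 1 / (1 - 0.5^2): old shocks decay away
var(simulate_ar1(1.0))   # a = 1.00 random walk: error terms pile up and variance grows with n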
“Herein lies the problem.” – and Jeff Id pointed it out.
And all you needed to do was ask, and you would have found out that this was a well-understood point at tAV. That’s exactly why I put the single best criticism here for all to read, so that people understand that there were no shenanigans, no lack of understanding, no sleight of hand and no extreme claims being made from a significance value which was KNOWN to be questionable.
The value is part of a standard routine which plots trendline and significance. It’s more work to remove the calculation than work with it, so Nic just pointed out the high value and moved on.
Jeff Id,
Simple question. Instead of always using an AR(1) model, why not go through the normal Box-Jenkins procedures?
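For readers following along, the Box-Jenkins loop 2slugs means runs roughly as below, sketched in base R with anomaly again standing in as a hypothetical series name:

acf(anomaly); pacf(anomaly)                 # 1. identify: does the data need differencing? which orders?
fit <- arima(anomaly, order = c(1, 0, 0))   # 2. estimate a candidate model (AR(1) here as a start)
tsdiag(fit)                                 # 3. diagnose: residual ACF and Ljung-Box p-values
AIC(fit)                                    # 4. compare against rival (p, d, q) choices and iterate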
2slugs, this is a funny admission (posted by 2slugs at Jeff Ids): “Oh…I’m sure I’m a pompous ass too, but that’s a minor concern.”
Oddly enough, I do some work with Army data, though probably not the same Army data 2slugs works with. That said, even if I didn’t, I don’t see how your suggestion that I should work with Army data would support your position about what Jeff Id has done.
I’m not entirely sure what lies at the heart of your complaint about what Jeff did. Do you think applying least squares to temperature time series will result in biased estimates of the mean trend for some reason? If yes, why? Are you worried his uncertainty intervals are too small because he used AR(1) with a lag correlation coefficient of 0.96, and used the traditional (1+r)/(1-r) correction? (If the underlying noise process is AR1, they will be too small. That’s what I told CoRev when he asked me. But decreeing them “worthless” would seem something of an overstatement.) Do you just want to see the correlogram so you can see if the lagged correlations decay more or less rapidly than one would expect based on the AR1 assumptions?
I’m pretty sure Jeff Id would show these things if you asked directly instead of simply decreeing things worthless and then veiling your specific concerns in mystery. He may well have looked at them, but everyone prioritizes choices when deciding which information to include in a post/ paper/ thesis etc.
I’m sure when you work with people in the Army, you hope they will express their specific concerns with enough detail to permit a response. That way, you can address the concerns and either go back and incorporate them or show that you have already looked at that.
Anyway, bye. Probably won’t be coming back. As I told CoRev, my Dad’s in and out of the hospital, so blog commenting is light for now.
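For reference, the traditional (1+r)/(1-r) correction Lucia mentions is easy to sketch in R; the numbers below are hypothetical, chosen only to match the 0.96 coefficient discussed above.

rho <- 0.96                          # lag-1 autocorrelation of the residuals
inflation <- (1 + rho) / (1 - rho)   # variance inflation factor: 49 here
se_naive <- 0.01                     # a made-up naive OLS slope standard error
se_naive * sqrt(inflation)           # corrected s.e. is ~0.07, i.e. seven times wider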
CoRev–
There is nothing wrong with Fortran per se. People can write good or bad code in all sorts of languages. People have been decreeing it dead since the 80s, but it doesn’t die, because it’s a useful, flexible language especially suited to certain types of computation (e.g. computational fluid dynamics).
Old codes become spaghetti when they are written by a series of contributors, no one ever specifically sets up a project to clean the code up, and each modification is done under time constraints with no extra budget for cleanup when done. I know a lot of you guys don’t like Fortran, but the problem isn’t Fortran per se.
This saga does not want to end. NicL, the author of the Air Vent (tAV) article, has responded to 2slugs’ comments there. This is his response:
“NicL said, January 26, 2010 at 11:57 am:
Sorry I was off-watch when the debate about AR(1) coefficients and ARIMA blew up.
As Jeff says, the near-unity AR(1) coefficients stated in the plots relate to smoothed series and are entirely due to the fairly short-period smoothing applied to make the charts of monthly data easier to interpret – something that, as Jeff says, I pointed out in the post. If I had used raw monthly anomaly data, the typical AR(1) coefficient would have been more like 0.2.
There seem to be few grounds for using ARIMA [Integrated] models in preference to ARMA ones for temperature data. Whilst many economic and financial time series may be non-stationary (or at least it is impossible to reject a null hypothesis of a unit root), so that an integrated model may well be appropriate, surface temperatures appear to have moved in a limited range for billions of years and there are physical grounds for expecting that to be the case. Last year I performed unit root tests on various ground station temperature series, and a null unit root hypothesis was rejected each time I tested it.
What there is more of a case for using for climate data is a Fractionally Integrated (long memory) ARFIMA model. There is evidence for long memory in many climate series. When I looked at this last year I found that a simple (0,d,0) model with the long memory parameter d of about 0.2 fitted the data reasonably well, although there was a bit of AR(1) autocorrelation on top of the long memory effect. R function fracdiff can be used for investigating long memory effects.
For monthly temperature anomaly series with the sort of trends shown in the charts I gave, it makes virtually no difference whether or not the data is detrended before calculating AR(1) coefficients.”
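For anyone who wants to try the long-memory fit NicL describes, a minimal sketch using the fracdiff package he names (anomaly is again a hypothetical series):

library(fracdiff)
fd <- fracdiff(anomaly, nar = 0, nma = 0)   # fit an ARFIMA(0, d, 0) model
fd$d                                        # estimated long-memory parameter; NicL reports about 0.2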
So, is the bottom line here that transferring accepted practices from one discipline to another is not always the best solution? After this dialogue, I would say yes.
Another lesson is that when dealing with folks with more detailed knowledge, don’t snipe from the sidelines about having a better analysis alternative without at least doing the basic work to prove your point.
2slugs has raised interest in his approach, and has been invited to participate in future efforts. I sincerely hope he takes advantage of the offer, as his experience/special skill set may improve the overall findings.