The Hockey Stick Resurrected By Marcott et al. 2013

Guest post by Hank

Many readers know that the blogosphere is abuzz with alarmists excited over the latest 73-proxy, 11,280-year climate reconstruction by Marcott et al., published March 7th in the journal Science. Here’s a link to the abstract.

And why are alarmists so excited?


Link to larger image

Do you see it at the very right of the reconstruction? It’s the famous hockey stick. To borrow Frankenstein’s famous words when his monster was brought to life – “it’s alive!”

Alarmists are high-fiving themselves because the Marcott study purports to have found the same blade of the hockey stick in the 73 proxies used in the study, offering further proof that anthropogenic CO2 has warmed the earth in an unprecedented way over the past 150 years.

Most who read suyts space know that I enjoy working with paleo climate data to generate my own charts like here and here. It happens that Marcott et al. made their proxy database available to the public so I downloaded it from here to create my own graphs. I created the above graph from the study’s “Global Temperature Stack” dataset.

The raw proxy datasets were included in the Marcott database. Studying the proxies, I discovered that only nine of the 73 proxies contained data that extended to 1950. Of those nine, only two contained data that extended to 2000. My first impulse was to think there was a bias introduced when the number of proxies fell off to nine. Perhaps one or two of the proxies were strongly biased towards warm.

Eyeballing the data in the nine proxies of interest, I looked for anomalous data points in the latest years of each dataset. I didn’t see any. What I saw in my cursory review was a slight cooling trend approaching present dates.

All of the proxy datasets in the Marcott study span more than 6,500 years and use the years 4,500 to 5,500 BP as a common reference and calibration period. The temporal resolution varied from proxy to proxy, the coarsest being 300 years per measurement. In some proxies the temperature was calibrated to the local temperature or an offset of it; in others, the temperature was expressed as an anomaly from a reference period mean.

I needed to turn apples and oranges into all apples. To do this I first normalized each proxy to its 4,500 to 5,500 yr BP mean in the same manner used in the study. Using the reference mean, all absolute temperature scales were converted to anomalies (differences from the mean). Those proxies already expressed as anomalies were simply normalized without conversion.
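For readers who want to follow along, the normalization step can be sketched in a few lines of Python. This is my own sketch, not code from the study, and the function and variable names are mine:

```python
import numpy as np

def to_anomaly(ages_bp, temps, ref_lo=4500, ref_hi=5500):
    """Express a proxy series as anomalies from its 4,500-5,500 yr BP mean."""
    ages = np.asarray(ages_bp, dtype=float)
    temps = np.asarray(temps, dtype=float)
    in_ref = (ages >= ref_lo) & (ages <= ref_hi)   # samples in the reference window
    ref_mean = np.nanmean(temps[in_ref])           # reference-period mean
    return temps - ref_mean                        # anomaly = value minus reference mean
```

A proxy already reported as anomalies goes through the same function; subtracting its own reference mean simply re-centers it without any unit conversion.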

I used a data infilling technique known as regularized expectation maximization to create a new 10-year matrix of the data series for each proxy. I chose a 10-year resolution in order to minimize high-frequency filtering. Running statistical tests in SPSS, I confirmed that the difference in variance between the raw and the regularized datasets was insignificant.
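Regularized expectation maximization is more involved than I can show in a blog post, but the shape of the step — putting every proxy onto a common 10-year grid — can be sketched like this, with plain linear interpolation standing in for the statistical infilling (a simplification, not the actual method):

```python
import numpy as np

def regrid_10yr(ages_bp, anomalies, start=0, stop=11280):
    """Resample one proxy onto a common 10-year grid.

    Linear interpolation is used here purely as an illustration; the post's
    actual infilling used regularized expectation maximization.
    """
    grid = np.arange(start, stop + 1, 10, dtype=float)
    ages = np.asarray(ages_bp, dtype=float)
    vals = np.asarray(anomalies, dtype=float)
    order = np.argsort(ages)                       # np.interp needs ascending x
    filled = np.interp(grid, ages[order], vals[order],
                       left=np.nan, right=np.nan)  # no extrapolation past the data
    return grid, filled
```

Once every proxy lives on the same grid, same-date comparisons and averages become straightforward.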

I now had a dataset for each proxy that could be aligned on 10 year boundaries with all temperatures expressed as anomalies from their respective reference period mean. I have all apples. Let the charting begin!

I output all nine proxies as CSV files and imported them into Excel to finally visualize the blade of the hockey stick. Starting at 1,500 years before present (BP), I graphed the nine proxy datasets. And here’s what I got:


Link to larger image

Hang on a second… Where’s the blade? It was supposed to look like this:


Link to larger image

Before I tackle the question of the blade I have some explaining to do. The black line on each graph is a running mean of the proxies used. You’ll notice that the X axes of the two graphs are slightly different. The proxy data use 1950 as their “Present.” For this reason, zero on the X axis of the nine proxy graph represents 1950. Negative numbers are years following 1950: the value -40 equates to 1990, and the plot ends at the year 2000 (-50). The Marcott global temperature series graph uses the year 2000 as its zero point.
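In code form the BP convention is a one-liner (my own naming, shown only to make the axis labels concrete):

```python
def bp_to_year(age_bp, present=1950):
    """Convert years before present (BP) to a calendar year; proxy 'present' is 1950."""
    return present - age_bp

# Negative BP values are years after 1950:
# bp_to_year(0) -> 1950, bp_to_year(-40) -> 1990, bp_to_year(-50) -> 2000
```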

Before I get called out on using only nine proxies for the full 1,500 years in my reconstruction, I’ll caution you to look at only the last 50 to 100 years of each graph. That’s where the action is and where the Marcott graph falls off to nine proxies also. That’s where both graphs should agree on what is shown.

Notice the little downtick at the end of the black line in the nine proxy graph above. Look at this graph from Ljungqvist’s 30 proxy reconstruction:


Do you see the downtick at the very right of the plot? I circled it in red. Although the two plots use different Y-axis scales, Ljungqvist’s chart and my nine proxy chart agree.

Back to the question: what happened to the blade? Marcott gives us a hint of how it wound up in his reconstruction. Remember “Mike’s Nature Trick?” Marcott has a trick of his own. Take a look at this:


The above was excerpted from Marcott’s study. Note that he’s calling attention to the break in the Y-axis at 25. You can see it as a white vertical line in the lower left legend. Notice also that Marcott is attributing the break to the Mann et al. dataset.

This new 73 proxy study has alarmists convinced that it is an independent verification and vindication of Mann’s hockey stick. It isn’t. The hockey stick blade at the end of the reconstruction results from an adjustment of the proxy data to agree with Mann’s treemometer study. That, or it is an outright splice of Mann’s data directly.

Allow me to leave you with a parting image I’m particularly fond of. I think it tells the true story of catastrophic global warming.



Update: In my original run of the nine proxies, I wasn’t concerned with a number of issues, mostly because I was more interested in finding a hockey stick signal. In my later post using 24 proxies here, I used the 1,500 – 6,000 BP period for calculating the weighting of each proxy. I then compared my results to the Ljungqvist F.C. 2010 30 proxy reconstruction and validated that weighting the proxies over that time period produced results more comparable to Ljungqvist’s. As such, I applied the same weighting to my original nine proxy analysis.

I also centered the means of each proxy, not that it mattered for this analysis. The graph of the nine proxies was updated to reflect these changes and improvements. While none of this changes the results of my initial analysis, the updated graph presents a higher quality visualization of the analysis.



106 Responses to The Hockey Stick Resurrected By Marcott et al. 2013

  1. Latitude says:

    damn… go boy!
    thanks Hank!! that took a while…but well worth it!

  2. Me says:

    How much did they massage the data to come up with that piece of work, 😆
    So who are they trying to convince, themselves or us with that?
    Me guesses they want to keep their consensus bullshit together because it’s falling apart so now they pimped their shite a little to appear on topic. 😆

    • HankH says:

      It’s hard to tell intent from raw data. But the data talks a big story of its own. In this case it wasn’t talking about hockey sticks.

      • Me says:

        No, they learned their lesson from the last time but the song remains the same.

        • HankH says:

          True that! In my field of research, even the slightest hint that I’ve massaged the data in any way is reason to have the editors refuse to publish it. If there is any issue with the data, you explain it, not adjust it. I don’t understand this need for alarmist scientists to continue singing the hockey stick song when it has fallen to the bottom of the charts (no pun intended).

  3. DaveG says:

    Hang on a second, what’s the blade? It’s there lurking in the mind of an evil maladjusted warmist. Mike’s Nature Trick and now Marcott’s Nature Trick = same difference, just cooked at a slightly different temperature. A down-ticking time bomb and still a crock of S##t!

    • suyts says:

      LOL, yeh, they are tricksters!

    • eqibno says:

      That would be Marcott’s “Science” trick, to be consistent with the source of publication.

      • HankH says:

        Hello eqibno, I’ve always been leaning on the side of calibration of the late proxies to a higher frequency dataset of some sort, perhaps Mann’s reconstructions. Although I mention the possibility of direct splicing, I’m not personally invested in that notion at all. Another possibility that I think might be more compelling, in light of newer revelations coming out, is that Marcott may have used the same principal component analysis (PCA) as did Mann – the PCA that produced a HS no matter what was fed into it. Alas, I can only guess. I believe Marcott will need to explain how it wound up there when so many people looking at the raw proxy data in so many ways are all concluding the HS is not in the raw data.

        • HankH says:

          Minor mod… I wrote “I’m not personally invested in that notion at all.” Actually I’m not personally invested in it much. I would be very surprised to learn that’s what they did but at this time, I can’t rule out anything because I don’t know what they did.

        • eqibno says:

          This is the same gang (that couldn’t graph straight?) that created the paper that “showed” that CO2 preceded the temperature rise after the last glacial period…. One can only wonder what legerdemain they resorted to on that one.

        • DirkH says:

          Ah Blech that link doesn’t work. It should have pointed to this comment:
          ” “TerryS
          Posted Mar 17, 2013 at 4:30 AM | Permalink | Reply | Paste Link
          No links because my comments with links seem to disappear so I’ll try adding them in a reply.
          An article was published in Nature in April 2012 called “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation”. The authors where Shakun, Clark, Marcott, Mix and others.
          This used Calab 6.0.1 with IntCal04 to redate the proxies, but the supplimentary information also contains re-dating information with IntCal09.””

  4. DaveG says:

    I should proof read my comments before I send them. Great job Hank, it’s part of my must-keep file.

    • HankH says:

      You’re welcome Dave. I wasn’t laughing at you. I was laughing with you. I think it folly for these people to be adjusting anything, particularly closer in time where the precision should be getting better, not worse.

  5. tckev says:

    Excellent work guys. I wonder how long it will take to get this message out? As the public are not that interested (in AGW/climate alarmism) it may well be up to the likes of you and me to make sure that the politicos get the message.

    • suyts says:

      That’s why we’re here! This work was all Hank. I’m just a portal! 🙂

    • HankH says:

      I think the usual suspects are fully aware of how Mann’s data got into this study. His dendro studies are a proxy reconstruction. So I think it would come as both no surprise and no concern to them that it’s there. The problem is they’re representing this study as independent verification of Mann’s work when it isn’t. I started out interested in graphing the data because I got my hands on the database and I’m just curious. When I couldn’t get the same results using raw data, I dug deeper.

  6. Alex C says:

    What do you mean by “year 25”, where the splice is? That’s a count of the number of proxies, not the year number. It seems to be a space-saving measure for that graph.

    FWIW in the supplementary material it is clear that the warming tick at the end is not the warming from the Mann et al paper – the Mann et al paper extends past 1950, whereas that blue ‘blade’ (if you will) goes until about 1950. See for instance Figure S4 in the supplement.

    • HankH says:

      Alex, if you look at the supplement, it is mentioned that the proxy data, unless otherwise stated, use 1950 as the “Present.” Download the proxy data and you’ll see that the nine datasets I selected go either to zero (1950) or into the negative numbers (past 1950). I stated that. If you refer to Figure S10, you’ll see verification that the proxies dropped off to nine and below at the end of the reconstruction.

      • Alex C says:

        I haven’t looked through all of the proxy data but it’s not relevant what the ones that extend to or past 1950 say, since the stacked composite goes to 1940. Every other number later than that is NaN. The blade at the end is from the average of the datasets at the 1940 point.

        • HankH says:

          Look at the datasets named in the nine proxy series ledger. They extend to 1950 and beyond. My plots go as far as the data takes me. I’m using the same data in the Marcott database. I would expect that I should see a hockey stick if it is there. It’s not there. Admittedly, I’m not using Mann’s dataset because it wasn’t provided in the study database. Soooo, if I’m not using it and I can’t find a hockey stick in the datasets provided, what does that suggest?

        • Alex C says:

          Did you zero all of the proxy datasets over the same time period that the paper did?

          Either way, it doesn’t suggest that they appended Mann’s data. Because Mann’s data doesn’t show such a blade.

          I’m working with the proxy stuff myself now to see if I can recreate their results. I’ll share in a while, though it’s getting late where I am and my progress will be slowed somewhat.

        • HankH says:

          Actually, I zeroed each dataset’s temperatures against its own reference mean. I didn’t want to zero them to the same reference for several reasons:

          1) That would throw off the zero bias of one or more of the data series.
          2) It wouldn’t have changed their variance, only their mean. Marcott discusses where he intentionally moved the mean on some datasets. From my perspective, that’s perfectly okay if all you’re interested in is the variance. So it doesn’t matter if I zeroed on one reference and accepted a built in zero bias or let them zero to their own reference. The variance would still be the same. The study is about variance, not absolute values around the mean.
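A quick toy example (made-up numbers, just to show the point) of why the choice of reference window shifts the mean but leaves the variance untouched:

```python
import numpy as np

# A made-up absolute-temperature series, purely for illustration
series = 12.0 + 1.5 * np.sin(np.linspace(0.0, 20.0, 500))

anoms_a = series - series[100:200].mean()   # zeroed on one reference window
anoms_b = series - series[300:400].mean()   # zeroed on a different window

# Different reference, different mean -- identical variance
assert abs(anoms_a.mean() - anoms_b.mean()) > 0.1
assert np.isclose(anoms_a.var(), anoms_b.var())
```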

          I’m pleased that you’re running the data yourself. Validation is always a good thing. Would you mind sharing your results when finished? If you would like, I would be pleased to provide you with my matrices in Excel format so you can validate them against the raw proxies.

        • Alex C says:

          Hi Hank and all,

          Just wanted to give a check-in, I’m still working but unfortunately my days are not quite as free as my weekends. I’m slowly getting there, plotting all of the proxies that I think are relevant to the graph I want, but a couple points that I just wanted to bring up before I even get my own results:

          – I found 18 of the proxies extended at least to ~1950;
          – Since we want to replicate Marcott’s results, we need to sample at 20 year intervals, starting with +10 years BP and then +30, +50, so on; there are more proxies that go up to this point. If we want to see the blade then we need at least a few data points older than 1940, so I’m plotting from 250 YBP to 0.
          – Since we’re adding in proxies that will not be contributing to later data points (for instance, if it ends in 1920, there will be no 1940 data point), then we must follow the zeroing method as in the paper, because only if each point in time has a representative datapoint does the choice of zero not matter toward the general behavior of the graph (as you said, it only affects the mean, not the variance – but not if the proxy count changes over the course of the time interval we’re looking at).

          I’ll try to get something later today, I am busy starting from about 6:00 EST to several hours later.
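To make the second and third points concrete, here’s a small Python sketch of the 20-year sample grid and how the proxy count changes across it. The coverage spans below are invented for illustration, not taken from the Marcott database:

```python
import numpy as np

def marcott_grid(oldest_bp=250):
    """20-year sample times starting at 10 yr BP: +10, +30, +50, ..."""
    return np.arange(10, oldest_bp + 1, 20)

def proxy_counts(grid, proxy_spans):
    """Count proxies covering each sample time, given (youngest, oldest) BP spans."""
    return [sum(lo <= t <= hi for lo, hi in proxy_spans) for t in grid]

grid = marcott_grid(250)                        # [10, 30, ..., 250]
spans = [(-10, 8000), (30, 6500), (120, 9000)]  # hypothetical proxy coverage
counts = proxy_counts(grid, spans)              # count drops toward the young end
```

The drop in the count toward the young end of the grid is exactly why the zeroing method starts to matter when proxies run out at different times.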

        • suyts says:

          No worries Alex. We all understand time constraints. I’m very interested in comparing notes!

        • HankH says:

          Hello Alex,

          Thanks for checking in. Yes I noted there were more proxies leading up to ~1950. However, being aware of the effect of removing a proxy, I selected proxies that went past 1950 as I was running the data past 1950. Even if you zero all of the proxies to the same baseline, as the number of proxies falls below say n < 25 the drop-off can still introduce an exaggerated change of the running mean, particularly if it was an outlier.

          As an example: If you're planning on using TN057-17, take a close look at it. That proxy swings wildly about the mean, sometimes as much as 4C to 5C per sample point throughout its entire measurement period. It's sensitive to something but it isn't temperature. I don't know why that proxy is even in the study. It's on one of its obvious wild swings when it hits 1950. If you're using n < 25 proxies at that point, you may be making much of a noisy proxy.

          Vostok seems to want to play dead after 149 BP despite the series going to 1950. I would be concerned about using it past 149 BP because it is clear there was no data. Someone thought it good to just make it zero, and zero is a real anomaly from the mean.

          The other issue that has me puzzled is the temporal resolution towards the end of the study. The uptick in the author's output data was of decadal resolution at a time proxies were falling off to a low n value. That concerns the statistician in me.

          Regarding using the author's 20 year matrix, a 10 year matrix avoids alignment issues to a large degree. That's partly why I used it. A 2 x sample rate allows for only a five year maximum and 2.5 year average disagreement between their sample points and mine. I don't think misalignment that small between matrices is going to mean much when the proxies don't have decadal resolution.

          Like you, I have a day job. Now that the weekend is over, I can only peck at the data in short bits. I look forward to seeing your analysis.

    • suyts says:

      Alex, welcome and thank you, I missed that in proofing. You’re right, that’s a count, not years.

      As to the blade at 1950, that would be most problematic for the authors in that the thermometer record doesn’t reflect such a jump at that time. If that’s the case then there should be little or no difference between years 50 and zero on the graph……

      • Alex C says:

        Thanks suyts for the welcome;

        More specifically (and I didn’t specify this earlier so maybe we can start anew a bit), the blade is at 1940, which is slightly before the leveling during the middle of the twentieth century.

        Also, the authors state in their abstract (I don’t have the paper, which is somewhat a travesty because I allowed my Science subscription to expire just a month or two ago – sadface) that the present temperatures are higher than 75% of the Holocene; I can’t tell which of their reconstructions they’re using for that, be it the Standard, 5×5 grid, so on – I’m getting ~35% and 55% for the Standard and 5×5 grid (using the lower bound value for the present datapoint (1940) defined by the lower 1 sigma bound, and then finding where that’s higher than the mean of the rest of the reconstruction).

        So maybe they’re using one of the other lat/long box methods for that calculation? Or do they make that assertion using the actual instrumental dataset? And again I ask those because I don’t have the paper, I don’t know.

        But, it doesn’t seem either way that they appended any data from Mann onto their reconstruction.
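For what it’s worth, the comparison I described can be sketched like this (toy numbers, and I’m guessing at the exact procedure the authors used):

```python
import numpy as np

def frac_cooler_than_present(recon_mean, recon_sigma, present_idx):
    """Fraction of the reconstruction lying below the present point's lower 1-sigma bound."""
    lower = recon_mean[present_idx] - recon_sigma[present_idx]
    rest = np.delete(np.asarray(recon_mean, dtype=float), present_idx)
    return float(np.mean(rest < lower))
```

Feeding in the Standard or 5×5 stack with its published uncertainties would give the percentages I quoted above.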

        • HankH says:

          Alex, thanks for your comments. If you’re using the 5×5 grid, the jackcuts, etc… in their output data you’re absolutely right. You’re going to find significantly more variance there. The two graphs I produced from the output data were using the ungridded global standard stack. These are the two graphs that show the uptick.

          The sets like the jackcuts are validation tests, from what I understand. Anyway, the 1950 issue is a rather tangled affair in their study. I refer you to page 15 where they’re talking about temperature data from 1961 to 1990, which has representation in the “MD01-2421,KR02-06GC,KR02-06MC,” “Lake 850,” “Lake Nujulla,” and “Homestead Scarp” proxies. Why use a mean that falls outside of the data timeline (not calculated from the data)?

          Like you, I started off convinced that their study left off at 1950 but discussion about later dates drew some doubts. So I decided to run the data as far as it would take me. Now, if I’m wrong then I’ve taken the proxy data out further than them. In such case I would expect a greater likelihood of seeing the upward tick. I didn’t so I’m asking why not?

        • suyts says:

          Alas, Alex, I’m at the same disadvantage as you in that I don’t have access to the paper either. Usually one can get pretty close just using the abstract and the SI, but, in some instances, we won’t know until we get our hands on the actual paper.

          For the record, I didn’t run the numbers, Hank is the author of the post. I’m sure he’ll get back with us regarding the points you’ve brought up.

          It’s a maddening thing to have one graph put out worldwide without being privy to what it actually says or means.

          In the meantime I might dig into the numbers a bit.

        • suyts says:

          Lol, okay disregard. Hank snuck a comment in while I was writing mine.

    • HankH says:

      Ah yes, I did say year 25. Should have proofed it better. Irrespective, it doesn’t change the data or outcome.

  7. suyts says:

    You have to give them credit. Few people would think of splicing spliced data and call it an original reconstruction!

  8. Sparks says:

    Thank you Hank, that is excellent.

  9. Sparks says:

    Is there data available for this I can look at, in xls or flat-file?

  10. miked1947 says:

    Good job Hank! That is about what I expected to see. None of the dendro studies used by the IPCC were independent. This was just another attempt to defend Da Mann!

  11. Latitude says:

    Current global temperatures of the past decade have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history.
    Current global temperatures of the past decade have not yet exceeded peak interglacial values and are cooler than during ~25% of the Holocene temperature history.

    …don’t look like squat when you write the same exact thing…the right way

    If I gave someone a pile of money to hold…and they only gave me back 75%….I’d be pissed!
    25% less is a lot!

    • HankH says:

      In other words, temperatures today are cooler than 2,000 to 3,000 annual records in the past (depending on whether you’re going with their 70% or 80% numbers). It kind of puts the whole concept of “records” into perspective.

    • suyts says:

      More than that, it means we’re well within the normal variations of the earth’s climate.

    • Latitude says:

      uno mas tiempo….

      Current global temperatures of the past decade did not exceed peak interglacial values by 25%, and are presently falling…….

      “Current global temperatures of the past decade have not yet exceeded” was the past decade, there’s no “have not yet”!…past means it’s over, stopped, done with…….unless they plan on re-writing the past again

  12. Bruce of Newcastle says:

    Looks like they’ve found a great new way to measure pCO2 to me (/sarc). Via WUWT.

    • suyts says:

      Thanks Bruce! How did I miss that one?

    • HankH says:

      Thanks Bruce! I’ve always understood that CO2 is not as well mixed in the atmosphere as many believe. They have to figure out how to average the clumps and not count so much on individual stations as representing the global picture.

  13. copernicus34 says:

    i find it hilarious that every alarmist bit of crap can be immediately debunked by those who simply run the numbers. well done, this should be run over in Anthony’s blog to receive maximum viewership (no offense suyt’s 🙂 )

    • suyts says:

      None taken! As a rationalization for much smaller numbers I consider the readers and commenters here as an elite group, so naturally we’d have fewer among us! ……. well, it works in my mind! 😀

      Anthony is always more than welcome to pick up the posts here.

      • copernicus34 says:

        Definitely some smart readership here, I’m happy all you eggheads are here to do the heavy lifting so people like me can just read LOL

        • HankH says:

          We all have some perspective that adds to the wealth of knowledge. We don’t so much do heavy lifting – more trying to figure stuff out. Feel free to stop in and point us in the right direction. 😉

  14. Mike Mellor says:

    More cargo cult science. When CO2 gets up to 2000ppm we might see Marcott’s predicted +8K anomaly but that’s not going to happen this century.

    What will happen this century is clean nuclear energy either thorium or fusion. America will attempt to cool the planet by pumping liquid sulfur dioxide into the stratosphere and have to back down when China says that it is quite happy with a couple of degrees warming, thank you.

    • HankH says:

      Thanks for commenting, Mike. I’m hoping we can get the detractors of nuclear out of the way. I’m a strong proponent of nuclear power. One of the defenses of wind and solar is “we just need to solve the problems to make it viable.” In my opinion, we should apply the same “can do” attitude towards nuclear.

  15. tckev says:

    ” To coin Frankenstein’s famous words when his monster was brought back to life – “it’s alive!””
    or just before it –

  16. Jim Masterson says:

    I needed to turn apples and oranges into all apples.

    I give you Kudos for doing this. I hate it when climatologists compare apples and oranges.

    Those proxies that were already using anomaly measurements were simply normalized without conversion.

    Can we normalize anomalies without converting back to the original temps? I think this might be an invalid step. I hate anomalies–they hide a multitude of sins.

    Running statistical tests in SPSS I confirmed that the difference in variance between the raw and the regularized datasets was insignificant.

    Does this step invalidate my previous concern?

    I always enjoy deconstruction pieces–good work Hank.


    • HankH says:

      Hello Jim! The term anomaly has perhaps a different meaning in my world of statistics. An anomaly is simply any sample point that isn’t smack dead the same value as the mean. Others would argue that it represents some portion of the standard error of the mean or 1-R^2 in a regression model, and so forth. They’d be right on all counts. When things move off of where we want them it’s an anomaly. If it gets really distracting it’s an outlier. LOL!

      Yes, we can normalize temperature anomalies (ugh) without converting back to absolute values. Normalizing simply means to move the Y zero crossing (intercept) to some agreed upon offset. Now, if I were concerned with the means, I’d be screaming bloody murder. But if I’m only concerned with the variance in the sample, I’d say “why not – it doesn’t matter.”

      Climate science is full of anomalies. I tend to avoid the term in my work because it doesn’t convey any real meaning and requires that I explain why I used it. 😉

      • Jim Masterson says:

        HankH says:
        March 11, 2013 at 2:23 am

        Yes, we can normalize temperature anomalies (ugh) without converting back to absolute values. Normalizing simply means to move the Y zero crossing (intercept) to some agreed upon offset.

        Normalizing means different things in different contexts, but I see what you’re doing. I guess I’ll let you get away with it this time. 😉


        • HankH says:

          Thanks. Coming from you, I’ll take that as the ultimate complement as I know your math prowess. 🙂

        • Jim Masterson says:

          HankH says:
          March 11, 2013 at 3:00 am

          . . . as I know your math prowess.

          I wasn’t fishing for a complement, but thanks. I think my math prowess and $6 might get me a mocha.


        • Jim Masterson says:

          Hmmm, complement-compliment. Are we playing with words? 😉


        • HankH says:

          I think in this case, they both apply. 😉

  17. Havis S says:

    Thanks Hank. A lot of work on your part. So peer review of this hockey stick research suggests the study is a dud, eh?

    • HankH says:

      I think the study is both interesting and uninteresting. Interesting because it pulls together so many proxies and reconstructs the entire Holocene Epoch. When Ljungqvist released his 30 proxy reconstruction and made his data available, I jumped on it like a buzzard on a dead rabbit and played with it because that’s what I enjoy doing. Uninteresting because I think the HS blade is an artifact that borrows from Mann’s reconstructions in some way (perhaps calibration of the near-term proxies). In such case, it’s just a rehash of the HS that’s been done so many times before.

  18. philjourdan says:

    Book Marked and saved. The questions and comments and responses to Alex are a great addition to the article as well.

  19. Pingback: Let’s play hockey – again | Climate Etc.

  20. Pingback: The Dagger In The Heart? Maybe….. A Remedial Explanation Of Marcott’s HS Blade ……. Mikey? What’s That About A Dagger? | suyts space

  21. Jeff Norman says:

    How did you go about averaging a data set with a 200 year granularity (Flarken Lake say) with a data set with a 50, 20, 10 or 1 year granularity? Not by interpolation I hope.

    There have got to be all kinds of questions about the individual proxy series. Flarken Lake, for instance, is in southern Sweden. The authors of this proxy series claim the lake was relatively undisturbed until recently due to acidification, but I have all kinds of thoughts about other anthropogenic impacts like logging, farming, etc. There are some nice pictures of olde windmills in the general vicinity.

  22. HankH says:

    How did you go about averaging a data set with a 200 year granularity (Flarken Lake say) with a data set with a 50, 20, 10 or 1 year granularity? Not by interpolation I hope.

    Hello Jeff, I’m not sure if you were directing the question to me or if it was rhetorical. You correctly note that interpolation of temporal data can’t work when each dataset has its own frequency of measurement and different sample times that don’t line up. What you have to do is build a matrix that does the aligning by statistically infilling the intermediate predicted values into each cell of the matrix. Once you have all of the data “regularized” in this way, you can average across same-date boundaries and get meaningful averages.
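As a rough sketch of that matrix idea, with plain linear interpolation standing in for the statistical infilling (a simplification of what I actually ran, and all the names are mine):

```python
import numpy as np

def stack_mean(proxies, step=10, stop=11280):
    """Align proxies of different granularity on one grid, then average per row.

    `proxies` is a list of (ages_bp, anomalies) pairs; linear interpolation
    stands in here for the regularized expectation maximization infilling.
    """
    grid = np.arange(0, stop + 1, step, dtype=float)
    rows = []
    for ages, vals in proxies:
        ages = np.asarray(ages, dtype=float)
        vals = np.asarray(vals, dtype=float)
        order = np.argsort(ages)                       # ascending ages for np.interp
        rows.append(np.interp(grid, ages[order], vals[order],
                              left=np.nan, right=np.nan))
    matrix = np.vstack(rows)                           # one row per proxy
    return grid, np.nanmean(matrix, axis=0)            # mean across proxies per date
```

A 200-year-granularity proxy and a 10-year one then contribute on the same date boundaries, which is what makes the average meaningful.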

    There are some issues in the proxies themselves that I’m sure the authors took into account. The one that has me baffled, however, is TN057-17. That proxy dataset is all over the place with wild excursions. I can’t figure out why anyone would use it, especially where it is used with a low number of proxies towards the end of the reconstruction range.

  23. Nick Stokes says:

    Here’s where I think the blade is coming from. The reference to the global CRU-EIV composite is to Mann’s 2008 PNAS paper, e.g. Fig. 3. The EIV is Mann’s series, and CRU is the instrumental series. So it brings in instrumental temperatures, and they are the source of the spike.

    The reason why it seems centred at about 1940-50 is that they use a centred 100-year moving average which stops at 2006, or maybe before.
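A toy illustration of that end effect (made-up data, just to show where a centered window must stop):

```python
import numpy as np

years = np.arange(1850, 2007)                 # annual values ending in 2006
vals = np.linspace(0.0, 1.0, years.size)      # made-up rising series

w = 101                                       # ~100-year centered window
smooth = np.convolve(vals, np.ones(w) / w, mode="valid")
centres = years[w // 2 : -(w // 2)]           # centres with full window coverage

# A centered 100-year mean of data ending in 2006 cannot reach past ~1956
assert centres[-1] == 1956
assert len(smooth) == len(centres)
```

So a feature in the smoothed series that looks centered on 1940-50 can still be driven by the most recent decades of the underlying data.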

  24. kim2ooo says:

    Reblogged this on Climate Ponderings and commented:

  25. Pingback: Monokultur » Vinkeljernet

  26. Pingback: More Fishing for Hockey Sticks in Marcott et al., 2013 | suyts space

  27. Pingback: The New Hockey Stick | New Zealand Climate Change

  28. Pingback: Met Office Only A Few Months Behind Suyts Space!!! | suyts space

  29. Pingback: 2013 in review | suyts space
