
Marlo Lewis, Ph.D., Senior Fellow in Energy and Environmental Policy, Competitive Enterprise Institute

Politico recently published an article by Benjamin Storrow, Chelsea Harvey, Scott Waldman, and Paula Friedrich titled "How a major DOE report hides the whole truth on climate change." The reporters' objective is obvious and their strategy simple. They aim to discredit the Environmental Protection Agency's proposal to repeal the December 2009 Greenhouse Gas Endangerment Finding by discrediting a Department of Energy (DOE) draft report cited in the repeal proposal's climate science discussion.

From a statutory perspective, that strategy is not a winner. The EPA's proposal to repeal the Endangerment Finding (plus motor vehicle emission standards adopted by the agency in April 2024) relies chiefly on legal arguments that do not presuppose specific climate change assessments. However, the Politico article could sway the court of public opinion, which in turn could influence future litigation.

Such influence would be undeserved. The article ignores foundational biases compromising the scientific basis of the 2009 Endangerment Finding. Further, its criticisms of the DOE report repeatedly misfire or backfire, and none comes close to refuting any of the report's conclusions.

Background

The 2009 Endangerment Finding purported to determine that carbon dioxide (CO2) and other greenhouse gas (GHG) emissions from new motor vehicles "cause, or contribute to, air pollution which may reasonably be anticipated to endanger public health or welfare." The Finding was the impetus for the Obama administration EPA's adoption, in 2010, of GHG emission standards for model year 2012-2016 motor vehicles. To one degree or another, the Finding undergirds all subsequent climate policy regulations proposed or promulgated by the Obama and Biden administrations.

DOE's July 2025 draft report, A Critical Review of Impacts of Greenhouse Gas Emissions on the U.S. Climate, does not opine on the Endangerment Finding, which is a legal document. However, the report's non-alarming assessment of climate change risks is heresy to legions of progressive policymakers, activists, academics, and journalists.

The Politico reporters accuse the DOE report's authors (John Christy, Judith Curry, Steve Koonin, Ross McKitrick, and Roy Spencer) of cherry picking, omitting context, relying on debunked or outmoded studies, and citing non-peer-reviewed analyses. They also contend the report is "overtly political" and therefore not a "scientific exercise." As shown below, those allegations are false, misleading, or unsubstantiated.

This essay has two main parts. Part 1 summarizes the disqualifying cherry picks, omissions, and outmoded opinions fundamental to the "vast scientific consensus" the Politico reporters invoke. It also rebuts their critique of the DOE report's discussion of climate models. Part 2 rebuts other objections they raise about the DOE report.

Part 1: Realist perspective (Part 2 tomorrow)

Mainstream climate research has a deep scientific integrity problem due to its reliance on a triply biased methodology. For decades, the usual practice has been to run overheated models with inflated emission scenarios and to ignore or depreciate humanity's remarkable capacity for adaptation. That approach is wired to exaggerate the physical impacts of GHG emissions and the harmfulness of such impacts.
All three biases compromise the major assessment reports informing the 2009 Endangerment Finding as well as subsequent assessments touted as updating and strengthening it. However, studies that exposed those biases mostly examined the later assessments. Accordingly, the following sections on unrealistic models and emission scenarios present the information in somewhat reverse chronological order.

Warm-biased models

To project the physical impacts of climate change, the Intergovernmental Panel on Climate Change (IPCC), US Global Change Research Program (USGCRP), and other "mainstream" assessments use climate change projection models "forced" with various GHG emission scenarios. The IPCC works with climate modeling groups around the world to develop and evaluate their products. This exercise is called the Coupled Model Intercomparison Project (CMIP). There have been six CMIPs, the first in 1996. The CMIP3 model ensemble was used in the IPCC's 2007 Fourth Assessment Report (AR4); the CMIP5 ensemble in the IPCC's 2013 Fifth Assessment Report (AR5) and USGCRP's 2017 Fourth National Climate Assessment (NCA4); and the CMIP6 ensemble in the IPCC's 2021 Sixth Assessment Report (AR6) and USGCRP's 2023 Fifth National Climate Assessment (NCA5).

CMIP models make projections about the evolution of global annual average temperatures out to the year 2100 and beyond. There is no way to directly test the accuracy of those projections. However, the models can hindcast global temperature changes over recent decades, and those hindcasts can be compared to observations. That is what atmospheric scientist John Christy and colleagues have done in a series of analyses since the early 2000s.

The chart below compares CMIP5 warming projections in the tropical bulk atmosphere (mid-troposphere) to observations in three empirical datasets: satellites, balloons, and re-analyses. On average, modeled warming exceeds observed warming by more than a factor of two during 1979-2016.

Source: John Christy (2017). Solid red line: average of all the CMIP5 climate models; thin colored lines: individual CMIP5 models; solid markers: weather balloon, satellite, and reanalysis data for the tropical troposphere.

The next chart shows that only one CMIP5 model, the Russian INM-CM4, accurately tracks temperature change through the depth of the tropical troposphere.

Source: Updated from Christy and McNider (2017). Tropical atmosphere temperature trends (1979-2018) from 25 CMIP5 models compared to four radiosonde (weather balloon) datasets.

The superior accuracy of INM-CM4 likely has something to do with its equilibrium climate sensitivity (ECS) estimate, which is the lowest of any CMIP5 model. ECS is customarily defined as the amount of warming that occurs after the climate system fully adjusts to a doubling of carbon dioxide-equivalent greenhouse gas concentration. INM-CM4 has an ECS of 1.8°C. In contrast, GFDL-CM3, which has an ECS of 4.0°C (or higher), projects a warming trend that is literally off the chart.

Readers may wonder why the comparisons focus on the tropical troposphere. After all, nobody lives there! As DOE report authors McKitrick and Christy explain in a peer-reviewed study published in Earth and Space Science, the tropical mid-troposphere is uniquely suited for testing the validity of climate models.
That is because:

(1) Nearly all models predict strong positive feedbacks (accelerated warming) in the tropical mid-troposphere;
(2) the region is well monitored by satellites and weather balloons;
(3) the mid-troposphere is too distant from the surface to be influenced by land use changes; and
(4) the models were not previously "tuned" to match the historical climatology in that region, hence are genuinely independent of the data used to test them.

That last point is the most critical. Modelers try to make their models realistic by adjusting climate parameters (such as climate sensitivity) until hindcasts match historical temperature changes. Typically, 20th century land and sea surface temperatures are used to "train" the models. However, hindcasting data already used to tune a model is like peeking at the answers before taking a quiz. The only real way to test a climate model's predictive skill (other than waiting 30+ years to see how things evolve) is to compare the model's hindcasts to data that are "out of sample," that is, data not used to adjust model parameters.

That is Christy's procedure. The models are not trained to reproduce tropospheric data. The results speak for themselves. The models are not realistic. They run too hot.

One might suppose the new and improved CMIP6 models used in AR6 would be more accurate. Not so; they are worse. In the tropical troposphere, all the models hindcast faster warming than the observational average drawn from satellites, weather balloons, and re-analyses. Moreover, the CMIP6 models overshoot observed warming throughout the global troposphere, with projections rising about 2.3 times faster than observations.

Source: McKitrick and Christy (2025), Draft DOE climate science report, p. 35.

A reasonable explanation for the persistent mismatch between models and observations is that the models overestimate climate sensitivity. The larger (global) mismatch in the CMIP6 ensemble is consistent with that explanation. A 2019 analysis by Zeke Hausfather found that 14 of 40 CMIP6 models have higher ECS estimates than the warmest CMIP5 model.

Source: Hausfather (2019). Yellow bars show CMIP6 models with higher sensitivity than any CMIP5 model. Blue bars show CMIP6 model sensitivities within the CMIP5 range.

But what about the 2009 Endangerment Finding: did it also have a "hot model" problem? Yes, as the next chart shows. The IPCC's 2007 AR4 was a key scientific basis for the Endangerment Finding. The most critical input to AR4 was the CMIP3 model ensemble. In the 2000s, it was still difficult to obtain tropospheric temperature projections from climate modelers. Christy, however, was able to obtain surface temperature projections from the models. He then compared those to the UK Climatic Research Unit surface record (HadCRUT) and satellite data adjusted to match surface temperatures. In the chart below, temperature trends start in the year indicated on the X-axis and end in 2009. The observations (squares) all fall well below the AR4 model average (diamonds), usually about half the magnitude of the modeled trend.

Source: John Christy
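The mechanics behind these comparisons are straightforward. The sketch below is a minimal illustration with synthetic numbers, not Christy's data or code: it fits least-squares linear trends to a made-up model-ensemble mean and a made-up observational series, then reports the model-to-observation trend ratio, the same kind of statistic behind the "factor of two" and "2.3 times faster" figures quoted above.

```python
# Illustrative only: synthetic series standing in for a model-ensemble
# mean and an observational record. Not Christy's actual data or code.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2017)  # the 1979-2016 window discussed above

# Hypothetical annual temperature anomalies (deg C). The assumed trends
# (0.30 vs. 0.13 C/decade) are made-up values chosen to mimic the
# roughly two-to-one model/observation mismatch described above.
model_mean = 0.030 * (years - years[0]) + rng.normal(0.0, 0.08, years.size)
observed   = 0.013 * (years - years[0]) + rng.normal(0.0, 0.08, years.size)

def decadal_trend(t, y):
    """Least-squares linear trend, converted to deg C per decade."""
    slope_per_year = np.polyfit(t, y, 1)[0]
    return 10.0 * slope_per_year

m = decadal_trend(years, model_mean)
o = decadal_trend(years, observed)
print(f"model trend:    {m:+.2f} C/decade")
print(f"observed trend: {o:+.2f} C/decade")
print(f"model/observed ratio: {m / o:.1f}")
```

With real data, the observational series would be a satellite, radiosonde, or reanalysis record and the model series a CMIP ensemble mean; the trend-ratio logic is unchanged.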
Another question arises: Did the Endangerment Finding Technical Support Document (TSD) acknowledge the hot model problem? No, but the TSD makes a case for the models' realism. In a nutshell, the models are realistic because they can reproduce 20th century global-scale changes in surface temperature, but only if the models are run with both natural variability and anthropogenic GHG emissions.

Source: EPA 2009 TSD, IPCC AR4

As the chart shows, AR4 is the source of the TSD's assumption that models are realistic when run with both natural and anthropogenic "forcings" (perturbations that change the balance between incoming solar radiation and outgoing infrared radiation). The reasoning is circular, as it assumes all significant natural forcings that warm the planet are known and estimated correctly. If, instead, the models omit or underestimate such forcings, they might not track surface temperature trends unless forced with extra GHGs. The assumption of adequately known natural variability is problematic given the ongoing debate over the causes of early 20th century warming and evidence of a widespread Medieval Warm Period. Moreover, as noted, because climate impact models are trained to simulate 20th century land and ocean temperatures, a model's ability to reproduce "in sample" data is no assurance of predictive skill.

Christy may have been the first to challenge AR4's claim that model projections match observations when the models include both natural and anthropogenic forcings. However, he had to wait until the IPCC posted a hard-to-decipher chart in an online supplement to AR5 (Figure 10.8). When enlarged and clarified, the AR5 chart reveals that model projections and observations almost entirely diverge unless the models are run with natural variability alone.

Source: John Christy. Annotated version of IPCC AR5 Figure 10.8(b): vertical warming pattern for the tropics (20S-20N). Horizontal axis: °C/decade. Draft DOE climate science report, p. 37.

According to the Politico reporters, the DOE report's "assertions about the models' track records are false." Citing Hausfather et al. (2019), they contend that 1970s climate models "accurately predicted current global warming." However, that is a red herring, because the early 1970s models did not inform either the Endangerment Finding or subsequent IPCC and USGCRP assessments.

As McKitrick pointed out on Judith Curry's blog, Supporting Information published by Hausfather et al. (2019) reports the ECS estimates of eight early climate models. Those models and their ECS values are:

Manabe and Wetherald (1967) / Manabe (1970) / Mitchell (1970): 2.3°C
Benson (1970) / Sawyer (1972) / Broecker (1975): 2.4°C
Rasool and Schneider (1971): 0.8°C
Nordhaus (1977): 2.0°C

Each model's ECS is lower than 3°C, the IPCC's "best estimate" in AR4 and AR6 and "mid-range estimate" in AR5. The average ECS of the eight models is 2.1°C. Even if we exclude Rasool and Schneider as an outlier, the average ECS is 2.3°C.
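Those averages are simple arithmetic over the listed values, with the grouped entries expanded to one value per model. A quick check:

```python
# ECS values (deg C) for the eight early models, as listed above from
# the Supporting Information of Hausfather et al. (2019).
ecs = {
    "Manabe and Wetherald (1967)": 2.3,
    "Manabe (1970)": 2.3,
    "Mitchell (1970)": 2.3,
    "Benson (1970)": 2.4,
    "Sawyer (1972)": 2.4,
    "Broecker (1975)": 2.4,
    "Rasool and Schneider (1971)": 0.8,
    "Nordhaus (1977)": 2.0,
}

values = list(ecs.values())
print(f"average ECS, all eight models: {sum(values) / len(values):.1f} C")   # 2.1

# Exclude the Rasool and Schneider outlier and re-average.
trimmed = [v for name, v in ecs.items() if "Rasool" not in name]
print(f"average ECS, outlier excluded: {sum(trimmed) / len(trimmed):.1f} C")  # 2.3
```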
So, the apparent accuracy of early climate models in projecting current surface warming is not evidence the CMIP models are realistic. Rather, it is additional evidence the CMIP models are tuned too hot. Indeed, as the DOE report points out, current low-ECS models do a good job of replicating the warming rate of the surface temperatures on which they were trained. However, as explained above, comparing models to surface observations is not an independent scientific test. Using the deep atmosphere, where the joules of energy from rising GHG concentration are supposed to accumulate, is a far better metric. Even low-ECS CMIP models don't perform well there. The Politico reporters say nothing about this fundamental problem.

Inflated Emission Scenarios

Although the Shale Revolution began in 2007, many emission scenarios assumed until quite recently that learning-by-extraction and economies of scale would make coal the increasingly affordable backstop energy for the global economy. For example, some analysts assumed oil and gas would become increasingly costly to extract, creating sizeable markets for coal-to-liquid fuels and coal gasification.

The IPCC and USGCRP have been the main legitimizers of the two most influential scenarios used in recent climate impact assessments: RCP8.5 and SSP5-8.5. RCP8.5 is the high-end emissions scenario in AR5, NCA4, and the IPCC's 2018 Special Report on Global Warming of 1.5°C. SSP5-8.5 is the high-end emissions scenario in AR6 and NCA5.

For readers unfamiliar with those abbreviations, "RCP" stands for representative concentration pathway. An RCP plots a projected change in global annual GHG emissions and concentrations from 2000 to 2100 and beyond. Each RCP is numbered for the quantity of radiative forcing it adds to the pre-industrial climate by 2100. Radiative forcing is measured in watts per square meter. Thus, in RCP8.5, radiative forcing increases by 8.5 W/m2. "SSP" stands for shared socioeconomic pathway. An SSP is a socioeconomic development scenario that results in much the same forcing as a corresponding RCP. Thus, in AR6 and NCA5, SSP5-8.5 is the development scenario that results in roughly the same global temperature increase as RCP8.5.

Although neither RCP8.5 nor SSP5-8.5 was designed to be the baseline or business-as-usual scenario, both have been widely misrepresented, including by the IPCC and USGCRP, as official forecasts of where 21st century emissions are headed absent strong measures to transform the US and other major economies. RCP8.5 tacitly assumes global coal consumption increases almost tenfold during 2000-2100.

Source: Riahi et al. (2011).

RCP8.5 is implausible, and not only because natural gas is increasingly abundant and affordable and governments have adopted or pledged numerous climate change mitigation policies. Coal producer prices more than doubled during 2000-2010 and are now about 3.5 times higher than in 2000.

Source: Bureau of Labor Statistics via St. Louis Fed.

In the International Energy Agency's (IEA) "current policies" and "stated policies" scenarios, global emissions at mid-century are projected to be only about half the quantities in RCP8.5 and SSP5-8.5. As the chart below shows, the range of emissions projected by the IEA baseline scenarios "lie almost entirely outside" the IPCC "baseline" ranges.

Source: Roger Pielke, Jr. and Justin Ritchie (2021).

In 2022, Resources for the Future (RFF) published updated baseline emission scenarios, informed by IEA and other market forecasts. In the RFF's baseline projection, global CO2 emissions are about half those projected in SSP5-8.5 in 2050 and less than one-fifth those projected in 2100. The EPA adopted the RFF baselines as the best available for its November 2023 report on the social cost of greenhouse gases.

Source: Kevin Rennert et al. (2022). The solid black line is the RFF's baseline projection. The dotted green line is SSP5-8.5. The dotted blue line is SSP2-4.5.

These shifts in baseline emission projections have significant implications for endangerment assessments. The new RFF baseline closely aligns with SSP2-4.5, which has the same radiative forcing as RCP4.5.
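To put those W/m2 labels in more familiar units: a widely used simplified expression for CO2 forcing is ΔF ≈ 5.35 × ln(C/C0) (Myhre et al. 1998). The sketch below inverts it to estimate the CO2-equivalent concentration each pathway label implies by 2100. This is an illustration only; the scenario teams compute forcing from a full basket of gases and aerosols, so treating the label as pure CO2-equivalent is a simplifying assumption.

```python
# Rough illustration: invert the simplified CO2 forcing formula
#   dF = 5.35 * ln(C / C0)   (Myhre et al. 1998)
# to estimate the CO2-equivalent concentration implied by each
# pathway's 2100 forcing label. Treating the label as pure
# CO2-equivalent is a simplifying assumption.
import math

C0 = 278.0  # approximate pre-industrial CO2 concentration, ppm

for label, forcing in [("RCP8.5 / SSP5-8.5", 8.5),
                       ("RCP4.5 / SSP2-4.5", 4.5),
                       ("SSP2-3.4", 3.4)]:
    conc = C0 * math.exp(forcing / 5.35)
    print(f"{label}: {forcing} W/m2 -> ~{conc:.0f} ppm CO2-equivalent")
```

The roughly 1,360 ppm CO2-equivalent implied by the 8.5 W/m2 label is nearly five times the pre-industrial level, which underscores how much fossil fuel combustion, mostly coal, the scenario assumes.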
In NCA4, RCP8.5 was the business-as-usual scenario and RCP4.5 was the climate policy mitigation scenario. Achieving RCP4.5 was estimated to reduce harmful climate change impacts on labor productivity, extreme heat mortality, and coastal property by 48 percent, 58 percent, and 22 percent, respectively (NCA4, Ch. 29, p. 1359).

But wait, there's more! Recent research by Roger Pielke, Jr. and colleagues suggests the most realistic emission scenario is not SSP2-4.5 but an even "cooler" scenario, SSP2-3.4. In other words, the current global emissions trajectory adds 3.4 W/m2 of warming pressure by 2100. Assuming 3°C climate sensitivity, SSP2-3.4 results in 2.0°C-2.4°C of warming by 2100. Keep in mind that lower ECS values between 1.5°C and 2.0°C "are quite plausible."

It is difficult to overstate the distorting influence RCP8.5 and SSP5-8.5 have had on climate research and public discourse. Google Scholar lists 51,900 papers on RCP8.5 and 15,500 on SSP5-8.5. Cursory sampling suggests that very few studies challenge the plausibility of those scenarios. Of the first 50 entries for both RCP8.5 and SSP5-8.5, only one is critical. The other 99 studies use RCP8.5 or SSP5-8.5 to project climate change impacts. The climate fraternity's decades-long embrace of extreme scenarios as business-as-usual is a scandal about which the Politico reporters say nothing.

Turning now to AR4 and the USGCRP reports informing the EPA's 2009 Endangerment Finding, we find the same reliance on implausible emission scenarios. Pielke, Jr. recently posted the relevant information on his blog. As he explains, the Endangerment Finding relied on two sets of scenarios to project future changes in climate and the associated risks: the six scenarios developed in the IPCC's Special Report on Emission Scenarios (SRES, 2000) and three Climate Change Science Program (CCSP) scenarios developed by Clarke et al. (2007). Pielke, Jr. presents two charts showing the nine scenarios and their radiative forcings in 2100. The left panel shows the six SRES scenarios (plus three earlier IPCC scenarios, the IS92 scenarios); the right panel shows the three CCSP scenarios.

Here are the nine scenarios arranged from highest to lowest forcing:

A1FI-9.2 (SRES)
IGSM-8.6 (CCSP)
A2-8.1 (SRES)
MERGE-6.6 (CCSP)
MiniCAM-6.4 (CCSP)
A1B-6.1 (SRES)
B2-5.7 (SRES)
A1T-5.1 (SRES)
B1-4.2 (SRES)

Pielke, Jr. observes:

The nine scenarios "are heavily skewed to very high levels of 2100 radiative forcing, with two even more extreme than RCP8.5."
Eight of the nine "project a central estimate" of 3.0°C above pre-industrial temperature by 2100, "a value today viewed to be unlikely."
The average radiative forcing across all nine scenarios is 6.7 W/m2.
Of the nine scenarios, only B1-4.2 "is consistent with what today are called 'current policy' scenarios."

The chart below shows the CCSP energy market projections. The purple segments depict the projected market shares of coal without carbon capture and storage (CCS). In each of the six panels, coal without CCS increases to become either the dominant component of the US and global energy mix or the largest single component. "No one believes that anymore," Pielke, Jr. comments.

Like the later IPCC and USGCRP reports, the 2009 Endangerment Finding relied on unrealistic, warm-biased models and emission scenarios. The Politico reporters fail to engage the DOE report's specific critique of the CMIP models. They avoid the issue of implausible emission scenarios entirely.
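Before turning to adaptation, a back-of-envelope sketch shows how the two threads of Part 1, scenario forcing and climate sensitivity, combine. Because ECS is defined per doubling of CO2, and a doubling adds roughly 3.7 W/m2 of forcing, the equilibrium warming implied by a pathway is approximately ECS × ΔF / 3.7. Warming realized by 2100 falls short of the equilibrium value because the oceans adjust slowly, which is consistent with the 2.0°C-2.4°C figure quoted above for SSP2-3.4 at 3°C sensitivity.

```python
# Back-of-envelope: equilibrium warming implied by a pathway's 2100
# forcing, dT_eq = ECS * dF / F_2X, with F_2X ~ 3.7 W/m2 per CO2
# doubling. Realized warming by 2100 is lower than equilibrium
# because the deep ocean takes centuries to adjust.
F_2X = 3.7  # approximate forcing from a CO2 doubling, W/m2

scenarios = {"SSP5-8.5": 8.5, "SSP2-4.5": 4.5, "SSP2-3.4": 3.4}
ecs_values = [1.8, 3.0, 4.0]  # low, IPCC best estimate, high (deg C)

for name, dF in scenarios.items():
    cells = ", ".join(f"ECS {e}: {e * dF / F_2X:.1f} C" for e in ecs_values)
    print(f"{name} ({dF} W/m2): {cells}")
```

The spread in the output makes the stakes plain: pairing a low-sensitivity estimate with a realistic pathway yields a fraction of the warming produced by pairing a hot model with SSP5-8.5.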
Depreciating adaptation

The Obama administration EPA's decision to treat potential adaptation as "outside the scope" of an endangerment finding also distorted the analysis. The EPA argued it would be as inappropriate to consider potential adaptation to a changing climate as it would be to "consider the availability of asthma medication in determining whether criteria pollutants endanger public health."

That argument is specious, because CO2-related health risks are not analogous to health risks associated with criteria pollutants. Criteria pollutants, toxic air pollutants, and radioactive substances endanger health or welfare via direct routes of exposure such as inhalation, dermal contact, or deposition and ingestion. For such pollutants, the only reasonable form of "adaptation" is mitigation, i.e., pollution control or prevention. In contrast, CO2 is non-toxic to human and animal life at any concentration projected to result from fossil fuel combustion, and the ongoing rise in the air's CO2 content has substantial agricultural and ecological benefits. Carbon dioxide-related risks arise not from exposure but from potential changes in weather and sea levels over periods of decades to centuries.

Consequently, adapting to a changing climate is fundamentally different from "adapting" to toxic exposures or associated illnesses. No one claims medications for pulmonary disease or radiation sickness, or the availability of hazmat suits, can make people better off than they would be had they never been exposed to those hazards in the first place. However, adaptation to changes in the weather and sea levels over periods of decades to centuries could very well make future generations better off than current generations.

Adapting to varied and even extreme environmental conditions is what human beings have been doing since time immemorial. And it works. Adaptation is part of the virtuous cycle of progress that, in the post-1950s warming period, has achieved unprecedented improvements in global life expectancy, per capita income, per capita food supply, and crop yields.

More pertinently, adaptations driven by the pursuit of happiness, market dynamics, and prudent policies increasingly protect humanity from extreme weather. Globally, the decadal-average annual number of deaths due to droughts, floods, wildfires, storms, and extreme temperatures declined from about 485,000 per year in the 1920s to about 14,000 per year in the past decade, a 96 percent reduction in climate-related mortality. Factoring in the fourfold increase in global population since the 1920s, the average person's risk of dying from extreme weather has decreased by 99.4 percent.

Source: Bjorn Lomborg (2022).
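The arithmetic behind those headline percentages is simple enough to reproduce from the rounded figures quoted above; small differences from the cited 96 and 99.4 percent reflect rounding in the underlying decade averages.

```python
# Back-of-envelope check using the rounded figures quoted above.
deaths_1920s  = 485_000  # average annual climate-related deaths, 1920s
deaths_recent =  14_000  # average annual deaths, most recent decade
pop_growth    =     4.0  # approximate global population multiple since the 1920s

absolute_drop   = 1 - deaths_recent / deaths_1920s
per_capita_drop = 1 - (deaths_recent / deaths_1920s) / pop_growth

print(f"decline in annual deaths:   {absolute_drop:.1%}")    # ~97.1%
print(f"decline in per-person risk: {per_capita_drop:.1%}")  # ~99.3%
```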
Taking an even longer view, global deaths from extreme weather are conservatively estimated at 50 million in the 1870s. That frightful toll declined to an estimated 5 million in the 1920s, 500,000 in the 1970s, and 50,000 in the 2020s. Global weather-related deaths in the first half of 2025 totaled about 2,200, very likely the lowest weather-related mortality of any six-month period in recorded history.

Source: Roger Pielke, Jr. (July 21, 2025).

Globally, climate-related economic losses have increased as population and exposed wealth have increased. However, losses as a percentage of exposed wealth declined almost fivefold from 1980-1989 to 2007-2016, with most of that progress occurring in low-to-middle income countries. Neither the Endangerment Finding nor the subsequent assessments that supposedly strengthen it spotlight this big picture of improving climate safety. The Politico reporters do not mention it.

But suppose climate sensitivity turns out to be 3.0°C or higher, and current energy market trends reverse: could adaptation continue to improve the quality of the human environment? In his book False Alarm, Bjorn Lomborg reviews Hinkel et al. (2014), a sea-level rise study published in Proceedings of the National Academy of Sciences. The study includes a scenario in which sea-level rise driven by an RCP8.5 warming of 5.0°C floods areas inhabited by up to 4.6 percent of the global population in 2100, with annual losses of up to 9.3 percent of global GDP. However, those extraordinary damages are projected to occur only if people do nothing more than maintain current sea walls. If "enhanced" adaptive measures are taken, so that coastal protections keep pace with sea-level rise, flood damages in 2100 are "2-3 orders of magnitude lower."

Yes, annual flood and dike costs increase by tens of billions of dollars. However, Lomborg calculates, the relative economic impact of coastal flooding declines sixfold, from 0.05 percent of global GDP in 2000 to 0.008 percent in 2100. Moreover, the annual average number of flood victims declines by more than 99 percent, from 3.4 million in 2000 to 15,000 in 2100. In short, even in a 5°C warming scenario, forward-looking adaptation could potentially make coastal flooding less disruptive and damaging than it is today. To exclude this type of analysis from an endangerment determination is unreasonable.

Overheated models, inflated emission scenarios, and lame adaptation assumptions compelled the conclusion that rising GHG concentration "may reasonably be anticipated to endanger public health or welfare." Today's EPA should seriously consider an alternative conclusion: Societies that protect economic liberty and welcome abundant energy may reasonably anticipate a future of increasing climate safety and a diminishing relative impact of weather-related economic damage.