Friday, 8 May 2026

Ukraine launches second-biggest drone attack on Russia amid rows over Victory Day ceasefire.

 Extract from ABC News




Security in Moscow has been beefed up ahead of the city's commemoration of Victory Day. (Reuters)

In short:

Ukraine has launched its second-biggest drone assault on Russia since war broke out in February 2022.

Air defences downed 347 Ukrainian drones across 20 Russian regions overnight, with further drone attacks also disrupting flights in Moscow.

What's next?

The Kremlin has ordered mobile internet access in Moscow to be switched off for Victory Day commemorations on Saturday.

James Hansen - Exxon

 

Credit: “Inconvenient Truth” by Pat Bagley, Salt Lake Tribune, 19 Jan. 2023
 
Exxon

7 May 2026
James Hansen
A transformation of Exxon occurred in the 1980s that is related to a fundamental change of our nation that has occurred gradually since World War II. The change of our nation is what I hope Sophie’s Planet can help young people understand. The following is a draft of the end of Chapter 23 (Global Habitability), for context, and all of Chapter 24 (Exxon).
 
… The Global Habitability program was shot down at Unispace’82.

Seven years later, in 1989, the United States Administration and Congress would finally give NASA approval and funding for global Earth observations, spurred by an extreme drought and summer heat wave in 1988 and a campaign promise of President George H.W. Bush to address the greenhouse effect with the “White House effect,” as described in later chapters. The initial name of that program, Mission to Planet Earth, was consistent with the concepts defined at the Global Habitability workshop. A comprehensive scientific rationale – Earth Systems Science – was developed for global change research. However, unlike Pioneer Venus and other space science missions, the nature of the observation system for Mission to Planet Earth was not defined by the scientific community. It was imposed by NASA Headquarters.

NASA was different by 1989. It was not the NASA of the 1960s or even the 1970s. NASA had become more bureaucratic, the problem that Tom Young hinted at after he appointed me to be director of GISS (Chapter 19). Doubtless, the changes made NASA less attractive to top talent. Young, the storied director of the Viking Mission to Mars, left NASA in 1982 to join Martin Marietta Corporation, where, after a few years, he was President and Chief Operating Officer. Hans Mark left NASA in 1984 to become Chancellor of the University of Texas system. NASA Associate Administrators (AAs) became more political; no longer were there AAs with the scientific acumen and independence to reverse a project decision on scientific grounds, as Tim Mutch did in the 1970s to preserve our Galileo investigation (Chapter 13).

Are bureaucratization and decline inevitable, an unavoidable result of institutional aging? Surely not, but they are common, not only in organizations such as NASA, but in nations as a whole. In that regard, I often think of the day after American astronauts set foot on the Moon, when I was a post-doc in the Netherlands (Chapter 8). Congratulations for that historic accomplishment overflowed at tea time at the Sterrewacht the next day, yet what I remember is a cautionary remark and question of Joop Hovenier, the Dutch colleague with whom I was working closely.

History shows that triumphant nations decline after their period of greatness, Joop warned. The United States had been ascendant since the end of World War II, but how long could this period last? Did I have any reason to believe that America could avoid the fate of prior great nations? I had no immediate response, but I never forgot his question.

Eventually, I found a rationale to justify my optimism. Our strength is based on concepts from the Age of Science and Reason – the Enlightenment – and the foresight of our Founders to build these concepts into our Declaration of Independence and Constitution. The core idea is equal rights and equal opportunity, which together I unapologetically describe as the American dream, a dream that includes a welcome mat for immigrants who come with their energy, talents, and desire to improve their lives, thus contributing to the progress of our nation.

The dream has receded in the past several decades, as our Founders realized that it could. Benjamin Franklin did not explain his concern when he responded to Elizabeth Willing Powel’s question “Well, Doctor, what have we got, a republic or a monarchy?” with “A republic, if you can keep it.” However, history reveals that the effect of money on government, the effect of the personal pecuniary interests of our representatives in our democracy, is the major threat.

This does not imply that most of our representatives are engaged in illegal activities; present law allows them to accept “campaign contributions” from special interests who wish to affect legislation. However, consequences on government policies and on the efficiency of our government are enormous. Moreover, the private sector, too, understands how to use this legalized bribery to obtain favorable terms that limit competition. The casualty is the American dream of equal rights and equal opportunity. In turn, this brings long-term decline of our nation.

Our democracy still allows a path that addresses climate change and the decline in government efficiency, we will conclude, but a persuasive case for that path requires observations and understanding of the evidence. Thanks in part to Wally Broecker, we learned a lot in the year following the Global Habitability workshop. Wally suggested that I work with his colleague, Taro Takahashi, to organize a symposium on climate change. Wally and Taro had research support from Exxon, which was willing to fund a Ewing Symposium.[1]
 
Chapter 24.  Exxon

Edward E. David, Jr., President of Exxon Research and Engineering, was impressive. He had a Ph.D. from MIT in electrical engineering, he had been the Executive Director of Research at Bell Labs, and he was Science Adviser to President Richard Nixon from August 1970 until January 1973. We were delighted when David agreed to give the dinner talk on the first day of the Ewing Symposium, which was held in late October 1982.

One benefit of the Symposium was insight into the thinking – about climate change – of the giant oil company, Exxon. Dinner talks are often forgettable. Not E.E. David, Jr.’s. The audience was rapt. David[2] spoke from a text, which we published in the symposium proceedings.[3] David’s presentation was fitting for a scientific symposium and it had broad policy implications. He began by pointing out the power of science and technology to shape the future:
“Exxon is a hundred years old this year; we have a long corporate memory of the very profound social and economic transformations that our business activities have helped bring about, and of how we and society have had to adapt further in response. But faith in technologies, markets, and correcting feedback mechanisms is less than satisfying for a situation such as the one you are studying at this year’s Ewing Symposium. The critical problem is that the environmental impacts of the CO2 buildup may be so long delayed. A look at the theory of feedback systems shows that where there is such a long delay, the system breaks down, unless there is anticipation built into the loop.”
E.E. David was perceptive about the basic issue posed by human-caused climate change: the delayed response and potential implications for global energy policy. With his background in electrical engineering, he realized that a system with amplifying feedbacks can break down if the forcing and feedbacks are too large. “Anticipation” is needed to avoid climate forcing that pushes the system beyond the breakdown point. However, what magnitude of forcing will push the system to breakdown? That depends on climate sensitivity. It is a hard problem.

The Symposium was an opportunity to bring in leading scientists, learn from them, and contribute a paper to the symposium monograph.[3] Our paper[4] was the embodiment of our research approach based on paleoclimate data, global modeling, and analysis of ongoing physical processes. Our paper used all of these to provide insight, but not yet precise answers, for several basic climate issues: climate sensitivity, climate feedbacks, and climate response time.

Climate sensitivity: We showed that comparison of two equilibrium climate states has potential to provide accurate empirical evaluation of climate sensitivity. The best opportunity is comparison of the Last Glacial Maximum (LGM, peak cold about 20,000 years ago) with the present interglacial (the Holocene). Climate sensitivity is the ratio of glacial-interglacial temperature change to the climate forcing that maintains the changed climate. The climate forcing (imposed change of Earth’s energy balance) is caused by change of atmospheric gases and change of surface albedo (reflectivity) due to different ice sheet sizes in the glacial and interglacial periods. These forcings can be calculated from atmospheric composition preserved in polar ice cores (Prologue II) and geologic evidence of ice sheet sizes during the LGM.
Fig. 23.1.  Reto Ruedy and Dorothy Peteet (recent photos).
Thus, if we know how cold the ice age was, we have climate sensitivity. “Ay, there’s the rub,” Shakespeare might say, because it is hard to know how cold the ice age was. A National Science Foundation (NSF) project to evaluate LGM surface conditions (CLIMAP, Climate: Long range Investigation, Mapping, and Prediction)[5] had just been completed. CLIMAP estimated LGM sea surface temperatures (SSTs) from the geographical distribution of microscopic biological species near the ocean surface during the LGM, as recorded in shells of those species in ocean sediments. CLIMAP assumed that each species migrated as climate changed to stay within the temperature range where they live today. With that assumption, CLIMAP found that LGM SSTs averaged only a few degrees cooler than today.

If the CLIMAP assumption was correct, it implied a low climate sensitivity, but by 1982 we had a tool, our global climate model (GCM), that allowed us to quantitatively investigate such issues. Gary Russell (Chapter 14) was the architect of our GCM, but Reto Ruedy (Fig. 23.1), a Swiss mathematician by training (Chapter 14), mastered the physics of the entire model and thus was able to reliably carry out a wide range of studies with the model. When Reto ran the model with CLIMAP’s boundary conditions, we found that the cooling over land was tightly constrained by CLIMAP SSTs, yielding an LGM global cooling of 3.6°C. The LGM-to-Holocene climate forcing (from greenhouse gas and ice sheet changes) is about 6 watts per square meter (W/m2), which is 1.5 times larger than doubled CO2 forcing.

Thus, CLIMAP surface conditions implied climate sensitivity of 2.4°C for doubled CO2 (forcing of 4 W/m2). Such a low sensitivity is consistent with Manabe’s GCM examined by Charney (Chapter 17), which used fixed clouds and an adiabatic adjustment for moist convection. However, we concluded for two reasons that 2.4°C sensitivity is an underestimate.
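The arithmetic behind that 2.4°C figure can be checked in a few lines, a minimal sketch using only the numbers quoted above (the variable names are mine):

```python
# Empirical climate sensitivity implied by CLIMAP boundary conditions,
# as described in the text: sensitivity per doubled CO2 is the
# glacial-interglacial cooling divided by the forcing, rescaled to the
# 4 W/m2 forcing of doubled CO2.
lgm_cooling = 3.6   # degrees C, LGM global cooling with CLIMAP SSTs
lgm_forcing = 6.0   # W/m2, greenhouse gas + ice sheet forcing
doubled_co2 = 4.0   # W/m2, forcing for doubled CO2

sensitivity = lgm_cooling / lgm_forcing * doubled_co2
print(sensitivity)  # 2.4 degrees C per doubled CO2
```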

Our first reason was based on work of Dorothy Peteet (Fig. 23.1), then a New York University graduate student working with us at GISS and with Wally Broecker at Lamont (she graduated and became a post-doc at GISS in 1983). Dorothy used pollen from various plant species, recorded in sediment cores in bogs, to conclude that low latitude land areas were much colder than implied by the mild CLIMAP ocean temperatures. Also, she reviewed paleoclimate literature showing that glaciers on subtropical mountains descended about 1 kilometer during the LGM, which was about twice as far as they would have descended if CLIMAP temperatures were accurate. In our paper, we cited these analyses, which were later published.[6]

Our second reason to infer that CLIMAP SSTs are too warm and climate sensitivity is greater than 2.4°C was based on a climate simulation with our GCM using CLIMAP surface conditions and LGM greenhouse gas amounts. Earth was then far out of energy balance, with more than 2 W/m2 of heat pouring into space – the planet was trying to cool off, by a lot (2 W/m2 is half of doubled CO2 forcing). The plausible explanation, consistent with Peteet’s analysis, was that CLIMAP SSTs are unrealistically warm. Quantitative implications are discussed below.

Climate feedbacks: E.E. David’s talk spurred us to use electrical engineering formalism for feedback “gains” that amplify climate sensitivity. Evaluation of individual feedbacks was already common, for example, in Charney’s report[7] and in our 1981 Science paper, but we found the gain concept to be useful. Doubled CO2 (or +2% solar irradiance) is a 4 W/m2 forcing (it causes an Earth energy imbalance of +4 W/m2, more energy coming in than going out). That imbalance causes global warming. In the absence of feedbacks, Earth would need to warm ΔT = 1.2°C to increase radiation to space 4 W/m2 and restore energy balance (Chapter 10). However, warming increases atmospheric water vapor, reduces sea ice area, and alters clouds, which are feedbacks that cause actual climate sensitivity to be ΔT = 1.2°C/(1 – g), where g, the feedback gain, is the sum of the water vapor, sea ice, and cloud gains, g = gwv + gsi + gcl.

Andy Lacis evaluated the individual gains from the changes of water vapor, sea ice and clouds in our doubled CO2 GCM simulation. By inserting the changes into a column (radiative-convective) climate model one-by-one, he inferred gwv = 0.4, gsi = 0.1, and gcl = 0.2. Together these three feedbacks yield climate sensitivity ΔT = 1.2°C/(1 – 0.7) = 4°C. This feedback analysis also clarifies the climate sensitivity of Manabe’s global model that Charney used in his 1979 study. Manabe used fixed clouds, thus gcl = 0, which reduces climate sensitivity to ΔT = 1.2°C/(1 – 0.5) = 2.4°C. In addition, Manabe’s moist adiabatic adjustment limited penetration of water vapor to the upper troposphere, thus apparently reducing the water vapor gain to about gwv = 0.3 and thus climate sensitivity to ΔT = 1.2°C/(1 – 0.4) = 2°C.
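The gain formalism above reduces to one line of arithmetic. A minimal sketch with the gains quoted in the text (the function name is mine):

```python
# Feedback-gain formula from the text: equilibrium warming for doubled CO2
# is 1.2 C / (1 - g), where g is the sum of the individual feedback gains.
NO_FEEDBACK_WARMING = 1.2  # degrees C

def equilibrium_warming(g_wv, g_si, g_cl):
    """Warming for water-vapor, sea-ice, and cloud gains summing to g < 1."""
    g = g_wv + g_si + g_cl
    return NO_FEEDBACK_WARMING / (1.0 - g)

print(equilibrium_warming(0.4, 0.1, 0.2))  # GISS GCM gains: ~4.0 C
print(equilibrium_warming(0.4, 0.1, 0.0))  # fixed clouds: 2.4 C
print(equilibrium_warming(0.3, 0.1, 0.0))  # moist adiabatic adjustment: ~2.0 C
```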

Overall, the feedback analysis provides a simple quantitative picture of the role of feedbacks in producing global climate sensitivity. It draws attention to the importance of understanding the cloud feedback. The entire range of equilibrium climate sensitivities, from ΔT = 2.4°C to 6°C, is accounted for by the range of gcl between 0 and 0.3 (Table 24.1). At the time of the Charney study, it was assumed that cloud modeling in GCMs would improve within several years, and thus understanding of climate sensitivity soon would be much improved. Almost half a century later, cloud modeling is still primitive and by itself cannot tightly constrain climate sensitivity. However, we will show later that precise measurements of spatial and temporal changes of Earth’s energy imbalance imply that the cloud gain is large, near gcl = 0.2, which implies high climate sensitivity, at least near 4°C.

The “gain” formalism proved useful later for understanding the possibility of “runaway” global warming. If g reaches a value near 1, does that mean an infinitely large climate sensitivity and runaway to Venus-like conditions? No, not in most cases. It depends on the “ammunition” that the feedback has to offer. The only feedback with near-infinite ammunition is water vapor because of Earth’s ocean. Even the ocean, eventually, can run out of water; but, in that case, carbon will be “baked” out of Earth’s crust and Earth will reach the Venus syndrome (Chapter 10). Fortunately, it will take several billion years for the ocean to run out of water.
Table 24.1. Equilibrium climate sensitivity (ΔT) as a function of cloud gain, gcl.
Climate response time: We concluded that climate response time – the time needed for global temperature to approach its new temperature after a climate forcing change – was probably much longer than the research community estimated. Charney’s report (Chapter 17) raised the issue of the delay caused by the ocean – the time it takes for the ocean surface to warm. However, Charney did not realize that the lag depends dramatically on climate sensitivity.

Lag dependence on climate sensitivity is easy to explain. Feedbacks do not come into play in response to climate forcing, but rather in response to temperature change. We can clarify the consequence with an example. The upper layer of the ocean, called the mixed layer, is stirred by winds, with the deepest mixing in winter, when surface water is cold, dense, and thus easy to mix downward. The global average of seasonal-maximum mixed layer depth is 110 m. How long does it take for this layer to approach equilibrium warming in response to a forcing such as doubled CO2? If there are no feedbacks and no exchange of water between the mixed layer and deeper ocean, the problem has a simple answer: ΔT = 1.2°C × [1 – e^(–t/τ)], where the e-folding time, τ, is about 4 years. In other words, global temperature asymptotically approaches its equilibrium warming of 1.2°C; thus, it reaches 63% of the equilibrium response in 4 years (see our Ewing paper for derivation of the formula; e, the base of the natural logarithm, is ~2.718).
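The no-feedback mixed-layer response described above can be sketched numerically (the function and constant names are mine):

```python
import math

# Mixed-layer warming with no feedbacks and no deep-ocean exchange,
# as in the text: dT(t) = 1.2 C * (1 - e^(-t/tau)), with tau ~ 4 years.
TAU = 4.0          # e-folding time, years
EQUILIBRIUM = 1.2  # equilibrium warming for doubled CO2, degrees C

def mixed_layer_warming(t_years):
    return EQUILIBRIUM * (1.0 - math.exp(-t_years / TAU))

# After one e-folding time the response is 1 - 1/e, about 63% of equilibrium:
print(mixed_layer_warming(4.0) / EQUILIBRIUM)  # ~0.632
```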

Real-world climate response is slower for two reasons: feedbacks and the deeper ocean. Feedbacks, because they come into play with temperature change, increase response time in proportion to the temperature change. Our first global climate model (GCM) produced global warming of 4.2°C for doubled CO2 (the GCM is described in Chapter 27), thus the feedback factor is 4.2°C/1.2°C = 3.5, which increases mixed layer e-folding time from 4 to 14 years, long enough for substantial exchange of water between the mixed layer and deeper ocean. Ocean dynamics is complex, with deep water formation in polar regions and nearly horizontal motion along constant density (isopycnal) layers in most of the ocean. This ocean mixing can be approximated by local diffusion with a diffusion coefficient dependent on stability at the base of the winter mixed layer. We obtained the global ocean distribution of this stability from ocean data of Levitus.[8] We inferred an empirical relation between stability and diffusion coefficient, k, based on penetration of a tracer (tritium sprinkled on the ocean by atmospheric atomic testing) measured at many locations.[9] This relation defines a diffusion coefficient at all ocean locations, with values near 10 cm2/s in the polar ocean and a few tenths of a cm2/s in the tropics (Fig. 15 of our Ewing paper). These values yield an e-folding response time for global ocean surface temperature of about a century for our climate model.

A century! That was much longer than other results. The then-existing atmosphere-ocean GCM of Bryan and Manabe[10] had a 25-year response time; simpler models had even faster response.[11] Wally Broecker disputed our result; a “box-diffusion” model of his European collaborators, using the same ocean tracer data[9] that we used, gave a much faster response. His criticism was biting, expressed as greater “trust” in the modeling ability of others. Successful oral response did not seem possible. Instead, with colleagues, I worked on a paper[12] specifically on climate response time, which was accepted and published as a lead report in Science. We showed that response time is proportional to the square of climate sensitivity; thus, the 25-year time scale of the Bryan and Manabe model became 25 × (4.2/2)² ≈ 110 years for the 4.2°C sensitivity of our GCM.
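That square-law rescaling amounts to one line of arithmetic, sketched here with the numbers quoted above (the function name is mine):

```python
# Response time scales with the square of climate sensitivity, so the
# Bryan-Manabe 25-year time scale (2 C sensitivity) rescales to the
# 4.2 C sensitivity of the GISS model.
def rescaled_response_time(t_ref, sens_ref, sens_new):
    return t_ref * (sens_new / sens_ref) ** 2

print(rescaled_response_time(25.0, 2.0, 4.2))  # ~110 years
```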

Our Ewing paper, based on broad evidence from paleoclimate, climate modeling, and ongoing global warming, showed that climate sensitivity is 2.5-5°C, higher than Charney’s 1.5-4.5°C. High sensitivity implies greater climate impacts and intergenerational effects. The response (e-folding) time for sensitivity 2.5°C is 40 years, while only 14 years for 1.5°C sensitivity. Thus, our children and grandchildren will experience larger climate effects than we observe now, unless we reverse the human-made drive for climate change. The results and issues raised in our peer-reviewed Ewing and Science papers deserved greater attention.

Intergenerational effects are undeniable. The conclusion that most climate feedbacks come into play only with temperature change, and thus increase the response time, is undeniable. The other cause of slow response – exchange of water between the surface mixed layer and the deeper ocean – is well quantified by observed penetration of trace human-made substances into the deeper ocean; it is also undeniable. These are sobering facts, not speculation. The conclusion that climate sensitivity is at least 2.5°C makes it certain that today’s young people will bear a great burden, if the world does not implement responsible energy and climate policies.

Who bears responsibility? Who should young people focus on to address their situation? It is common to blame the fossil fuel industry, but fossil fuels have raised living standards almost everywhere. Scientists, politicians, and the public also warrant scrutiny. Please keep all of these people in mind as I describe events in subsequent decades. I believe that you will be surprised at where the evidence points and pleasantly surprised at how potentially tractable the actions are that are needed to achieve a brighter future for young people and their children.

Exxon understood the situation. E.E. David knew that climate’s delayed response requires anticipation to avoid system breakdown, and thus the development of energy sources that do not emit CO2. Exxon understood the multi-decadal time scale of energy transitions. Could this be a historic moment, when the world’s leading energy company invests in carbon-free energy? No. Instead, Exxon and the fossil fuel industry chose to develop hydraulic fracturing (“fracking”), supported by government subsidies. Thus, predictably, E.E. David’s professed concern was realized: there was no effective “anticipation,” and “faith in technologies, markets, and correcting feedback mechanisms” was, indeed, misplaced. How did David square Exxon policy with the knowledge he exhibited in his Ewing speech? He became a climate change denier, bringing to mind Upton Sinclair’s famous dictum, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” However, there is little merit in faulting the fossil fuel industry for providing a product that the public wanted and used fruitfully, especially in the absence of contrary guidance from scientists, politicians, or the public.

Congress, in passing President Carter’s 1980 Energy Security Act,[13] requested a climate assessment by the National Academy of Sciences. The resulting Changing Climate,[14] also called the Nierenberg report after its chairman, was an abomination, in my opinion. Realization of the threat of climate change had increased since the Charney report, but there was no such tone in this report. The report’s climate science had a major fundamental flaw, and there was no effort to convey the most important climate science implications to policymakers.

The major technical flaw arose from an assumption that climate response time is about 15 years, independent of climate sensitivity, leading to a conclusion that climate sensitivity is near or below 1.5°C for doubled CO2. Changing Climate appeared months before the Ewing monograph, but appropriate review would have uncovered this flawed assumption and conclusion. The preface of Changing Climate made an odd statement: “…as interest in synfuels diminished, the [assessment committee] chose to place less emphasis on this aspect of the CO2 issue…” The fossil fuel industry was keenly aware that reserves of conventional oil and gas were limited and that it requires decades to move from one principal energy source to another. The fossil fuel industry was already beginning to invest in fracking and other unconventional fossil fuels – with the help of taxpayer subsidies.

Instead of providing understandable information that policymakers need, Changing Climate established an approach for climate assessments that has prevailed ever since. Reports are excessively long technical discussions among researchers, without a good summary of policy implications. If we scientists simply speak to ourselves, offering no advice to policymakers, have we not largely wasted the public investment in our education? Anniek reminds me that I tend to be suspicious. I was suspicious in this case. Why was there no scientific objection to the Nierenberg report? Were scientists discouraged from venturing into policy? I was convinced that DoE’s Koomanoff took his hatchet to our CO2 research program (Chapter 19) because he did not like our results, especially the fact that our 1981 Science paper[15] pointed out the likely need to phase down fossil fuel emissions and avoid unconventional fossil fuels. I suspected that, in his entertaining of our proposal a second time and rejecting it again publicly, he was making an example of us: stay in your lane, climate scientists – your lane is not energy policy.

At GISS, we were hanging on by our fingernails. Goddard was determined to move us to Greenbelt. Its freeze on hiring at GISS and the loss of our CO2 research funding combined to create a long, harsh period. NASA’s plan for a Mission to Planet Earth stalled, as Congress failed to provide any support. Then suddenly an angel appeared on the scene. Or was it two angels? Sometimes it is hard to distinguish an angel from an ordinary person.
 
[1] Maurice Ewing was founder and first Lamont director (Prologue II). Symposia on geophysical topics were held at Lamont intermittently in his honor.
[2] Anniek said that his hairpiece was awful.
[3] Hansen JE and Takahashi T. Climate Processes and Climate Sensitivity. Geophysical Monograph 29, Maurice Ewing Volume 5, American Geophysical Union, Washington, DC, 1984
[4] Hansen J, Lacis A, Rind D, et al. Climate sensitivity: analysis of feedback mechanisms. In American Geophysical Union Geophysical Monograph 29, 130-63, 1984
[5] CLIMAP project members. Seasonal reconstruction of the Earth’s surface at the last glacial maximum. Geol Soc Amer, Map and Chart Series, No. 36, 1981
[6] Rind D, Peteet D. Terrestrial conditions at the last glacial maximum and CLIMAP sea-surface temperature estimates: Are they consistent? Quat Res 24, 1-22, 1985
[7] The climate feedback discussion in Charney’s report was largely the work of Robert E. Dickinson, renowned atmospheric and geoscientist. Dickinson did not seek media attention, but he was viewed by many, including me, as a genius and top climate researcher of his era. He generously schooled many other scientists, including Steve Schneider, when they were both at the National Center for Atmospheric Research.
[8] Levitus S, Climatological Atlas of the World Ocean, NOAA Prof. Paper No. 13, U.S. Government Printing Office, Washington DC, 1982.
[9] Broecker WS, Peng TH, Engh R, Modeling the carbon system, Radiocarbon 22, 565-98, 1980
[10] Bryan K, Komro FG, Manabe S et al. Transient climate response to increasing atmospheric carbon dioxide. Science 215, 56-8, 1982
[11] Hunt BG, Wells NC. J Geophys Res 84, 787-91, 1979; Hoffert MI, Callegari AJ, Hsieh CT. J Geophys Res 85, 6667-79, 1980; Cess RD, Goldenberg SD. J Geophys Res 86, 498-502, 1981; Schneider SH, Thompson SL. J Geophys Res 86, 3135-47, 1981; Bryan K, Komro FG, Manabe S et al. Science 215, 56-8, 1982
[12] Hansen J, Russell G, Lacis A et al. Climate response times: dependence on climate sensitivity and ocean mixing. Science 229, 857-9, 1985
[13] Senator Abraham Ribicoff added an amendment to the 1978 National Climate Act, which was incorporated into the Energy Security Act signed by President Carter in 1980. See Naomi Oreskes and Erik M. Conway, Merchants of Doubt, pp. 176-177, Bloomsbury Press, 2010
[14] Nierenberg, W.A. (Chairman), Changing Climate: Report of the Carbon Dioxide Assessment Committee, Washington, DC, National Academies Press, 519 pages, https://doi.org/10.17226/18714, 1983
[15] Hansen J, Johnson D, Lacis A et al. Climate impact of increasing atmospheric carbon dioxide. Science 213, 957-66, 1981

Thursday, 7 May 2026

In this machine age we must hold on to imperfect writing. It is not flawed. It is human.

 Extract from The Guardian

Alex Reszelska

We need the mess of it all. Without it, what remains are sentences that are technically flawless but emotionally vacant

Some people are naturally drawn to writing – scribbling notes in the margins, jotting poems and little stories, mostly for themselves, sometimes to entertain others. I’ve always been one of them. Every Christmas, I asked for a new journal.

At first, they came with cute illustrations, questionnaires and only a few blank pages. Later, as my writing grew more “sophisticated”, the journals became simpler: a beautifully decorated cover, sometimes leather-bound, and clean, unlined pages that invited experiments – haiku (always about heartbreak), song lyrics, fragments of short stories, scattered observations about life.

I also wrote poems for every family member’s birthday – rhymed, handwritten, slightly chaotic but earnest. They still live in my parents’ home, a testament to both the passing of time and the slow evolution of the author.

When I started learning English at the age of 10, some of those scribbles began to take on a foreign accent. It felt exciting – almost literary – to write in another language. I had a pen pal in the US; we described our very different lives to each other. I used to carry a heavy, leather backpack to school and didn’t step into a McDonald’s until I was 15. She had bright, plastic things – objects that to us, post-communist Polish kids, signalled abundance. She sent me colourful stickers I never dared to use. I treasured them, like gold.

When I graduated with a journalism degree, I didn’t know how to write well. I wrote poetically, emotionally, often excessively – too much hyperbole and too many metaphors, too many thoughts spilling everywhere.

Then came the editors.

At a major newspaper where I interned, I sat beside them as they worked through my copy. Delete. Delete. Delete. The sound of the Apple keyboard became a kind of brutal metronome. It was so ruthless I often held back tears – and yet it was the best writing education I could have received.

A decade ago, after years abroad in Japan and the UK, I moved to Australia. Another country. Another recalibration of language.

The words were in me, but they were messy – misused, mispronounced. What I had, though, was observation. Curiosity. A raw need to express.

So I kept writing. Essays at first, for free. Then small commissions. Slowly, the writing life took shape.

This long introduction comes down to one thing: the making of a writer is neither neat nor linear. It requires effort.

It is a winding path, full of mistakes, red-marked edits, awkward phrasing and – this stings the most – rejected drafts.

Joseph Conrad (another Pole writing in English) never fully owned the language and yet his books reshaped literature. Shakespeare invented words because existing ones weren’t enough. Meredith Costain’s Ella Diaries – the series my daughter reads obsessively, torch in hand under the covers – is bursting with made-up words and breathless punctuation. It is wonderful. Children know instinctively what we adults keep forgetting: that language is alive, and alive things are allowed to be messy.

And now we arrive here.

In an era when writing starts to feel eerily perfect – when every LinkedIn post reads as though it has been written by an accomplished writer, if only the sentences didn’t all sound filtered through the same machine – I find myself missing the friction, missing the typos, missing the realisation that the quirky phrase I admire has come from someone’s original thought process.

If you never write a bad essay, how will you know how to write a better one? If a generation of kids stops wrestling with words – because they arrive instantly, fully formed – how will they ever develop their own voice, their taste?

AI has flattened language. Removed the typos, the strange grammar, the gorgeous off-rhythm that makes writing feel alive. All of that is disappearing and, with it, we’re losing something profound – the possibility of being moved.

So this is my small plea, to writers of all kinds – young and old, aspiring and accomplished, first-language or second. Consider your imperfect writing not as a flaw but as a gift. A signature.

“There is nothing to writing. All you do is sit down at a typewriter and bleed” is a quote often attributed to Ernest Hemingway.

We need that blood, that pulse of synapses. We need the mess of it all. Because without it what remains are sentences that are technically flawless but emotionally vacant.

Perfectly polished. Entirely forgettable.

As someone who learned to write in English through blood, sweat and tears, who once chased perfection as though it was a destination, I now find myself moving in the opposite direction. Back towards rawness, towards weird syntax. Back home?

Alex Reszelska is a Polish-born, Oxford-educated writer