The National Interest

The B-52 Bombers Are Back in a Big Way in Europe

Tue, 25/08/2020 - 14:00

Peter Suciu

Security, Europe

There have been more than two hundred reported successful flight sorties coordinated with NATO allies and partners since the Bomber Task Force missions began.

While the 1980s pop band The B-52s toured Europe in the summer of 2019, this time it is the U.S. Air Force’s B-52 Stratofortress bombers that will be rocking and rolling through theater and flight training across Europe and Africa in the coming weeks. This weekend six of the aircraft from the Fifth Bomb Wing, based at Minot Air Force Base, North Dakota, arrived at Royal Air Force (RAF) Fairford, England, for the planned training mission.

Strategic bomber missions, which the Air Force reported have been occurring since 2018, are meant to provide theater familiarization for the aircrews as well as an opportunity for the Air Force to integrate with NATO allies and regional partners. The bomber missions will further enhance readiness and provide the training necessary to allow the Air Force to respond to any potential crisis or challenge in the region.

There have been more than two hundred reported successful flight sorties coordinated with NATO allies and partners since the Bomber Task Force missions began. 

“B-52s are back at RAF Fairford, and will be operating across the theater in what will be a very active deployment. Our ability to quickly respond and assure allies and partners rests upon the fact that we are able to deploy our B-52s at a moment’s notice,” said Gen. Jeff Harrigian, U.S. Air Forces in Europe and Air Forces Africa commander, in an Air Force statement. “Their presence here helps build trust with our NATO allies and partner nations and affords us new opportunities to train together through a variety of scenarios.”

History of the B-52 in Europe 

Even before the Berlin Airlift, British and American military planners saw the need for strategic bombardment capabilities in Western Europe, and this included the forward deployment of the B-29 Superfortress bomber to bases in Europe. It was in July 1948 that British prime minister Clement Attlee approved the deployment of American bombers to bases in the UK—which marked the first time the U.S. Air Force would station combat aircraft in another sovereign nation during peacetime.

In the late 1950s, the B-52 Stratofortress made its debut in the UK, while the first B-52s—from Loring Air Force Base, Maine—were deployed to Morón Air Base in Spain, marking the first such use of a Spanish base.

It was during the 1991 Gulf War that B-52s forward-deployed to U.S. Air Forces in Europe (USAFE) bases conducted coalition operations against Iraq. In recent years, the venerable B-52 has been joined by the newer B-1 Lancer and B-2 Spirit.

Last year a bomber task force of B-52s, along with airmen and support equipment from the Second Bomb Wing at Barksdale Air Force Base, Louisiana, was deployed to the U.S. European Command area of operations to conduct theater integration and flying training. Those bombers were also deployed to RAF Fairford.

Despite its age, the Cold War-era bomber has remained a reliable workhorse for the United States Air Force. There are currently fifty-eight B-52 bombers in active service, with another eighteen in reserve and roughly a dozen more in long-term storage. The bombers have flown under various commands during the type’s sixty-seven years of service, beginning with the Strategic Air Command (SAC), which was disestablished at the end of the Cold War in 1992, when its aircraft were absorbed into the Air Combat Command (ACC). Since 2010, all B-52 Stratofortresses have flown under the Air Force Global Strike Command (AFGSC).

The B-52 could also be said to be on a “world tour” of sorts. Over the Fourth of July weekend, the U.S. Air Force deployed a B-52H Stratofortress nuclear-capable bomber to Andersen Air Force Base in Guam. The bomber, assigned to the 96th Bomb Squadron, 2nd Bomb Wing, flew from Barksdale Air Force Base to Guam on a twenty-eight-hour mission that demonstrated U.S. Indo-Pacific Command’s commitment to the security and stability of the region.

Peter Suciu is a Michigan-based writer who has contributed to more than four dozen magazines, newspapers and websites. He is the author of several books on military headgear including A Gallery of Military Headdress, which is available on Amazon.com. 

Image: Reuters

Could North Korea Suffer Its Own Chernobyl Disaster?

Tue, 25/08/2020 - 13:50

Oleg Shcheka

Security, Asia

The risks are real.

Key Point: While North Korea’s nuclear weapons program gets most of the international attention, implications of Pyongyang’s pursuit of civilian nuclear energy should be a concern, too.

Chronic power shortages are one of North Korea’s major vulnerabilities. The country is extremely reliant on hydropower stations which, according to North Korean official sources, provide 56 percent of the national power-generating capacity. Hydropower output depends on precipitation and drops drastically in dry years. Developing nuclear energy has long seemed an obvious option for North Korea to bolster its energy security. For many decades the DPRK has been making efforts to build an atomic power industry, although the lack of funding and Pyongyang’s severely restricted access to the international market for civilian nuclear technologies have seriously hampered its progress in this area. Still, the DPRK continues to pursue nuclear-power generation. In particular, progress is being made on the 100 megawatt-thermal Experimental Light Water Reactor (ELWR) at the North’s Yongbyon Nuclear Scientific Research Center, construction of which began in 2010. There might also be other, as yet undisclosed, nuclear facilities in development and under construction designed to combine civilian (power generation) and military (plutonium production) functions.

This article originally appeared on 38 North.

Until recently Pyongyang has not treated its civilian atomic sector as a top priority, with most of the resources going into military-related nuclear programs instead. This, however, may be changing, especially with the specter of an external trade and energy blockade looming ever larger after the adoption of increasingly harsh international sanctions in 2017, including severe limitations on the supply of oil and petroleum products to North Korea. In order to deal with potential oil shortages, Pyongyang may be banking on coal liquefaction. The technology of turning coal into hydrocarbon liquids is not particularly sophisticated, but it requires large energy inputs, which might serve as another argument in favor of the speedy deployment of civilian nuclear energy.

Part of the rationale for nuclear power plants may also be strategic—using them as a shield to deter possible attacks on the North. The United States and South Korea might have to think twice before conducting military strikes in the areas where North Korea’s active nuclear power plants would be located given how close they are to Seoul and other densely populated areas.

Pyongyang’s newfound interest in jump-starting nuclear energy projects is evidenced by the shift in priorities in the specialization of North Korean science interns sent to study in Russia. Before the late 2000s, mostly physicists came to Russia, including those who demonstrated an interest in nuclear physics. Since the early 2010s, however, they have increasingly been replaced by specialists in the fields of heat engineering, cooling systems and other areas of energy engineering required to build and operate nuclear power reactors. It should be noted that Russia has always maintained due diligence when admitting North Korean interns to science and technical departments in order to prevent any leaks of dual-use technology. Controls have been considerably strengthened since 2016, after the adoption of UN Security Council resolutions that made studies and research in proliferation-sensitive areas off limits to DPRK citizens.

Soviet Legacy

As is well known, North Korea’s nuclear energy program has Soviet origins. In 1956, an agreement was concluded with the Soviet Union on the DPRK’s membership in the international nuclear research center at Dubna. In 1964, a research center was established at Yongbyon, where a year later the Soviet-designed IRT-2000 research reactor, with a capacity of 2 megawatts, was built. This is a pool-type reactor in which the active zone is submerged in an open pool filled with light or heavy water. The water serves simultaneously as moderator, neutron reflector, coolant and biological shield. This design is easy to use, since all elements of the core and the first cooling circuit are at atmospheric pressure. It also provides easy access to the reactor parts. The nuclear fuel is a dispersed composition—uranium dioxide particles distributed in an aluminum matrix. The reactor makes it possible to achieve a neutron flux density in the active zone of 10¹³ cm⁻²·s⁻¹. Those technical capabilities were the starting point for the establishment of a nuclear power industry in North Korea.

What kind of reactors can North Korea build for its nuclear power plants with the available technical and intellectual capabilities? Most probably it could build reactors similar to the Soviet RBMK-1000, a variety of light water graphite reactor—the same type as the reactor that exploded at Chernobyl. Such reactors are channel-type, heterogeneous, uranium-graphite designs (graphite and/or water serve as the neutron moderators) operating on thermal neutrons; they use boiling water as a coolant in a single-circuit scheme and can produce saturated steam at a pressure of about 70 kgf/cm². The creation of such an open-frame reactor is much easier for the DPRK compared to reactors using high-strength, large-sized casings that require significant industrial production capacity. In addition, a casing puts limitations on overall dimensions. Absence of a casing allows for greater generating capacity of the power unit. Moreover, in an open-frame design, the fuel assemblies can be replaced without shutting down the reactor, which increases the reactor’s capacity factor. However, such a simple design, together with the desire to crank up more power to generate electricity, inevitably leads to an increased risk of disasters associated with human errors as well as imperfect operation and protection systems. The most tragic example is the accident at the Chernobyl nuclear power plant in 1986, when the plant’s staff, running the reactor in an experimental mode, made gross errors while manipulating the reactor’s graphite neutron moderators.

High Risks of a Reactor Accident

The concern is that the North Koreans may attempt to launch nuclear power plants with substandard and poorly tested reactors. Doing so would be in keeping with the North Korean tradition of sacrificing safety standards to accelerate construction of high-priority industrial facilities. Soviet technical specialists who assisted the DPRK in the 1960s repeatedly noted the North Koreans’ willingness to cut corners on safety standards for the sake of construction speed.

Based on the political and economic realities of the DPRK, one can assume with a high degree of certainty that the North Koreans will try to wring out the maximum capacity from their nuclear reactors. However, even slight movements beyond safe operating parameters of an RBMK-type reactor can cause rapid irreversible consequences, namely destruction of the fuel assembly, deformation of the core’s graphite masonry, and injection of a significant amount of radioactive substances from the destroyed fuel assembly into the reactor space. If this happens, a brief increase in pressure may occur in a section of the gas circuit, which will result in large volumes of water flooding the reactor space. Its instantaneous evaporation will cause a sharp increase in pressure in the reactor space, which in turn will lead to extrusion of the reactor’s hydraulic locks and, as a consequence, release of the radioactive vapor-gas mixture from the reactor space into the atmosphere. In other words, rapidly increased pressure inside the construction will destroy it, and the radioactive elements in the vapor mixture will be thrown out. As a result, a huge territory could be contaminated with radioactive substances. Based on the reactor’s power generating capability, and depending on weather conditions, such as strength and direction of the wind, up to 100 million people in North and South Korea, the eastern provinces of China, in the south of Russia’s Far East and on the west coast of Japan could be exposed to mortal danger.

While North Korea’s nuclear weapons program gets most of the international attention, the implications of Pyongyang’s pursuit of civilian nuclear energy should be a concern, too. Indeed, just as already happened with the DPRK’s nuclear and missile program, which—unexpectedly to many—has made great strides in a relatively short time, it may not take very long for North Korea to launch power-generating nuclear reactors whose technical parameters may be far from the best standards of safety. Any search for solutions to the Korean nuclear crisis must take this concern into account as well.

Image: Reuters

How Did America Bring Democracy To Germany? By Bombing It Into Dust

Tue, 25/08/2020 - 13:30

Warfare History Network

History, Europe

The air war contributed significantly to the eventual defeat of the enemy.

Here's What You Need To Remember: The advocates of strategic bombing and carrying the war to the civilian population had argued that these campaigns would bring the Third Reich to its knees without the need for brutal, bloody combat on the ground. They were wrong. By May 1945, the people of Germany may have lost their enthusiasm for Adolf Hitler’s regime and its wars, but they continued to carry on. It was only after the Allied armies with their superior manpower and firepower overran the German forces that surrender came.

Behind the strategy that governed the American air war in Europe during World War II lay events and ideas that dated back to World War I and the 1920s. The first strategic bombing raid in 1915 deployed not airplanes but German Zeppelins, rigid airships that dumped ordnance on the east coast of Great Britain. Two years later Germany’s Gotha bomber, a machine capable of a round trip from Belgian bases, struck at Folkestone, a port through which British soldiers embarked for the front. This raid killed 300 people, including 115 soldiers. The bomber had proven itself as a weapon against a military target.

A few weeks later, 14 Gothas attacked London in the first fixed-wing assault upon civilians and their institutions. The dead and wounded totaled 600, and the raid wrought consternation among the public and the government. The British hastily summoned fighter units to gird the cities. To counter the defensive cordon, the Gothas flew night missions. With primitive navigational tools and no bombsights, the raiders drizzled explosives without any pretense of hitting military or industrial targets. Theoreticians of war now had a new factor to enter into equations: the terror of massive strikes upon workers producing the stuff of war.

Brigadier General Billy Mitchell, who had only earned his wings in 1916, commanded the air force for General John J. Pershing and his American Expeditionary Force in France. Mitchell met Maj. Gen. Hugh M. Trenchard, commander of the Royal Flying Corps, who quickly persuaded the American that the “airplane is an offensive and not a defensive weapon.” Mitchell grasped the possibilities of taking the war behind the lines and plotted a huge raid that would blast German military and industrial targets in the autumn of 1918. A correspondent for the Associated Press wrote, “His navy of the air is to be expanded until no part of Germany is safe from the rain of bombs…. The work of the independent force is bombing munitions works, factories, cities and other important centers behind the German lines…. Eventually Berlin will know what an air raid means, and the whole great project is a direct answer to the German air attacks on helpless and unfortified British, French and Belgian cities.”

World War I ended before Mitchell could demonstrate what his “navy of the air” might achieve, but he continued to expound his ideas. While accepting the need for control of the skies through destruction of the enemy air forces, he said, “It may be … the best strategy to damage and destroy property, and to kill and disable an enemy’s forces and resources at points far removed from the field of battle of either armies or navies.” Implicitly, Mitchell accepted war on civilians.

In 1921 and 1923, Mitchell demonstrated that bombers could sink some anchored warships. The experiments confirmed that aircraft could destroy substantial stationary targets, but admirals scoffed that vessels under way could easily avoid the attacks. The Army dismissed the show as irrelevant for its vision of warfare, which was to slug it out with hostiles while capturing territory.

While Mitchell and Trenchard promulgated their ideas of aerial offensives, a contemporary Italian, General Giulio Douhet, preached that modern war involved the entire society, including “the soldier carrying his rifle, the woman loading shells in a factory, the farmer growing his wheat, the scientists experimenting in the laboratory.” Douhet spoke not only of smashing wartime production but argued, “How could a country go on living and working, oppressed by the nightmare of imminent destruction.” He conceded that such a war without mercy eliminated considerations of morality.

Americans partially bought into Douhet’s theories. They buried the idea of indiscriminate raids that slaughtered nonmilitary people and emphasized hitting industrial production, transportation facilities, and military centers. Promoters of strategic bombing hypothesized that by destroying the goods of war and the will of the people to resist, conflicts could be shortened and the wholesale carnage of the World War I battlefield avoided.

Mitchell’s outspoken demands for an independent air force ended his career, but acolytes like Henry “Hap” Arnold, Carl Spaatz, and Ira Eaker retained positions in the military hierarchy. They successfully promulgated the doctrine of strategic bombing, accurate targeting of enemy installations and facilities. Toward that end, in 1933 the War Department approved a prospectus for a plane capable of traveling at speeds in excess of 200 miles per hour and with a range of more than 2,000 miles. The new bomber could be used to defend either coast, but if deployed overseas it would require bases in England or sites like the Philippine Islands. Boeing produced the first model of the Boeing B-17 Flying Fortress, which roared through the sky at 232 mph during a 2,100-mile trip from Seattle to Dayton. The advocates of air power were delighted, but unfortunately the prototype crashed and burned on a test flight. Instead of ordering 65, the War Department scaled back to a mere 13.

To carry out daylight precision bombing the Army adopted a tool ordered and then discarded by the Navy as unsuitable for dive-bombers: the Norden bombsight. In the crucial seconds over an objective, a bombardier manipulated the device to guide the plane as he lined up the target and then released the explosives.

“I Can Shoot Those Things Down Very Easily.”

Douhet also taught his disciples that heavily armed bombers in mass formations could operate by day against fighter defenses. The publicity on Boeing’s creation hailed the new airplane as a “Flying Fortress,” but it was hardly as impregnable as the name indicated. The first B-17s lacked armor plate to protect the crew, carried only five machine guns, and made no provision for a tail gunner. The B-17 faithful believed that was sufficient since, in their minds, the aircraft could attain altitudes beyond reach of interceptors. In the late 1930s, a hot shot fighter pilot, Lieutenant John Alison, confounded the assurance of promoters of the early B-17 when he convincingly demonstrated he could push his fighter close enough to the weaponless rear of a Flying Fortress and shoot it down. His feat, however, did not immediately persuade the bomber command to install a tail gun. Nobody in the Air Corps was going to listen to a pursuit pilot, a second lieutenant who claimed, “I can shoot those things down very easily.”

The conviction that strategic bombers could operate unmolested by enemy aircraft influenced the development of U.S. fighters. Escorts to protect the big planes would be unnecessary, and the design for a fighter focused upon a machine that would protect the ground forces. Not until the Battle of Britain in 1940 and the appearance of the Messerschmitt Me-109 did the American experts realize that the speed and altitude of an enemy fighter challenged their assumptions about the invulnerability of the B-17. The standard American fighter, the Curtiss P-40 Tomahawk—a good gun platform, while speedy in a dive—had a limited ceiling and rate of climb. It would not be deployed in the European Theater.

Desire for an interceptor with longer range and performance higher in the sky had belatedly resulted in the twin-engine Lockheed P-38 Lightning, and aeronautical engineers returned to their drawing boards to blueprint what would become the Republic P-47 Thunderbolt and the North American P-51 Mustang. At the same time manufacturers modified the big bombers, now including the Consolidated B-24 Liberator, which had slightly more range and bomb weight capacity than the B-17, adding better means to defend themselves: up to 12 .50-caliber machine guns, a tail gunner, and armor for the crew. When the enemy changed tactics during World War II and began head-on attacks, a chin turret would be added to give greater firepower forward.

From the start of World War II, the British intended to follow Trenchard’s maxims on carrying the war to the industrial centers and the population of the foe. In May 1940, when the Royal Air Force attempted daylight strategic raids by its fleet of bombers, German ack-ack and interceptors killed more flyers than the enemy lost on the ground. B-17s purchased from the United States carried out a few daylight missions with dismal results and curdled RAF enthusiasm for the Flying Fortress. American analysts noted that the Brits insisted on operating above 30,000 feet, which overloaded the oxygen systems, froze weapons, and reduced airspeed, making the planes vulnerable to the Me-109s. The RAF B-17 missions relied on an inferior bombsight, and the maximum number of aircraft in formation was a mere four. The British, using their own bombers, switched to nighttime assaults on industrial areas. They made no pretense of discriminating between residential neighborhoods and factories.

The British tried to convince the Air Corps that it too should operate after dark. That would have negated the entire basis for the designs of the B-17 and B-24 and wasted the hundreds of hours spent training bombardiers on the Norden device. Faith in daylight precision bombing thus remained intact as the United States entered World War II. There was no disagreement with the British on the purposes and potentials of air power. Maj. Gen. Ira Eaker, who headed the American bomber command in 1942, said, “After two months spent in understanding British Bomber Command, it is still believed that the original all-out air plan for the destruction of the German war effort by air action alone was feasible and sound and more economical than any other method available.” He did agree that since the resources then available were limited, a ground effort might be required.

The Eighth Air Force opened for business in England in May 1942, but neither B-17s nor B-24s were available. Most of the handful of combat-ready heavyweights had been sent to the Philippines, where Japanese attacks destroyed many of them. As a result, when the Eighth inaugurated its campaign against the Axis powers on July 4, 1942, the mission had little resemblance to strategic bombing. The 15th Bombardment Squadron borrowed 12 A-20 twin-engine Bostons from the RAF, and only half of these were flown by U.S. crews. They struck at four airfields in Holland, flying at an extremely low level before unloading their bombs and strafing the base. They inflicted minor damage, and three were shot down.

Not until some six weeks later did the Eighth launch a true daylight strategic bombing mission. On a beautifully sunlit day, a dozen B-17s from the 97th Bomb Group headed for the Rouen (France) railroad yards. Accompanied by RAF Spitfires, they encountered light flak and no serious interference from German fighters. All returned safely, leaving behind them, said Eaker, head of the Eighth Bomber Command who flew in the lead plane, “a great pall of smoke and sand.” General Henry “Hap” Arnold, the Air Corps commander, declared, “The attack on Rouen again verifies the soundness of our policy of the precision bombing of strategic objectives rather than the mass bombing of large, city size areas.”

The Rouen raid achieved only nuisance value, and euphoria dissipated rapidly as the strategic bombing campaign intensified. Practice revealed substantial holes in theory. The Luftwaffe, manning high-performance Me-109s and Focke Wulf 190s, greeted marauders with a skill and savagery that tore huge holes in the fabric of the Eighth. Losses of from 10 to 20 percent frequently resulted from the deadly combination of flak and fighters. Air Corps analysts calculated that adequate self-defense required a minimum of 300 bombers, a figure difficult to achieve during the first 18 months of U.S. aerial combat in Europe. Nevertheless, the Americans strove to meet their responsibilities for round-the-clock assaults. The RAF, exclusively bombing at night, including an occasional thousand-plane raid, endured heavy losses of air crews to flak and night fighters. Post-mission photo analysis indicated their destruction of industrial works was far from commensurate with the casualties.

By the spring of 1943, the Allied air command realized that there were not sufficient aircraft to hammer day and night the entire war industry of occupied Europe and Germany with precision. The RAF had no accuracy with its night raids, and the scattershot approach of the Eighth Bomber Command did not cripple production. Eaker, as commander of the Eighth’s Bomber Command, proposed a “Combined Bomber Offensive,” suggesting “it was better to cause a high degree of destruction in a few really essential industries than a small degree of destruction in many industries.”

Toward this end, the U.S. bomber command mounted an attack on Romania’s Ploesti oil fields and refineries using B-24s flying from fields in North Africa. To avoid detection and increase accuracy, the participants in Operation Tidal Wave flew at low level. The raiders inflicted modest harm. Ploesti had been functioning well below capacity, and it was a simple matter for production to recoup. The Tidal Wave bombers suffered horrendous losses: more than 300 killed, hundreds wounded or captured, and 79 interned in Turkey. Just 33 of the 178 Liberators involved could be listed as fit for duty after the mission. No one could seriously propose any further low-level daylight attacks for the heavyweights.

Because of the Luftwaffe’s success and with an eye to controlling the air when the time arrived for an invasion of Europe, in June 1943 the Allied high command created Operation Pointblank, announcing, “It has become essential to check the growth and to reduce the strength of the day and night fighter forces which the enemy can concentrate against us … first priority in the operation of British and American bombers … shall be accorded to the attack on German fighter forces and the industry upon which they depend.” German airbases and factories producing planes and essentials like ball bearings drew priority.

General LeMay Calculated That Flak Gunners Needed to Fire 372 Rounds to Guarantee a Hit on a B-17

The enemy met Pointblank with ferocity. In August 1943 a maximum effort that put up 300 bombers to strike Schweinfurt and Regensburg cost the Americans 60 planes—600 crewmen—shot down, with many additional aircraft badly damaged. A basic problem lay in vulnerability to interceptors. Spitfires, with a range limited to 100 miles beyond the British coast, could not provide protection on longer missions. Bereft of escorts, the 10 or 12 .50-caliber machine guns of the B-17s and B-24s were not enough to fend off the swarms of Me-109s and FW-190s. Another weakness centered on use of the Norden bombsight, which required a clear view of the target and a steady hand. In the cloudless, peaceful skies over Texas, it might have been possible for a skilled operator, as the boast of the times went, to put a bomb in a pickle barrel. But frequently over Europe heavy rain or snow, thick overcasts, and buffeting winds obscured targets or misled bombardiers. Even when the target was clearly visible, torrents of antiaircraft shells intimidated the men at the toggle switches and those in the cockpits. Bombs fell away prematurely, or the plane suddenly veered off because the pilot seized the controls and yanked the ship out of imminent danger.

General Curtis LeMay, commander of the 305th Bomb Group, after analyzing the photo reconnaissance intelligence, decided a prime culprit for poor accuracy was failure to maintain a steady course. He decreed that none of his pilots could take evasive action over a target. He had calculated that flak gunners needed to fire 372 rounds to guarantee a hit on a B-17 in level flight. Whether his arithmetic was correct or not, LeMay sought to persuade his subordinates by announcing he would fly the lead aircraft on missions. In fact, the first ship over a target had a better chance for survival because antiaircraft personnel adjusted for range and speed as a flight passed. Disciplined behavior, however, added effectiveness, although no one could compensate for weather that obscured a target.

LeMay also innovated a better defense for the big bombers. He insisted upon a tight, stepped-box formation that enabled gunners to provide mutual assistance. A bomb group could bring to bear from 200 to 600 machine guns on an attacker. His demands for more training by navigators and bombardiers and closed-in formations earned him his nickname of “Iron Ass,” but the 305th proved his point with more effective results and fewer losses.

In contrast to LeMay’s carefully worked out stepped-box formations, inexperienced Brig. Gen. Nathan Bedford Forrest, newly installed as an air division commander, ordered the 95th Bomb Group to use a flattened formation, with aircraft wingtip to wingtip. Over Kiel, the Luftwaffe feasted upon the 95th, shooting down eight, including the bomber carrying Forrest. The disaster proved the efficacy of LeMay’s design.

Pointblank failed to halt aircraft manufacture because the enemy had decentralized its critical industries. In particular, the Third Reich had arranged to import vital ball bearings from neutral Sweden. To inflict lasting damage required repeated raids on factories, and in 1943 the Air Corps lacked enough planes and crews. It was also during Pointblank that the initial desire to minimize civilian casualties by concentrating on military installations, manufacturing plants, and transportation hubs began to give way. Bomber command directed the crews on a mission against the rail junction at Munster to unload on the city center, hitting the town’s workers. According to one historian, the attack upon civilians “did not produce any moral qualms among the airmen; some cheered … their own sufferings had bred bitterness.”

Throughout the last months of 1943, the U.S. bomber campaign staggered from the continued onslaughts of German fighters and the increasingly effective flak aided by improved German radar systems. Losses continued to soar above a prohibitive 10 percent. P-38 Lightnings and P-47 Thunderbolts with American pilots had replaced the Spitfires, but without drop tanks they could only venture as far as the Rhine River, leaving the big fellows exposed to the depredations of interceptors. At the beginning of 1944, newly arrived P-51 Mustangs equipped with Rolls Royce engines debuted and quickly won recognition as the best fighter in the theater.

The tide turned with Big Week, starting February 20, 1944. The occasion introduced new features to the American effort. Lt. Gen. Jimmy Doolittle had replaced Eaker at the helm of the Eighth Air Force. Doolittle modified his predecessor’s policy that no fighters, the Little Friends, could ever leave the bombers to chase the enemy. Doolittle ordered the P-38s, P-47s, and P-51s to attack the Luftwaffe on sight, provided they left some guardians to screen the Big Friends. The open season on German fighters was a product of an abundance of fighter squadrons and the development of fuel drop tanks that gave the Lightnings, Thunderbolts, and Mustangs hundreds of additional miles of flight distance. The Germans, in spite of the raids upon their industrial areas, were able to replace downed aircraft, but the predatory tactics of the Americans slashed the number of skilled, experienced pilots dueling with the Allied air arm. As the war progressed into 1944, the reservoir of capable German airmen suffered severe attrition.

For the first day of Big Week, the Eighth Air Force, working with British Bomber Command and the U.S. Fifteenth Air Force operating from Italy, dispatched 880 Fortresses and Liberators along with 835 fighters deep inside Germany. The Eighth alone claimed 115 enemy fighters shot down. That may have been an exaggeration, but the ability to launch similar massive raids six times within seven days surely knocked the Luftwaffe back on its heels. The incessant battering of German cities and their people forced the Third Reich to withdraw some fighter squadrons from the Eastern Front and bring others back from France and the Low Countries to protect the home front. Similarly, antiaircraft batteries were redeployed from both fronts.

With Operation Overlord, the invasion of Normandy, now planned for June, Allied strategy switched from attacks deep inside Germany and began to work over the defensive infrastructure along the French coast: rail lines, roads, bridges, tunnels, viaducts, and the communications network. While the bombers struck at German airfields and marshalling yards, the P-47, equipped with bombs, proved to be a great train buster and general interdiction tool. Gradually, many of the Thunderbolt squadrons transferred to the Ninth Air Force, which was more of a tactical support weapon, while the Eighth added P-51s to its rosters.

On D-day, a thick, nearly impenetrable overcast of more than 10,000 Allied bombers and fighters hovered over the English Channel and the Normandy shoreline. The pre-D-day campaign had produced sensational results. At most two or three German planes dared to appear as the invaders struggled ashore. The Luftwaffe preferred to husband its assets rather than risk confrontation with the canopy of American bombers and fighters, along with numerous planes from the RAF.

Fighter pilot Martin Low, in a P-38, said, “From June 6th until about 10 days later, we flew three missions a day, bombed and strafed anything that moved within 50 to a hundred miles of the coast, mostly trains.” The air attacks sharply curtailed movement of German reinforcements to help defend the beaches. Patrols like that mentioned by Low menaced daylight convoys or trains. One serious failure of the Allied air effort was the inability to destroy the blockhouses and emplacements that guarded the shores. Fearful of dropping bombs on friendly forces, planners confined the landing-area missions to drops a few miles beyond the beaches, and most of the bombs exploded harmlessly in empty fields.

Lt. Gen. Fritz Bayerlein Reported That 70% of His Soldiers Were “Either Dead, Wounded, Crazed or Dazed.”

Six weeks later, as the U.S. Third Army prepared to break out of Normandy toward the end of July 1944, it was exposed to the perils of high-altitude bombing aimed at tactical situations. Poor visibility prevented two-thirds of the scheduled 900 bombers from even reaching the target near St. Lo, but 343 B-17s and B-24s unloaded on a poorly defined zone outlined by ground forces commander General Omar Bradley. Many of the bombs fell in no-man’s land between the opposing armies, but some exploded among GIs, killing 25 and wounding more than 60. A subsequent 1,500-plane attack that included fighter-bombers from the Ninth Air Force and the dreadnoughts of the Eighth devastated the Germans. Lt. Gen. Fritz Bayerlein of the Panzer Lehr Division said, “Artillery positions were wiped out, tanks overturned and buried, infantry positions flattened and all roads and tracks destroyed… The shock effect on the troops was indescribable. Some of my men went mad and rushed around in the open until they were cut down by splinters.” He reported 70 percent of his soldiers “either dead, wounded, crazed or dazed.” The aerial assault opened the gates for the St. Lo breakout.

Campaigns like Overlord and the St. Lo mission mandated detours from the strategic bombing program. Similarly, when the first V-1 pilotless bombs struck London in June 1944, the British demanded immediate attacks to neutralize the launch installations near the French coast. Starting June 16, bombers with fighter escorts in what were called Noball raids hit several of the V-1 emplacements. By September, the advances into Normandy by the invaders eliminated the V-1 bases, but the more deadly V-2, a rocket-propelled explosive with a primitive guidance system fired from territory still held by the Third Reich, killed more than 9,000 Londoners. The Allies attempted to erase the source of the rockets with assaults on Peenemunde, where German rocket engineers using slave labor developed and then deployed the V-2.

During the run-up to D-day, there had been one exception to the concentration on the defenses against the Allied invasion. After much debate about priorities for bombing campaigns, the British and Americans agreed to target oil, which was critical to the enemy war effort. Although the initial strike at Ploesti cost the Air Corps dearly, Allied planes pounded the Romanian fields and refineries regularly, reducing the flow to Germany. After the Soviet Red Army invaded Romania, driving the country to change sides and shut off the spigot, the Third Reich relied on its reserves and production of synthetics.

Starting in May 1944, the Americans struck at depots and manufacturing sites for ersatz oil 127 times, while the RAF mounted 53 raids. Acutely aware of the threat to their lifeline, the Germans massed antiaircraft around the synthetic fuel installations. Tail gunner Eddie Picardo recalled one mission: “The flak was so thick it blotted out the sun. For a full 10 seconds it was like a total eclipse.” The ship next to his disappeared in a bright flash of fire. His plane returned home with basketball-sized holes in the fuselage.

On a single day in October 1944, during the missions to Politz, Ruhland, Bohlen, and Rothensee, the Eighth Air Force counted 40 planes shot down, only 3 percent of the more than 1,400 on the raid, but still more than 358 air crew missing in action. Furthermore, 700 bombers reported damage. However, the campaign against synthetic petroleum paid off. The amount available to the Wehrmacht and the Luftwaffe fell to half the total needed. The Me-262 jet fighters were towed to runways by cows, recruits received ever fewer hours of flight instruction, and artillery literally depended on horsepower to move.

As the strategic bombing campaign resumed in earnest after D-day, the Allied air forces attempted to extend their reach farther east. Diplomatic negotiations resulted in an agreement that U.S. bombers flying out of England could blast the most distant targets of the Third Reich and then continue several hundred miles to land at Soviet airbases. Refueled and rearmed, they could hit enemy installations on the way back to their home fields. The Soviets welcomed planes and crews. Unfortunately, their hospitality did not include the right for P-51s to fly protective patrols while the hosts threw a lavish banquet for their guests. A German reconnaissance plane discovered the sleek bombers sitting on the ground. A subsequent raid wrecked nearly 70 aircraft. The shuttle program fizzled out after a few more operations.

“If You Saw London Like I Saw It, You Wouldn’t Have Any Remorse. I Don’t Know Anyone Who Was Remorseful.”

With streams of bombers blasting targets even as far as Poland, there was talk of using the aircraft to halt the genocidal program at the Auschwitz concentration camp, either by targeting the buildings there or the rail lines that hauled the condemned to the gas chambers. The U.S. War Department opposed any diversion for that purpose as weakening “decisive operations elsewhere.” It was suggested that surely a handful of aircraft could have been spared from the thousand plane raids, but a detour that split off a few bombers would have denied them the protection of the massive formations. Furthermore, high-altitude strikes often missed small objectives like a rail line or bridge and the enemy could repair smashed tracks rather quickly. In any event, there was little political or military desire to attack the murder camps.

While the British openly wreaked havoc on civilians, the United States claimed it restricted its bombing to war facilities. That may have been a guiding principle, but invariably American bombers killed or maimed noncombatants. In the turbulence of flak and enemy fighters, with targets obscured by weather, and due to navigation errors, ordnance frequently exploded well off the mark. A miss by only 500 yards could plant a bomb in a residential area, and there were instances in which the drop struck miles from the objective. Toward the end of the war, the U.S. air command accepted the RAF policy and struck Berlin and Dresden without any firm strategic goal.

Few airmen cringed at the indiscriminate use of air power. Dave Nagel, an engineer and gunner with the 305th Bomb Group, said, “If you saw London like I saw it, you wouldn’t have any remorse. I don’t know anyone who was remorseful. We didn’t know whether an area was populated or not. We were supposed to be over a target, normally a factory, when we let the bombs go, but we assumed it was surrounded by civilians.”

Curtis LeMay, who departed Europe to direct the devastation from the air upon Japan, said, “As to worrying about the morality of what we were doing, nuts! I was a soldier, soldiers fight. If we made it through the day without exterminating too many of our own people, we thought we’d had a pretty good day.”

The advocates of strategic bombing and carrying the war to the civilian population had argued that these campaigns would bring the Third Reich to its knees without the need for brutal, bloody combat on the ground. They were wrong. By May 1945, the people of Germany may have lost their enthusiasm for Adolf Hitler’s regime and its wars, but they continued to carry on. It was only after the Allied armies with their superior manpower and firepower overran the German forces that surrender came.

A post-V-E Day survey estimated that Germany lost less than 4 percent of its productive capacity, and even a devastated city like Hamburg recuperated to 80 percent of its output within a few weeks. That said, the air war contributed significantly to the eventual defeat of the enemy. Foremost, the raids on fuel depots and synthetic plants curtailed the Luftwaffe’s ability to train pilots and deploy their new jet fighters in sufficient numbers. The Allied ground forces operated without interference from the air. The attacks on fuel sources destroyed the vaunted mobility of German armor, and the battering of rail and road nets strangled supply lines. To be sure, the successes of the Soviet forces on the Eastern Front played a major role in weakening the ability to resist, but at the same time, the armies on the Western Front could not have advanced as swiftly without the strategic bombing campaigns.

Acclaimed author Gerald Astor has written numerous books on the topic of World War II, including Voices of D-Day and The Mighty Eighth. He resides in Scarsdale, New York.

This article originally appeared on Warfare History Network in 2018.

Image: Wikipedia.

Fact: More Than Sixty Countries Rely on This One Sniper Rifle

Tue, 25/08/2020 - 13:15

Kyle Mizokami

Security

Meet the Barrett M82.

Key Point: The utility of the Barrett M82 sniper rifle has been repeatedly proven over numerous conflicts.

One weapon system not only revolutionized the field of military sniping but also created an entirely new category of weapons. By adapting an existing large-caliber cartridge to a precision rifle platform, the innovative Barrett M82 practically invented the class of large-caliber rifles that equips military snipers worldwide to this day.

In 1982, Ronnie Barrett was a professional photographer taking photos of a military patrol boat on Tennessee’s Stones River. The patrol boat was armed with two M2 .50 caliber heavy machine gun mounts. Barrett was intrigued by the guns and wondered if a rifle could be designed to fire the .50 BMG bullet.

With no firearms design experience or training, Barrett hand drew a design for a .50 caliber rifle. Barrett drew the rifle in three dimensions, to show how it should function, and then took his design to local machinists. Nobody was interested in helping him, believing that if a .50 caliber rifle was useful someone would have developed one by then. Barrett finally found one sympathetic machinist, Bob Mitchell, and the two set to work. Less than four months later, they had a prototype rifle.

Barrett’s first rifle, the Barrett .50 BMG, was completed in 1982. It was a shoulder-fired, semi-automatic rifle designed around the .50 BMG cartridge. Uniquely among firearms, the Barrett rifle’s barrel recoiled backward after firing. A rotating-lock breech block equipped with an accelerator arm used part of the recoil energy to push back the block on firing. This cycled the action, cocked the firing pin, and loaded a new round from a ten-round steel magazine.

The result was a weapon that would otherwise generate enough recoil to make repeated firing uncomfortable, but the use of recoil energy to cycle the action, together with the weapon’s weight, reduced felt recoil. A double-baffle muzzle brake that vented exhaust gases to the left and right was added later and reduced recoil further.

Barrett built thirty initial production rifles and placed an ad in The Shotgun News. The initial order quickly sold out and Barrett increased production. The Central Intelligence Agency saw the ad and placed an order for rifles to equip the Mujahideen guerrillas who were fighting the Soviet Army in Afghanistan. The CIA saw the Barrett rifle as the ideal weapon for engaging the Soviets from long range. The Barrett’s ability to destroy enemy war materiel such as communications equipment, vehicles, weapons and other items with the heavy .50 BMG round created a new category of weapon—the anti-materiel rifle.

The Barrett M82 was fifty-seven inches long, had a twenty-nine-inch barrel, and weighed 28.44 pounds. The M82 delivered previously unheard-of levels of energy and range in a sniper rifle. The M33 .50 BMG bullet weighed 661 grains, or 1.5 ounces, compared to the fifty-five grains of the 5.56mm ammunition used in M16-type rifles. The M33 round had a muzzle velocity of 2,750 feet per second and delivered an amazing 11,169 foot-pounds of energy, compared to just 1,330 foot-pounds for the 5.56mm round. The Barrett round was so powerful it still retained 1,300 foot-pounds of energy after traveling 2,000 yards downrange. At a distance of 1.4 miles, the M33 round still packs 1,000 foot-pounds of energy—more than three times the power of a 9mm pistol bullet.
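
Those energy figures can be roughly checked with the standard small-arms formula: muzzle energy in foot-pounds is bullet weight (in grains) times velocity squared (in feet per second), divided by twice the gravitational constant times the 7,000 grains in a pound. Here is a minimal sketch in Python; note that the 5.56mm muzzle velocity used for comparison is an assumed figure, not one given in the article:

    def muzzle_energy_ftlb(weight_grains, velocity_fps):
        # E = m * v^2 / 2, with mass converted from grains to slugs
        # (7,000 grains per pound, g = 32.174 ft/s^2)
        return weight_grains * velocity_fps ** 2 / (2 * 7000 * 32.174)

    # .50 BMG M33 ball: 661 grains at 2,750 ft/s
    print(round(muzzle_energy_ftlb(661, 2750)))  # ~11,100 ft-lb, close to the 11,169 cited
    # 55-grain 5.56mm at an assumed ~3,100 ft/s, for comparison
    print(round(muzzle_energy_ftlb(55, 3100)))   # ~1,170 ft-lb, in the ballpark of the 1,330 cited

The small differences from the article’s figures come down to rounding and the exact velocities assumed.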

The Barrett M82A1/M33 combination could also hit at very long ranges. While the M16 series of rifles had a maximum effective range of approximately 600 yards, the Barrett could reach out to 1,500 yards or more, and the company warns new owners that stray bullets can travel up to five miles. Properly trained shooters can push the round out to 2,000 yards or more but must contend with a considerable amount of bullet drop due to the effect of gravity on a slowing bullet.

In 1989, the Swedish Army placed the first military order for the Barrett Model M82A1, ordering 100 rifles. In 1990, the U.S. Marine Corps placed an order for 125 M82A1s, and the rifles participated in Operation Desert Storm, the campaign to liberate Kuwait. The Marines bought 400 more rifles in the 1990s, and the U.S. Army finally came on board and purchased the rifle as the M107 in 2002. The utility of the heavy-caliber sniper rifle, which, as Ronnie Barrett once pointed out, can disable a multi-million-dollar jet on the ground with a two-dollar bullet, has been repeatedly proven over numerous conflicts, including the wars in Iraq and Afghanistan and the fight against the Islamic State.

Today the Barrett M82A1 is used by more than sixty countries, mostly NATO countries and U.S. allies in Asia and the Middle East. All the major military powers field their own 12.7mm/.50 caliber-class sniper rifles, with Russia’s OSV-96 rifle serving with the Russian Ground Forces and China’s Zijiang M99 serving with the People’s Liberation Army. The Barrett M82A1, the rifle nobody wanted to build, ended up starting a revolution.

Kyle Mizokami is a defense and national security writer based in San Francisco who has appeared in the Diplomat, Foreign Policy, War is Boring and the Daily Beast. In 2009 he cofounded the defense and security blog Japan Security Watch. You can follow him on Twitter: @KyleMizokami.

This article first appeared in 2018 and is reprinted here due to reader interest.

Image: Wikimedia Commons

The Future of the U.S. Marine Corps: Narcosubmarines

Tue, 25/08/2020 - 13:00

Caleb Larson

Security, Americas

A new paper by three officers suggests that low-profile, semi-submersible vessels like those that traffic illicit drugs into the United States could provide an inexpensive, low-risk solution to keeping groups of Marines fed and supplied during a war in the Western Pacific.

The Marine Corps is changing. In a future high-end fight in the Pacific, teams of Marines could be flung far and wide across the region, operating relatively autonomously. These groups of Marines, stationed on islands or small boats hundreds or even thousands of miles away from Navy ships and deep in enemy territory, would have to battle not only the enemy but also the tyranny of distance in keeping themselves supplied. But how?

One unlikely source of inspiration: narcotraffickers.

Homemade Submarines

This was the advice of three officers in a piece recently published on the foreign policy and commentary website War on the Rocks. In the piece, the three officers argue that in order to keep groups of scattered Marines supplied with food, fuel, and ammunition, the Corps needs to ditch ships and aircraft and try to “mimic drug traffickers.”

Specifically, the Marine Corps could do well to study the narco-submarine. “Manned, semi-submersible, low-profile vessels, also known as narco-submarines, have profitably solved covert logistics across the maritime tyranny of distance. These air-breathing vessels evade detection by staying almost entirely underwater, trading speed for semi-submerged invisibility.”

They argue that a vessel optimized for long-range transport of illicit substances could easily be adapted into an unmanned system for transporting tons of critical supplies for the Marine Corps.

The authors also suggested that the Navy’s current fleet of shipping vessels is dangerously exposed when close to shore, especially in hotly contested environments. The ships are “too few, too visible, and therefore too vulnerable.”

The authors cite the Drug Enforcement Administration, which says that about a third of the U.S.-bound cocaine travels on semi-submersibles designed and built in remote jungle locations. The homemade vessels are quite adept at staying hidden too.

“Drug traffickers have evolved low-profile vessels to be incredibly difficult to detect without specialized equipment. A surface vessel has about a 5 percent chance of detecting a low-profile vessel at sea without an embarked helicopter or support from shore-based aviation. Consequently, very few interdictions come from stumbling across a low-profile vessel on patrol. Only 10 to 15 percent of low-profile vessels are intercepted at all, meaning that known trafficking activity represents just the tip of the iceberg.”
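
To put those detection odds in perspective, a quick back-of-the-envelope calculation helps. Treating each patrol encounter as an independent 5 percent detection chance is an assumption of this sketch, not something the authors state, but it shows how quickly the odds favor the smuggler:

    def evasion_probability(p_detect, encounters):
        # Probability a low-profile vessel slips past every patrol encounter,
        # assuming each encounter is an independent detection attempt
        return (1 - p_detect) ** encounters

    for n in (1, 3, 5):
        print(n, round(evasion_probability(0.05, n), 2))
    # 1 -> 0.95, 3 -> 0.86, 5 -> 0.77: even several encounters leave most transits undetected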

The authors envision a craft that combines the semi-submersible narcosubmarine’s wave-piercing bow with a Higgins Boat-style offloading ramp for quick and easy cargo unloading, all in one low-cost, nearly expendable platform.

Postscript

Admittedly, a narcosubmarine-like low-profile vessel would not fulfill all of the Marine Corps’ logistical needs in the Western Pacific, and it is limited by its modest payload compared to other naval supply ships. Still, such vessels would be a cheap, low-risk solution that could easily be put into mass production, the very epitome of the Marine Corps commandant’s call for “low signature, affordable, and risk-worthy platforms.”

Caleb Larson is a defense writer with the National Interest. He holds a Master of Public Policy and covers U.S. and Russian security, European defense issues, and German politics and culture.

Image: Reuters

America's Middle Class Is Dying. Here's Why That's a Good Thing.

Tue, 25/08/2020 - 12:44

Scott Lincicome

Politics, Americas

It turns out, people just keep getting richer.

A frequent refrain among Washington populists (and in omnipresent political advertising) these days is that decades of global capitalism have “hollowed out” the center of the American workforce, leaving two predominant classes in the United States: the poor and the super-rich (aka “the 1%”). The alleged “polarization” of the American workforce drives domestic and international economic policies on the left and the right, all of which seek to “restore” the middle class to its long-past glory (and often blame the “rich” for deliberately causing the current predicament).

New research, however, calls this narrative into question by examining individuals over time instead of the prevailing methods of analyzing certain groups – most notably income classes or occupations – during certain periods. These “real world” analyses reveal that, while the American middle class is indeed shrinking, this trend has been caused less by “polarization” (i.e., Americans moving both up and down the economic ladder) and more by Americans simply getting richer.

First, in a new Brookings Institution analysis, George Washington University’s Stephen Rose conducted a “longitudinal” analysis of the same people over time using the popular Panel Study of Income Dynamics (PSID) dataset. These data, Rose tells the Washington Post’s Robert Samuelson, “provide a picture of what is really happening to people because they have data on each specific person for many years.” In particular, Rose examined individuals aged 25 to 44 during two different 15-year periods: 1967–1981 and 2002–2016. To standardize the data, he converted all individuals into “family units” of three and adjusted for inflation (using the Bureau of Economic Analysis’ Personal Consumption Expenditure price deflator). The results, which the Post helpfully summarized in a table, run counter to the prevailing “polarization” narrative.
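
To make that standardization step concrete, here is a minimal sketch of how one might size-adjust and deflate household income before assigning people to income classes. The square-root equivalence scale and the sample numbers are illustrative assumptions on my part; Rose’s exact adjustment is not spelled out in the piece:

    import math

    def family_of_three_equivalent(income, household_size):
        # Size-adjust income to a three-person-household equivalent using a
        # square-root equivalence scale (an illustrative choice, not Rose's stated method)
        return income * math.sqrt(3) / math.sqrt(household_size)

    def real_income(nominal, pce_index, base_index=100.0):
        # Deflate nominal income with the BEA PCE price index
        return nominal * base_index / pce_index

    # Hypothetical example: a four-person household earning $60,000 in a year
    # when the PCE index stood at 112 relative to a base of 100
    adjusted = family_of_three_equivalent(real_income(60_000, 112.0), 4)
    print(round(adjusted))  # the figure that would be compared against class cutoffs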

As the table makes clear, the predominant trend during and across the two periods – although certainly less so during the second one – is fewer poor and lower/middle-class Americans and more upper-middle-class and rich Americans. Rose thus draws two important conclusions from these data:

First, while the benefits of economic growth have not accrued equally, they have not gone solely to the top 1%. The upper middle class has grown. Second, the main reason for the shrinking of the middle class (defined in absolute terms) is the increase in the number of people with higher incomes.

He also finds another bit of good news: “many more Black people are in higher income classes” during the latter period (though there is still more work to do in that regard). Samuelson helpfully adds in his Post piece that these figures likely understate individuals’ actual gains, because the income figures do not include government transfers (which, as this CBO analysis demonstrates, substantially improve poor and middle-class income gains over time) or non-wage employer compensation like healthcare premium contributions. (I’d also note that the latter period not only featured the 2008–09 Great Recession and the 2015–16 “mini-recession” in U.S. manufacturing, but could show even more improvement if it ended last year instead of 2016 and thus captured the strong gains for middle-class workers in 2018–19.)

Second, a 2019 paper from Jennifer Hunt and Ryan Nunn provides similar conclusions when examining wages at the individual level (rather than occupation-average wages) over time. They explain that “[w]hile occupations may provide reliable information about tasks and the nature of work at a point in time, average occupation wages are in general not appropriate proxies for individual wages, nor is the distribution of occupations by average wage very informative about the distribution of workers’ wages” (and thus wage inequality and polarization), in large part because wages can vary dramatically within occupations (even those classified as “low wage”).

Instead, they assign workers to real hourly wage “bins,” ignoring specific occupations and their average wages, and then track annual changes in the shares of workers in each bin between 1973 and 2018, again adjusting for inflation. The groups examined here are different from those used by Rose – the highest-wage bin starts at $35.08 (equivalent to about $70,000 per year), and these individuals are not converted to 3-person “family equivalents” – but the findings are nevertheless similar: low- and middle-wage jobs are slowly declining while higher-wage jobs are increasing more quickly.

These general (and positive) long-term trends do not change when adjusting for worker hours or business cycles, or when increasing the number of wage bins from four to ten. In summarizing their research at VoxEU, the authors conclude: “The share of workers earning middle wages has declined for the last 50 years…. But in a departure from employment polarisation, the figure does not show simultaneous increases in the top and bottom group shares…. [S]hares of workers earning bottom and top wages have generally moved in opposite directions, and do not rise together in the polarising fashion that would provide a link to rising wage inequality.” They subsequently find that women benefited most from these trends, though their analysis of male wages still shows a long-run increase in high-wage jobs and no clear “polarization” trend.

These “real world” analyses effectively counter the “polarization” narrative that we hear so frequently from politicians, pundits and the media. They (and some other recent assessments, including one on this very blog) show that America’s middle class is indeed getting smaller, but it is primarily because Americans are moving up the economic ladder.

This good news, of course, does not mean that all is well in U.S. labor markets or with the American middle class. The studies show, for example, less significant and less stable gains for male workers (Hunt/Nunn) or for all workers in more recent periods (Rose), as well as the increasing importance of education for upward mobility in the United States (both). And, while each study accounts for inflation (including housing costs), they do not address important cost-based challenges that are specific to certain American workers (e.g., higher education costs) or to important U.S. labor markets (e.g., coastal “megacities” like New York). Nevertheless, those very real challenges are very different, and more nuanced, than the common narrative peddled by populists like Elizabeth Warren and Josh Hawley about nefarious elites and multinational corporations “hollowing out” the American middle class. These challenges also require different policy responses (many at the state and local level), such as improving educational outcomes, eliminating occupational licensing or other regulations that stifle mobility, or lowering the cost of essential goods and services (see, e.g., my colleague Ryan Bourne’s suggestions or my recent presentation on housing costs), instead of punitive federal wealth taxes, wage controls, protectionism, or abandoning capitalism altogether.

U.S. policymakers should therefore focus their attention on these specific challenges and reforms — and maybe save the populist uprising for another day.

This article by Scott Lincicome first appeared at the Cato Institute on August 20, 2020.

Image: Reuters.

How Alexei Navalny Became Putin's Worst Enemy

Tue, 25/08/2020 - 12:33

Regina Smyth

Politics, Eurasia

Navalny’s efforts have captured the imagination of young Russians and demonstrated the effects of generational change.

The harrowing videos of Alexei Navalny, a blogger who has captured popular frustration in Russia, screaming in agony on Aug. 20, 2020 before being removed unconscious from a plane to a waiting ambulance, demonstrate the Kremlin’s increasing reliance on coercion to control dissent.

This attack is not the first Navalny has endured. In 2017, he was doused with a green antiseptic dye that compromised his vision. In 2019, while in jail for organizing protests, he suspected he had been poisoned. Navalny has also been wrongly convicted on charges of financial wrongdoing three times. Although he was released to prevent him from becoming a national martyr, his brother and co-defendant, Oleg, served three-and-a-half years in jail.

Throughout this period, the Kremlin worked to discredit Navalny without making him a martyr.

My book, “Elections, Protest, and Authoritarian Regime Stability: Russia 2008-2020,” reveals the nature of Navalny’s threat to the Kremlin – one strong enough to make credible the claims that he has been poisoned.

Focus on Corruption

When he came onto the national stage in 2010, Navalny brought a new type of opposition to Russian politics. He is in tune with popular concerns and able to find common ground among nationalist and liberal activists. He calls for removing President Vladimir Putin through elections, while articulating a new vision for Russia.

Navalny’s importance is not about popularity. The Kremlin’s arrests and disinformation campaigns have raised enough suspicions among voters that polling shows he would not win a national election, even in the unlikely event of a fair fight.

Instead, Navalny’s challenge to Putin’s regime rests on his innovative ideas and organizing strategies that have made him a force in Russian politics.

He began as a lawyer, challenging the large Russian energy companies by buying stock and thus gaining the right to attend shareholders’ meetings. He used his access to defy corporate leadership and release documents to demonstrate malfeasance.

He established The Anti-Corruption Foundation – now labeled a “Foreign Agent” by the Kremlin – which collected citizens’ reports of corrupt practices. His RosYama project, literally “Russian Hole,” allows citizens to go online to report potholes – a widespread, chronic problem in Russia – and track the government response.

Navalny amplified his anti-corruption fight in 2011, when he labeled Putin’s political party, United Russia, the “Party of Crooks and Thieves”. When these efforts contributed to mass protest against electoral fraud, Navalny was at the fore. Addressing an unprecedented crowd in 2011, he said, “I see enough people here to take the Kremlin and [Government House] right now but we are peaceful people and won’t do that just yet.”

He joined the movement’s Coordination Council and forged ties across the diverse opposition with the goal of reforming Putinism.

His canny use of social media has given thousands of Russians – both old and, especially, young – new insight and ways to protest against their government.

New Model of Opposition

Navalny drew on the resources of these protests – activists, themes, online fundraising strategies and new coalitions – to build an opposition strategy that links elections and a variety of forms of protest. He brought together an impressive team of young activists who challenge the regime at every step of the election process, from party formation to candidate registration and vote counting.

Volunteers go door-to-door or accompany candidates to meet voters on their daily commutes or in apartment courtyards. They build temporary structures, called “cubes,” on busy streets, where they educate voters about policy. Campaign leaders urge activists to share online messages offline with those who do not use the internet.

New Electoral Technologies

When he fell ill, Navalny was campaigning on behalf of a new generation of local candidates.

By demonstrating that Russian elections are little more than performances of the state’s capacity to manufacture votes, the Navalny team reveals the lack of choice and accountability in the system.

In summer 2019, this strategy led to significant protests after the regime barred almost all of the opposition candidates in Moscow’s municipal elections. When the government cracked down on pro-democracy demonstrators, Navalny’s team built a web-based way to identify any candidate who shared its values and urged voters to support that candidate – even if the candidate was in a party that they detested.

Recent work by political scientists Mikhail Turchenko and Grigorii Golosov demonstrates that Navalny’s “Smart Vote” strategy made a real difference in Russia’s 2019 local elections, helping to defeat nearly a third of Putin-aligned candidates in Moscow. Navalny’s team was gearing up to do the same thing in the September 2020 vote.

Social Media Innovation

Navalny’s creative use of new media is not limited to pothole repairs and voting apps. Beginning in 2006, he wrote a popular blog on the LiveJournal social networking service. When the Kremlin shut down his blog in 2012, he reinvented his social media presence.

The Anti-Corruption Foundation produced a short film, “Don’t Call Him Dimon,” that lampooned former President and Prime Minister Dmitry Medvedev by showing his vast sneaker collection and flying a drone over his duck pond. Like ducks, sneakers became symbols of the opposition. The exposé shattered the myth of Medvedev as an honest leader.

The exposés have continued on Navalny’s YouTube channel. His broadcasts have probed Russian intervention in U.S. elections, the Kremlin’s failure to provide COVID-19 relief and rigged Russian elections. These stories challenge the narrative presented in Russian state media, combating the regime’s systematic disinformation campaign.

Inspiring a New Generation

Navalny’s efforts have captured the imagination of young Russians and demonstrated the effects of generational change. Following “Don’t Call Him Dimon,” tens of thousands of young people took to the streets, shocking a country that believed Putin’s opposition was played out. Months later, they flocked to join Navalny’s presidential campaign organization.

Navalny knew the dangers of being the face of opposition to the Putin regime. The day before he fell ill, he joked with young supporters that his death would do more harm to the Kremlin than his activism.

It’s clear that Russians – who have taken to Twitter to urge him to hold on – don’t want to test that hypothesis.

Regina Smyth, Professor, Indiana University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image: Reuters

How America Will Be Able to Defend Itself in Space from Russia and China

Tue, 25/08/2020 - 12:00

Kris Osborn

Security, Space

The U.S. Air Force is taking fast new steps to rev up for warfare beyond the Earth’s atmosphere by networking satellites and prototyping innovative new weapons for use in space, due in large measure to rapid Chinese and Russian advances.

The U.S. Air Force is taking fast new steps to rev up for warfare beyond the Earth’s atmosphere by networking satellites and prototyping innovative new weapons for use in space, due in large measure to rapid Chinese and Russian advances.

“Almost all of our military capabilities depend upon space capabilities, and our adversaries are weaponizing space,” Lieutenant General JT Thompson, commander of the Space and Missile Systems Center, told The Mitchell Institute in a video series. 

Space war technology increasingly involves a wide span of sensors and weapons systems, including satellite-mounted missiles or lasers, cyber hardening, electronic warfare (EW) applications and defenses against anti-satellite weapons (ASAT). 

These are areas, Thompson explained, where China continues to make fast progress.

“China’s strategic support force with their EW and cyber force has begun training specialized units with ASAT weapons,” he said. 

The U.S. strategy, in response, is multifaceted, including ASAT defenses, new missile defense systems, new infrared missile warning sensors, space-based offensive weapons and even the possibility of some kind of space drone. 

Of greatest significance, and closest to becoming a reality, is the Pentagon effort to network satellites to one another and launch additional smaller, more agile, lower-altitude sensors. Moreover, the sensors built into satellites themselves will increasingly perform more independent command and control functions, in some instances helping to discern the difference between an actual ICBM and nearby decoys intended to thwart and confuse defenses. At the moment, most seekers rely upon missile-integrated sensors tasked with guiding various anti-missile weapons called “Kill Vehicles.”

“Gone are the days of every satellite having to have its own unique ground control segment. We have fueled innovation with rapid prototyping and have been making schedule and delivery a key performance parameter,” Thompson explained. 

Networking satellites to one another through hardened connections with both ground terminals and other space-operating satellites can enable U.S. Commanders to establish a more accurate, continuous track on approaching weapons traveling through space. Instead of tracking or handing off threat data from one field of view to another in a separated or stovepiped fashion, meshed networks of satellites can massively improve sensor-to-shooter time for human decision-makers. 

There are several interesting technical elements here, such as current initiatives to integrate ground terminals successfully to each other through large-scale cloud migration with various kinds of hardened transmission systems. 

One element of this includes U.S. efforts to construct and add large constellations of Very Low Earth Orbit Satellites (vLEOs). These assets can move quickly, operate closer to the Earth, and quickly connect data while in space, all while closely tracking ground activity. Not only does this naturally improve space warfighting operations, but it also furthers the Air Force’s “redundancy” strategy, meaning a larger, more dispersed collection of satellites can ensure operations continue in the event that one is destroyed or disabled. 

China is also quickly adding new satellites to space, as evidenced by the recent launch of an upgraded Long March 11A system, described in a May report from China’s People’s Daily Online.

“The Long March 11A will be able to send 1.5 tons of payload to a sun-synchronous orbit at an altitude of 700 kilometers, nearly four times the Long March 11’s capacity,” the paper says. 

The larger payload will allow satellites to carry a wider range of sensors, cameras and long-range data-link equipment, and possibly even to function as a weapons detection or delivery system.

“Two satellites carried by a Long March-2D carrier rocket were launched from the Jiuquan Satellite Launch Center, Northwest China’s Gansu province, May 31, 2020,” the report said.

Kris Osborn is defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a master’s degree in Comparative Literature from Columbia University.

Image: Reuters

How a Horrifying Chemical Weapons Disaster Helped Develop Chemotherapy

Tue, 25/08/2020 - 11:45

Sebastien Roblin

History, Europe

A forgotten tale of World War II.

Key Point: The catastrophe's impact would far outlast the Allied war effort.

On November 26, 1943 the SS John Harvey cruised into the harbor of Bari, Italy. The placid port city of 250,000, which had an old quarter dating back to the Middle Ages, had been seized by British paratroopers several months earlier without a fight. Located in southern Italy near the heel of the Italian boot, it was safely distant from the frontline to the north.

The Harvey’s cargo security officer seemed especially eager to expedite the unloading of his ship, but he could not explain to port authorities why his ship should be given priority over the dozens of others—so instead the Harvey awaited her turn at Pier 29 for five days. By December 2, Harvey was merely one of nine huge 14,000-ton Liberty ships in the harbor.

Indeed, the small port was swarming with more than two dozen Allied ships—one convoy lined up at the quays, unloading vast cargoes of aviation fuel and munitions, the second packed together, awaiting its turn. This vast logistical outpouring was intended to supply the British troops of the Eighth Army, then preparing for a slugging match over the fortress monastery of Monte Cassino, as well as to provide fuel and bombs for the hulking strategic bombers of the Fifteenth Air Force, which would soon begin a wide-ranging aerial bombing campaign over southeastern Europe.

The British did not think Bari was at much risk of attack, and correspondingly deployed no fighters for air defense and only a small number of anti-aircraft guns. The German Luftwaffe had fought intense air battles over the Mediterranean throughout 1943, but by the end of the year its strength seemed to be spent.

In reality, however, German feldmarschall Albert Kesselring had been convinced by Wolfram von Richthofen (cousin of the famed Red Baron from World War I), that the port was a target ripe for the plucking. On December 2, the same day that British air marshal Arthur Coningham announced at a press conference that the air war over Italy had been won, a German twin-engine Me.210 fighter photographed the harbor—the transport ships moored dangerously close to each other in an effort to expedite unloading.

By the late afternoon 105 Ju-88A4 bombers of Kampfgeschwader 54 were flying southeast into the Adriatic Sea—then swerved west to attack Bari from the east. Their approach was preceded by a few aircraft dropping strips of metallic foil to cloud Allied radars.

The harbor remained brilliantly illuminated, as port officials had kept the lights on to expedite the unloading process. This made the rows of densely packed ships the perfect target as the twin-engine bombers swooped down at 7:30 PM and unleashed their 3,000-pound bomb loads and Italian-made Motobomba circling torpedoes.

One of the harbor’s refueling pipelines was severed, pouring petrol into the harbor. Two ships packed full of ammunition were struck by SC250 bombs, causing massive explosions. The fires reached ammunition stored inside the Harvey, causing a titanic eruption that blew the vessel apart in a cloud of flame and sent a massive concussion wave rippling across the harbor, rocking ships at their moorings.

The flames mixed with spilled petrol and aviation fuel, causing a massive wall of fire to course across the harbor, setting additional vessels aflame. A huge cloud of oily smoke swept over the port and drifted into the nearby city, smelling rather oddly of garlic.

The German air attack was over in less than twenty minutes, with only a single bomber shot down by flak. It left the port an inferno. No fewer than five huge Liberty ships had been destroyed, as well as two Canadian Fort ships of equivalent size and more than a dozen smaller transports. The ships that had escaped destruction were damaged. Some 34,000 tons of war materiel had been lost. You can see footage of the aftermath here.

Hundreds of wounded, oil-slicked sailors were left floundering for their lives in the waters of the harbor. A massive rescue effort was launched to recover the mariners, while other vessels towed burning hulks out of the way of surviving ships. Two damaged British Hunt-class destroyers, HMS Bicester and Zetland, managed to wend their way through an obstacle course of burning wrecks to escape, picking up dozens of survivors as they went. British motor torpedo boats cruised around the harbor, picking up survivors from the morass. Once removed from the water, the mariners were wrapped in blankets to treat them for exposure.

The fires on some of the ships raged for weeks. Meanwhile, doctors treating survivors of the attack began to notice strange chemical burns and blistering; many patients complained of lesions in their eyes and swelling blisters in bodily cavities. Many had developed violent coughs as well. Some burn victims seemed to fall ill and die without apparent cause. Hundreds of civilians in Bari poured into Allied hospitals as well—even those not exposed to bomb blasts.

Rescue crews unharmed by the attack began to fall ill. The command crew of HMS Bicester became so afflicted by blindness that none were able to pilot the ship; a replacement crew had to be shuttled in to bring the vessel into port at Taranto.

Responding to suspicions of a German poison gas attack, chemical warfare expert Lt. Col. Stewart Francis Alexander flew down to Bari and examined the survivors’ injuries. He concluded that they corresponded to mustard gas exposure, but almost no one was left alive in Bari who knew about the gas bombs. He also observed that the majority of affected mariners came from ships berthed next to Harvey. He directed an inquiry up the chain of command, which finally uncovered the astounding truth.

The Allies’ Secret Stash

Prior to the attack, Allied intelligence had received word that German chemical weapons had been moved to the Italian theater. Though all major powers in World War II maintained chemical warfare stocks, they mostly refrained from deploying them in battle for fear of retaliatory attacks. Fearing that German policy might change, President Roosevelt secretly authorized the movement of chemical warfare assets to Italy.

This was why the Harvey’s hold contained more than 2,000 bullet-shaped M47A2 bombs pumped full of ‘Agent H’—mustard gas. A thin steel shell, less than a millimeter thick, contained the liquid sulfur mustard in the notoriously leaky bombs. Despite its low lethality rate, mustard gas was one of the most dreaded chemical weapons of World War I. Mere skin contact with the carcinogenic agent would eventually cause large, agonizing blisters full of yellow pus to form on the skin and blinding conjunctivitis in the eyes. If inhaled, it could cause bleeding ulcers to form in the lungs. The gas was so corrosive that the air-dropped M47 bombs had to be coated with a special oil to protect them from being eaten from the inside out.

The secrecy around the munitions meant that even the Harvey’s captain, Eliot Knowles, was not supposed to be in the know. The Harvey’s cargomaster did know about the bombs, but he wasn’t supposed to reveal that to Allied authorities in Bari.

The explosion of the Harvey had vaporized some of the gas, causing it to mix with the clouds of oily smoke pouring over the harbor, which numerous sailors inhaled. The mustard gas had also mixed with the oil slicking through the harbor into a fatal cocktail that adhered to the skin of shipwrecked sailors. The correct treatment for such mustard-gas exposure is to remove clothing and wash off the skin. By wrapping rescued sailors in blankets while they were still wearing their poison-drenched clothing, rescuers had inadvertently super-exposed them. Even before he received confirmation, Alexander ordered medical personnel to treat patients as mustard gas casualties, saving lives.

Altogether, 628 Allied personnel and civilians were treated for exposure to mustard gas, of whom eighty-three did not survive. However, many additional civilians fled the city after the raid and may have succumbed to the poison. In addition to roughly 1,000 dead Allied personnel, it is estimated that at least another 1,000 Italian civilians perished as a result of the raid and the chemical spill.

Winston Churchill fiercely asserted that the incident needed to be covered up. Churchill believed that if the Germans got the impression that an Allied chemical attack was imminent, they might respond with a chemical attack of their own. A Nazi chemical attack would have been deadly indeed, as the Germans had invented nerve gas—an agent many times more lethal than mustard gas. Documents were destroyed, and the incident was hushed up. Still, word somehow reached Axis intelligence, as radio propagandist Axis Sally taunted Allied troops over the airwaves about the chemical spill. Nonetheless, the chemical accident wasn’t declassified until 1959, and did not become widely known until Glen Infield published Disaster at Bari in 1967.

The Bari raid ranks among the most effective air strikes of the war. The harbor was closed for three weeks and not fully operational until February. The attack delayed both the advance of the British Eighth Army and the operations of the Fifteenth Air Force for two entire months. However, the chemical disaster was a result of excessive secrecy and irresponsible management.

Still, the catastrophe did come with one consequence that would far outlast its impact on the Allied war effort. Colonel Alexander had noted that the mustard gas victims exhibited sharply decreased counts of white blood cells, which normally divide very quickly. Alexander kept tissue samples from the victims and wrote in his report that sulfur mustard, or some similar compound, might be effective at killing fast-replicating cancer cells. His idea and findings inspired researchers Louis Goodman and Alfred Gilman to pioneer the use of mechlorethamine, a nitrogen-based analogue of sulfur mustard, as an effective treatment for lymphoma and leukemia.

Thus did a horrifying disaster help spur the development of chemotherapy, which has saved countless lives over the last sixty years.

Sébastien Roblin holds a master’s degree in conflict resolution from Georgetown University and served as a university instructor for the Peace Corps in China. He has also worked in education, editing and refugee resettlement in France and the United States. He currently writes on security and military history for War Is Boring.

This article first appeared in 2018 and is reprinted due to reader interest.

Image: Reuters

Explained: How Rogue "Killer" T-Cells Cause Intestinal Diseases

Tue, 25/08/2020 - 11:33

John Chang

Public Health, World

Although there are many treatments for IBD, for as many as 75% of individuals with IBD there are no effective long-term treatments.

Between 6 and 8 million people worldwide suffer from inflammatory bowel disease, a group of chronic intestinal disorders that can cause belly pain, urgent and frequent bowel movements, bloody stools and weight loss. New research suggests that a malfunctioning member of the patient’s own immune system called a killer T cell may be one of the culprits. This discovery may provide a new target for IBD medicines.

The two main types of IBD are ulcerative colitis, which mainly affects the colon, and Crohn’s disease, which can affect the entire digestive tract. Researchers currently believe that IBD is triggered when an overactive immune system attacks harmless bacteria in the intestines. Although there are many treatments for IBD, as many as 75% of patients find that none works over the long term. This leaves many patients without good options.

I am a physician-scientist conducting research in immunology and IBD. In a new study, my team and our colleagues specializing in immunology, gastroenterology and genomics examined immune cells from the blood and intestines of healthy individuals and compared them with those collected from patients with ulcerative colitis, to gain a better understanding of how the immune system malfunctions in IBD. There are many reasons why current treatments fail to provide lasting relief, but one is that scientists don’t fully understand how the immune system is involved in IBD. It is our hope that closing this knowledge gap will eventually lead to new, durable treatments for IBD that target the right immune cells.

Immunology 101

The immune system can be divided into innate and adaptive branches. The innate branch is our first line of defense and acts quickly – within minutes to hours. But this system senses changes caused by microbes generally. It does not mount a targeted response against a specific pathogen, which means that some invaders can be overlooked.

The adaptive branch is designed to detect specific threats, but is slower and takes a couple of days to get going. T cells are a part of the adaptive immune system and can be further subdivided into CD4⁺ and CD8⁺ T cells.

CD4⁺ T cells are helpers that aid other immune cells by releasing soluble molecules called cytokines that can induce inflammation.

CD8⁺ T cells can also release cytokines, but their main function is to kill cells infected by microbial invaders. This is why CD8⁺ T cells are often referred to as serial killers.

After the infection is cleared and the pathogen has been destroyed, cells called memory T cells remain. These memory T cells “remember” the pathogen they’ve just encountered and if they see it again, they mount a stronger and faster response than the first time. They and their descendants can also live for a long time, even decades in the case of certain infections like measles.

The goal of a vaccine is to provide a preview of the microbe so that the immune system can build an army of memory cells against an infectious agent, such as SARS-CoV-2, the virus that causes COVID-19. That way, if the virus attacks, the memory T cells will spring into action and activate an immune response including the production of antibodies from B cells.

Memory T Cells that Reside in Organs

Immunologists further subdivide memory T cells into different classes depending on if and where they travel in the body. Circulating memory T cells are scouts that look for signs of infection by patrolling the blood, lymph nodes and spleen.

Tissue-resident memory cells, abbreviated TRM, are sentries stationed at key ports of entry into the human body – including the skin, lungs, and intestines – and act rapidly to counter an infectious threat. Intestinal TRM also function as peacekeepers and do not tend to overreact against the many harmless microbes living in the intestines.

In the new study, our team analyzed blood and intestinal samples to discover that intestinal CD8⁺ TRM come in at least four different varieties, each with unique features and functions.

We noticed that individuals with ulcerative colitis had higher numbers and proportions of cells belonging to one of these four varieties. This particular variety, which we’ll call inflammatory TRM here, carried instructions to make very large amounts of cytokines and other protein factors that allow them to kill other cells. High levels of certain cytokines can cause inflammation and tissue damage in the body.

It seems that in individuals with ulcerative colitis, the balance of memory cells is shifted in favor of this rogue population of inflammatory TRM that may become part of the problem by causing persistent inflammation and tissue damage.

We also found evidence consistent with the possibility that these inflammatory TRM might be exiting the intestinal tissue and entering the blood. Other studies in mice and people have shown that TRM, despite being called “tissue-resident,” can leave tissues in certain circumstances.

By leaving the tissue and entering the blood, inflammatory TRM may be able to travel to other parts of the body and cause damage. This possibility may explain why autoimmune diseases that start in one organ, like IBD in the digestive tract or psoriasis in the skin, often affect other parts of the body.

IBD and Other Autoimmune Diseases as a Memory Problem

The very features that make memory T cells so desirable for vaccines – their capacity to live for such a long time and mount a stronger response when they encounter a microbial invader for the second time – may explain why autoimmune diseases are chronic and lifelong.

It is important to point out that none of the current drug treatments for IBD specifically target long-lived memory cells, which might be a reason why these therapies don’t work long-term in many individuals. One therapeutic approach might be to target inflammatory TRM for destruction, but this could result in side effects like suppression of the immune system and increased infections.

Our findings build on previous studies showing that different TRM varieties, like the CD4⁺ subtype, may also be involved in IBD, while other studies show that TRM play a role in autoimmune diseases affecting other organs like the skin and kidneys.

The possibility that T cell memory is co-opted in IBD is exciting, but there is much that we still don’t understand about TRM. Can we selectively target inflammatory TRM for destruction? Would this be an effective treatment for IBD? Can we do so without causing major side effects? Further research will be needed to answer these important questions and to strengthen the link between TRM and IBD.

John Chang, Professor of Medicine, University of California San Diego

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image: Reuters

The Army Rejected The M9A3 Beretta Pistol, But You Can Still Get One

Tue, 25/08/2020 - 11:00

Kyle Mizokami

Security, North America

Beretta officials protested that the new pistol solved many of the problems with the older handgun.

Here's What You Need To Remember: The U.S. Army may have declined to adopt the M9A3, but the pistol is still available on the civilian market. Shooters can take advantage of the research and development that went into further developing the classic Beretta 92 design. The -A3 is sold commercially with three seventeen-round magazines, giving the user access to a total of fifty-one rounds when carrying a loaded pistol and two loaded spare magazines.

In 1985, the U.S. Army made an abrupt shift, trading its aging fleet of World War II-era handguns for a foreign model. The Italian-made Beretta 92 was adopted as the M9 handgun and served with the U.S. Armed Forces for more than thirty years. The U.S. Army recently chose a new handgun, the M17 Modular Handgun System, and excluded the new, updated M9A3 from the competition. Although the proposed M9A3 failed to gain traction, it is still available on the civilian market, and its improvements, made by Beretta for the twenty-first-century battlefield, are worth examining.

In the early 1980s, the U.S. Army began searching for a replacement for the Colt M1911A1 handgun. Although the M1911A1 was introduced after World War I, the bulk of the Army’s guns were built during World War II. The Army’s huge wartime inventory, plus the pistols’ relatively infrequent use by officers, medics, and vehicle crews, allowed the M1911A1s to soldier on for nearly four decades. The U.S. Army held a competition and, to the surprise of nearly everyone, the Italian Beretta company won; the M9 handgun entered service with the Army, Air Force, Navy, Marine Corps, and Coast Guard. Over thirty years, the Pentagon purchased more than six hundred thousand Beretta pistols.

In the mid-2010s the Army started up a competition for a new handgun. Although Beretta had a newly redesigned M9A3 waiting in the wings the Army declined to evaluate it, citing a history of reliability and design problems with the original M9. Beretta officials protested that the new pistol solved many of the problems with the older handgun.

The M9A3 Beretta looks like a futuristic, high tech version of its Reagan-era ancestor—which of course it is. The A3 is finished in a three-tone black, coyote, and flat dark earth scheme, unlike the flat black of the M9, a bit of marketing that reflects the type of environment U.S. forces have been fighting in for the last seventeen years. It has harder lines than the original Beretta, with a flattened mainspring housing eliminating the bulge along the backstrap, creating a more angular grip reminiscent of the M1911A1.

The Beretta is largely unchanged under the hood. The A3 is still a hammer-fired single-action/double-action handgun, with the long initial double-action trigger pull for the first shot that the Army preferred for infrequent shooters. Subsequent shots are single action. One major change: the safety lever can function as a combined safety and decocker, or as a decocker only, depending on how the user configures it. This allows the shooter to lower the hammer without discharging the weapon.

The A3’s barrel is still partially visible along the top of the frame, and its length has been increased slightly, to five inches. The barrel tip is threaded for use with a suppressor, and in the absence of a suppressor a knurled knob can be screwed on to protect the threads. The A3 also features tritium sights for low-light shooting and a three-slot Picatinny rail underneath the barrel for mounting aiming lasers and flashlights.

The Beretta M9A3’s magazine has been enlarged to hold seventeen rounds of nine-millimeter ammunition instead of the original fifteen, placing the redesigned handgun alongside the Glock 17 in ammunition capacity. However, the wide double-stack magazine that makes a seventeen-round capacity possible puts the pistol at odds with the Modular Handgun System’s goal of a handgun that could fit different hand sizes and types. In order to reduce grip width and accommodate smaller hands, the Beretta comes stock with thin Vertec-style grips, with wraparound grips available as an alternative for larger hands. The result is a very reasonable 1.3-inch grip width.

The U.S. Army may have declined to adopt the M9A3, but the pistol is still available on the civilian market. Shooters can take advantage of the research and development that went into further developing the classic Beretta 92 design. The -A3 is sold commercially with three seventeen-round magazines, giving the user access to a total of fifty-one rounds when carrying a loaded pistol and two loaded spare magazines. While the M9A3 was a long time coming, if you’re a Beretta fan it was likely worth the wait.

Kyle Mizokami is a defense and national-security writer based in San Francisco who has appeared in the Diplomat, Foreign Policy, War is Boring and the Daily Beast. In 2009, he cofounded the defense and security blog Japan Security Watch. You can follow him on Twitter: @KyleMizokami.

Image: Beretta

How Syria Almost Sparked a Nuclear War Between America and the Soviet Union

Tue, 25/08/2020 - 10:45

Michael Peck

Security, Middle East

A terrifying history.

Key Point: The 1973 Yom Kippur War could have become a nuclear Armageddon.

On the night of October 24, 1973, came the dreaded words: Assume Defcon 3.

On bases and ships around the world, U.S. forces went to Defense Condition 3. As paratroopers prepared to deploy, B-52 nuclear bombers on Guam returned to bases in the United States in preparation for launch. On another October day eleven years before, the United States had gone to the next highest alert, Defcon 2, during the Cuban Missile Crisis.

This time the catalyst of potential Armageddon wasn't the Caribbean, but the Middle East.

In fact, the flashpoint was Syria. And as tensions rise today between America and Russia over the Syrian Civil War, and U.S. and Russian troops and aircraft operate in uncomfortable proximity in support of rival factions in the conflict, it is worth remembering what happened forty-five years ago.

One of the most remarkable aspects of the Cold War is what didn't happen: the United States and Soviet Union managed to avoid fighting each other directly, and instead waged their conflict through proxies.

But as usual, the Middle East upset the status quo. On October 6, 1973, on the Jewish holy day of Yom Kippur, Egypt and Syria launched a surprise attack on the Sinai and the Golan Heights. The stunned Israeli defenders held on desperately, even as their leaders and senior commanders feared this might be the end for their nation. Meanwhile, the Soviet Union, followed by the United States, airlifted in massive amounts of military equipment and supplies.

By October 11, Israel had halted the Syrian offensive: Israeli armor and infantry had crossed into Syria, and would eventually advance to within artillery range of Damascus. In the Sinai, an Israeli force, led by the flamboyant and aggressive Gen. Ariel Sharon, had stealthily crossed the Suez Canal on October 15 and seized a bridgehead on the Egyptian side of the waterway. This time it was the Egyptians who were surprised as their Third Army found itself trapped in its positions on the Israeli side of the canal, its supply lines cut.

With attempts at working out a ceasefire failing, and with their Arab clients facing military defeat, Soviet leader Leonid Brezhnev sent a message to Richard Nixon's White House: "I will say it straight that if you find it impossible to act jointly with us in this matter, we should be faced with the necessity urgently to consider taking appropriate steps unilaterally."

A crisis atmosphere gripped the White House as reports arrived that Soviet airborne divisions and amphibious troops had been placed on alert, while Moscow nearly doubled its Mediterranean fleet to a hundred ships. The Minister of Defense, Marshal Andrei Grechko, "recommended in particular that an order be given to recruit 50,000-70,000 men in the Ukraine and in the northern Caucasus," recalled Soviet Foreign Ministry official Victor Israelian. "His view was that, in order to save Syria, Soviet troops should occupy the Golan Heights."

After having just extricated itself from Vietnam, America was in no mood for another war. Yet, the White House felt it could not risk the loss of prestige and influence—especially in the oil-rich Middle East. "We were determined to resist by force if necessary the introduction of Soviet forces into the Middle East regardless of the pretext under which they arrived," Secretary of State Henry Kissinger recounted in his memoir Years of Upheaval.

It may or may not have been coincidental—and cynics wondered—that the U.S. alert came as Nixon's presidency was beginning to crumble under the Watergate scandal. Nonetheless, Moscow appeared ready to cross a red line that Washington could not allow.

In the confined waters of the Mediterranean, the tension was palpable. "Nerves in both fleets frayed," wrote Abraham Rabinovich, an historian of the 1973 October War. "The solitary Soviet destroyers that normally shadowed the carriers—'tattle tales' the Americans called them—were reinforced by heavier warships armed with missiles. Although ranking officers had never before been noted on the tattle tales, the Americans now became aware of two admirals on the ships following them. The Americans, in turn, kept planes over the Soviet fleet prepared to attack missile launchers being readied for firing. Both sides were aware that their major vessels were being tracked by submarines."

Soviet leaders were shocked by the American response. "Who could have imagined the Americans would be so easily frightened?” asked Nikolai Podgorny, chairman of the Presidium of the Supreme Soviet, according to Rabinovich in his book The Yom Kippur War. Soviet premier Alexei Kosygin said "it is not reasonable to become engaged in a war with the United States because of Egypt and Syria,” while KGB chief Yuri Andropov vowed "we shall not unleash the Third World War.”

Whatever the reason, the Soviets kept their forces on alert, but agreed not to dispatch troops to the Middle East. By the end of October, a tenuous ceasefire put an end to that chapter of the Arab-Israeli conflict.

In the forty-five years since that troubled autumn of 1973, the world has changed. The Soviet Union is no more, Egypt is a U.S. ally, and Syria...well, is not Syria anymore. But it is not hard to imagine a scenario where the superpowers—or rather one current and one former superpower—find themselves at odds again. For example, Israel may strike Syria in order to drive out Iranian and Hezbollah forces that are edging toward the Israeli-Syrian border. Russia could choose to intervene to save its Cold War client, perhaps by providing air or air defense cover, which leads to a real or threatened clash between Israeli and Russian forces.

As in 1973, it is hard to imagine that Washington would allow the Russians to get away with attacking its Israeli ally.

But what was really different about the 1973 crisis? Nixon and Brezhnev weren't firing belligerent Tweets at each other (heck, tweets back then were something that birds did outside your window). There is little reason to be nostalgic for the cynical realpolitik game of the Cold War. But at least the game had rules, and the players were conscious that a false move could end in mutual annihilation. Cooler heads prevailed, and the crisis was resolved.

Imagine such a crisis now, with Trump's prickly belligerence and Putin's macho nationalism. This time, the world might not be so lucky.

For further reading:

The Yom Kippur War: The Epic Encounter that Transformed the Middle East by Abraham Rabinovich

Years of Upheaval by Henry Kissinger

Michael Peck is a contributing writer for the National Interest. He can be found on Twitter and Facebook.

This article first appeared in 2018 and is reprinted here due to reader interest.

Image: Wikimedia Commons

What We Eat Is Destroying the Environment (And Making It Easier for Coronavirus to Spread)

Tue, 25/08/2020 - 10:30

Terry Sunderland

Environment, World

We must harness the interconnected nature of our forests and food systems more effectively if we are to avoid future crises.

As the global population has doubled to 7.8 billion in about 50 years, industrial agriculture has increased the output from fields and farms to feed humanity. One of the negative outcomes of this transformation has been the extreme simplification of ecological systems, with complex multi-functional landscapes converted to vast swaths of monocultures.

From cattle farming to oil palm plantations, industrial agriculture remains the greatest driver of deforestation, particularly in the tropics. And as agricultural activities expand and intensify, ecosystems lose plants, wildlife and other biodiversity.

The permanent transformation of forested landscapes for commodity crops currently drives more than a quarter of all global deforestation. This includes soy, palm oil, beef cattle, coffee, cocoa, sugar and other key ingredients of our increasingly simplified and highly processed diets.

The erosion of the forest frontier has also increased our exposure to infectious diseases, such as Ebola, malaria and other zoonotic diseases. Spillover incidents would be far less prevalent without human encroachment into the forest.

We need to examine our global food system: Is it doing its job, or is it contributing to forest destruction and biodiversity loss — and putting human life at risk?

What Are We Eating?

The foods most associated with biodiversity loss also tend to be connected to unhealthy diets across the globe. Fifty years after the Green Revolution — the transition to intensive, high-yielding food production reliant on a limited number of crop and livestock species — nearly 800 million people still go to bed hungry; one in three is malnourished; and up to two billion people suffer some sort of micronutrient deficiency and associated health impacts, such as stunting or wasting.

The environmental impacts of our agricultural systems are also severe. The agricultural sector is responsible for up to 30 per cent of greenhouse gas emissions, as well as for soil erosion, excessive water use, the loss of important pollinators and chemical pollution, among other impacts. It is pushing planetary boundaries even further.

In short, modern agriculture is failing to sustain the people and the ecological resources on which they rely. The incidence of infectious diseases correlates with the current loss of biodiversity.

Deforestation and Disease

Few viruses have generated more global response than the SARS-CoV-2 virus responsible for the current pandemic. Yet in the past 20 years, humanity has also faced SARS, MERS, H1N1, Chikungunya, Zika and numerous local outbreaks of Ebola. All of them are zoonotic diseases and at least one, Ebola, has been linked to deforestation.

Farming large numbers of genetically similar livestock along the forest frontier may provide a route for pathogens to mutate and become transmissible to humans. Forest loss and landscape change bring humans and wildlife into ever-increasing proximity, heightening the risk of an infectious disease spillover.

An estimated 70 per cent of the global forest estate is now within just one kilometre of a forest edge — a statistic that starkly illustrates the problem. We are destroying that critical buffer that forests provide.

Zoonoses may be more prevalent in simplified systems with lower levels of biodiversity. In contrast, more diverse communities lower the risk of spillover into human populations. This form of natural control is known as the “dilution effect” and illustrates why biodiversity is an important regulatory mechanism.

The pandemic is further heightening pressures on forests. Increased unemployment, poverty and food insecurity in urban areas is forcing internal migration, as people return to their rural homes, particularly in the tropics. This trend will no doubt increase demands on remaining forest resources for fuel wood, timber and further conversion for small-scale agriculture.

Wet Markets Under Scrutiny

The links between zoonoses and wildlife have led to many calls during the current pandemic to ban the harvest and sale of wild meat and other forms of animal-source foods. That might be too hasty a reaction: wild meat is an essential resource for millions of rural people, particularly in the absence of alternative animal food sources.

It is, however, not essential for urban dwellers, who have alternative sources of animal protein, to purchase wild meat as a “luxury” item. Urban markets selling wild meat could increase the risk of zoonotic spillover, but not all wet markets are the same. There are countless wet markets throughout the world that do not sell wildlife products, and such markets are fundamental to the food security, nutrition and livelihoods of hundreds of millions of people.

Even before the COVID-19 pandemic took hold, international agencies, including the Committee on World Food Security, were concerned about the long-term viability of our current food system: could it provide diverse and nutritious diets while maintaining environmental sustainability and landscape diversity? The current pandemic has highlighted major shortfalls in our environmental stewardship.

We must harness the interconnected nature of our forests and food systems more effectively if we are to avoid future crises. Better integration of forests and agroforests (the incorporation of trees into agricultural systems) at the broader landscape scale, by breaking down the institutional, economic, political and spatial separation of forestry and agriculture, can provide the key to a more sustainable, food-secure and healthier future.

Terry Sunderland, Professor in the Faculty of Forestry, University of British Columbia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image: Reuters

Why Great Britain Was Desperate To Sink Germany's Bismarck Battleship

Tue, 25/08/2020 - 10:00

Warfare History Network

Security

Of all the German surface warships, the British feared Bismarck the most.

Here's What You Need To Remember: Like a pack of lions, the chasing British battleships Rodney and King George V caught and engaged Bismarck at a range of 16,000 yards. The German gunners’ return fire was ineffective, and the helpless Bismarck was torn apart. At 10:40 am on May 27, 1941, the German battleship sank some 300 nautical miles west of Ushant, France.

The British Admiralty Board of Enquiry into the loss of the battlecruiser HMS Hood, presided over by Vice Admiral Sir Geoffrey Blake, concluded, “The sinking of Hood was due to a hit from Bismarck’s 15-inch shell in or adjacent to Hood’s 4-inch or 15-inch magazines, causing them to explode and wreck the after part of the ship.”

Director of Naval Construction Sir Stanley Goodall, however, found this conclusion unsatisfactory and pointed out in his report that the explosion was observed near the mainmast, some 65 feet forward of the aft magazines. A second board of enquiry was convened under Rear Admiral H.T.C. Walker. Even given eyewitness accounts that described fires on deck, that board still found a hit by Bismarck to be the likely cause, although it concluded, “The probability is that the 4-inch magazines exploded first.”

Taking on the Feared Bismarck

In May 1941, Admiral Sir John C. Tovey, commander of the British Home Fleet at Scapa Flow in Scotland’s Orkney Islands, was ordered to attack the German battleship Bismarck and heavy cruiser Prinz Eugen that had just been spotted in the Denmark Strait. Tovey’s fleet consisted of two new battleships, King George V and Prince of Wales, the battlecruisers Hood and Repulse, and the aircraft carrier Victorious, plus many additional cruisers and destroyers. Also hurrying north to join him was the older battleship Rodney, mounting nine 16-inch guns, the largest caliber in the fleet.

Of all the German surface warships, the British feared Bismarck the most. Her size, speed, and firepower made her a definite threat to Allied shipping in the Atlantic, and it was imperative that she be neutralized.

On May 21, 1941, Hood and Prince of Wales left Scapa Flow with six destroyers under the command of Admiral Lancelot Holland flying his flag in Hood, their mission to provide heavy support to the cruisers Suffolk and Norfolk covering the Denmark Strait between Greenland and Iceland—one of the likely routes the German naval squadron would take to reach the North Atlantic. The rest of the fleet was gathering to cover the area between Iceland and the Orkney Islands.

Early on the evening of May 23, Suffolk made contact with the enemy ships, quickly turning away toward the coast of Iceland and into a fog bank. Suffolk immediately transmitted a sighting report to the Admiralty and then came around astern of the German ships to shadow them on radar.

Norfolk came up as well, a little too boldly, for Bismarck opened fire on her; like Suffolk, she raced for the fog bank. The blast from Bismarck’s 15-inch guns disabled her own forward radar, and overall German commander Admiral Gunther Lütjens ordered Prinz Eugen to take the lead.

The Germans had picked up the sighting report from Suffolk and advised their own high command. Lütjens was shocked their presence had been discovered so easily and had little intelligence on what his two warships might face.

The Dwindling British Advantage

As the two forces moved toward each other, Holland had a marked two-to-one superiority in firepower. However, this was offset by the age of the Hood (commissioned in 1920) and the newness (commissioned in January 1941) and lack of combat readiness of Prince of Wales, which was still having trouble with her main armament.

Holland soon realized he was in a favorable position to bring the enemy to action that evening, sailing northwesterly toward the Denmark Strait with the enemy on a southwesterly course. He hoped to catch the Germans just before sunset at around 2 am at 65 degrees north latitude. He also hoped to cross the German squadron’s “T,” which would give him a great advantage. “Crossing the T” is a tactic in naval warfare in which a line of warships crosses in front of a line of enemy ships, allowing them to bring all their guns to bear while receiving fire from only the forward guns of the enemy.

During the evening of May 23, the forces converged. Suffolk continued to shadow and update the Admiralty, Holland on Hood, and Tovey on King George V.

Around midnight, Suffolk lost contact because her radar was blinded by a snowstorm the German ships had entered. Holland waited an hour but, hearing no news, turned more northerly in case the enemy turned south. He could not afford a German breakout into the North Atlantic. At 2 am, still with no news, he turned southwesterly hoping to cut off the enemy before total darkness.

About an hour later, Suffolk regained radar contact and discovered the German ships were still on their original course. Holland must have cursed his luck, for his maneuvering had lost time and space, and the opportunity to cross the T was gone; this would prove critical in the coming battle.

Failing to Concentrate Fire on the Bismarck

Not wanting a night engagement, Holland brought his ships onto a course to intercept the German squadron at first light, keeping up a good speed even though, in the heavy seas, this dropped the escorting destroyers astern. By dawn, the destroyers were an hour behind.

Lookouts scanned the horizon for a glimpse of their quarry. At 5:37 am the two ships were spotted to the northwest, 30,000 yards (17 miles) away. The heavy guns could fire that far but the chance of a hit was remote; they needed to reduce the range to 25,000 yards or less––and quickly.

Prinz Eugen had already picked up the sound of ships with her underwater detection gear at some 20 miles to the southeast. At about the same time as the British lookouts spotted them, the Germans spotted smoke on the horizon. Lütjens believed that these were likely more cruisers, and he was under orders to avoid contact with British warships. He turned to starboard and headed almost due west, confident that he could outrun them.

Holland was soon aware the enemy had turned away, but he had to maintain his intercept course. Turning toward them would merely put his ships behind the Germans and make it a chase.

By 5:50 am, the range was down to 26,000 yards, and Holland would soon give the order to open fire. He was fully aware of Hood’s vulnerability to plunging fire at long range and wanted to pass through the critical zone as fast as possible. Therefore, he compromised by turning 20 degrees to starboard on a new course of 300 degrees toward the enemy. This would close with the enemy faster but make it impossible for the rear turrets of the British ships to bear on the Germans.

At 5:52 am, Holland designated the lead ship as the target and gave the order to open fire. This caused Captain John Leach of Prince of Wales a few anxious moments, for he was convinced that the rear ship was Bismarck, posing the greater threat. He ignored the orders from Holland and concentrated on Bismarck.

In seconds, huge columns of water erupted around Prinz Eugen, followed seconds later around Bismarck. Lütjens now had no doubt about what he faced. However, the British angle of approach still made identification difficult.

Marking the fall of shot, the British ships fired another salvo, still aiming at different targets. Leach had not informed Holland of his opinion, nor had Holland been informed that Prince of Wales was firing at the second ship.

The Hood is Struck

The range was down to 24,000 yards when Lütjens ordered his ships to turn 65 degrees to port toward the British on a new course of 200 degrees and directed his ships to open fire as soon as they had turned. Lütjens was now on course to cross the British T and would be able to employ all his ships’ heavy guns.

Prinz Eugen opened fire first at 5:53 am, concentrating on the lead British ship, Hood, with her fast-firing 8-inch guns at four salvos per minute; she was firing high-explosive, not armor-piercing shells. After a few ranging salvos, Prinz Eugen hit Hood, her shells starting a large fire amidships among the ammunition lockers of the 4-inch antiaircraft guns, as well as ammunition for the unrotated projectile launchers used for defense against aircraft. Attempts to put out the fire were frustrated by the exploding ammunition.

Both British ships were still firing but at different targets. As yet, Bismarck had not opened fire. By now the range was down to 22,000 yards. After turning, Bismarck opened fire on Hood at 5:55 am with all eight 15-inch guns. Her first salvo fell close to Hood. At last Hood’s gunners realized they had been firing at the wrong ship. About this time, Holland ordered another 20-degree turn to port. This turn still would not allow the British ships to use their rear turrets.

At 5:59 am, Holland ordered another 20-degree turn to port, which would finally allow his ships to bring their full armament to bear. The range was now down to 18,000 yards. Bismarck fired three salvos in rapid succession about 30 seconds apart. The first, the fourth in total, again straddled Hood, but the fifth hit with devastating effect at about 6 am.

For the sailors aboard Hood, their worst nightmares were about to come true.

Explosion on the Hood

Ted Briggs had joined Hood as a signal boy on March 7, 1938, at just 15 years of age. Three years later he was an ordinary signalman on Hood’s compass platform, manning the voice pipe to the flag deck.

During the battle, Hood’s X Turret fired for the first time, but Y Turret was silent. Seconds later, Briggs saw a blinding flash sweep around the outside of the compass platform. However, he said there “was not a terrific explosion at all regards noise.” He felt the ship “jar” and begin listing to port. The “jar” was the ship breaking in two. The list got worse, and the men began leaving his area. By the time Briggs climbed down the ladder to the admiral’s bridge, the icy sea was already around his legs.

Eighteen-year-old Midshipman William Dundas had the duty of watching Prince of Wales to make sure she was keeping station; he was not far from Ted Briggs on the compass platform. He remembered bodies falling past his position from the higher spotting positions––the result, he felt, of Bismarck’s shells hitting without exploding. He recalled a mass of brown smoke just before the list to port began. Dundas escaped by kicking out a window on the starboard side of the compass platform. Even so, he was dragged under the water by the sinking ship but miraculously regained the surface.

Twenty-year-old Able Seaman Robert Tilburn was stationed at Hood’s aft-port 4-inch antiaircraft gun and witnessed the fire started by Prinz Eugen’s shells. The heat of the blaze made fire fighting impossible as the flames were being fanned by Hood’s 28-knot speed. Then he said, “The ship shook like mad” and began listing to port. Tilburn got onto the forecastle but was washed over the side by a great wave.

At the second board of enquiry, Tilburn told the admirals, “The Bismarck hit us. There was no doubt about that. She hit us at least three times before the final blow.”

Briggs, Dundas, and Tilburn were the only survivors from Hood; her 1,415 other crewmen were lost. But there were other witnesses, such as Lieutenant Esmond Knight. Aboard Prince of Wales observing Hood, he remembered thinking, “It would be a most tremendous explosion, but I don’t remember hearing an explosion at all.” Chief Petty Officer French, also on Prince of Wales, said that the middle of the Hood’s boat deck appeared to rise before the mainmast.

Leading Sick-Berth Attendant Sam Wood, also on Prince of Wales, observed, “I was watching the orange flashes coming from Bismarck, so naturally I was on the starboard side. The leading seaman who was with me said, ‘Christ, look how close the firing is getting to Hood.’ As I looked out, suddenly Hood exploded. She was one pall of black smoke. Then she disappeared into a big orange flash and a huge pall of smoke which blacked us out…. The bows pointed out of the smoke, just the bows, tilted up, and then this whole apparition slid out of sight, all in slow motion, just slid away.” Within three minutes Hood was gone.

What Destroyed the Hood?

So what did happen to Hood? Were the boards of enquiry right that a 15-inch shell from Bismarck had hit close to her 4-inch and/or 15-inch magazines, causing an explosion that wrecked the after part of the ship?

What evidence we have would seem to shed some doubt on this. First, Hood was about 17,000 yards from Bismarck by 6 am. By that time, the heavy shells from both sides were travelling on a fairly low trajectory. As the range decreased, the guns would have been progressively depressed. Therefore, any hit would have been more likely to strike the belt. Hood’s belt armor was 12 inches thick, superior to that of any ship in the fleet; it was also inclined at 12 degrees.

It is still possible a shell could have hit the deck with its thin armor of three inches, but not with the plunging effect Holland had feared at long range. The shell likely fell at a rather oblique angle, which would make penetration of four decks to the main magazine under X Turret unlikely.

Also, witnesses aboard Hood and Prince of Wales reported that many of Bismarck’s 15-inch shells appeared to be defective and failed to explode.

Could there have been some sort of cordite flash explosion similar to those that destroyed three British battlecruisers during the Battle of Jutland in May 1916?

This again seems highly unlikely, as Hood’s shell-handling rooms were situated well below the X and Y Turrets’ magazines and the engine room, thanks to lessons learned from that tragic Jutland episode. Also, at Jutland all three battlecruisers were destroyed by massive explosions, and no such explosion was heard aboard Hood. One question about the magazine theory is why Y Turret did not fire as X had. Was something already happening there?

Then there is the fire started by Prinz Eugen’s 8-inch shells. Captain Leach of Prince of Wales described the fire as “a vast blowlamp.” The fire consumed much combustible material on the deck and upper superstructure, but the two- and three-inch deck and forecastle armor prevented it from penetrating below. The ventilation systems were fitted with gas-tight flaps and, at action stations, all should have been closed. Thus, it is fairly certain that the deck fires could not have caused Hood to break in two, or even have contributed significantly to her loss.

The second board of enquiry did look at the possibility of Hood’s own above-deck torpedoes causing her to sink. Sir Stanley Goodall, who had supervised Hood’s design, believed an enemy shell could have detonated the torpedo warheads in their tubes.

Four 21-inch Mk IV torpedoes were kept in tubes, two on either side of the mainmast, and four reloads were nearby in a three-inch armored box. These torpedoes were certainly capable of breaking Hood’s back and could have been set off either by a direct hit from an enemy shell or by an intense fire. The TNT in the warheads would ignite at around 250 degrees Fahrenheit and explode at around 280 degrees. Again, however, there was no explosion. It is worth noting that similar torpedo tubes on the battlecruisers Repulse and Renown were later removed.

Was there some sort of underwater penetration? This seems even more unlikely. Hood was outside torpedo range of the German ships. One of Bismarck’s 15-inch shells could have penetrated the side and exploded in or near Hood’s shell-handling rooms––again unlikely without evidence of a massive explosion.

A Lucky Shot

The final theory or possibility is that the shells from Prinz Eugen’s 8-inch guns, firing at over half their maximum range, would have been falling on the target at a much steeper trajectory than Bismarck’s 15-inch shells, and that one of her high-explosive 8-inch rounds might have gone down Hood’s after funnel. If this did happen, it would have been just before Lütjens ordered Prinz Eugen to shift her fire to Prince of Wales, about the time Hood was engulfed.

The wire cage that covered the top of the funnel would not stop a shell and would be unlikely to explode it. The next obstacle on a shell’s journey would have been a steel grating positioned in vents at the level of the lower deck to protect the boiler room. If an 8-inch shell exploded here, it would have detonated in the boiler room. A high-explosive shell bursting in one of the boiler rooms or nearby might have resulted in an enormous buildup of pressure, resulting in an explosion inside the ship. The line of least resistance to this would have been up through Hood’s thin decks, not through the heavily armored sides or bottom.

Was this the result: a muffled explosion within the ship, heard only below decks, with the flash seen above decks near the mainmast, while the still-turning propellers drove the stern into the severely weakened midsection and broke Hood in two? Or could it have been a fatal combination of two of these theories?

In July 2001, the wreck of the Hood was found 9,334 feet below the surface of the Denmark Strait. She lies in three sections, with the bow on its side, the midsection upside down, and the stern speared into the seabed. In 2013, the wreck was more fully explored with a remote-control vehicle. The exploration appears to confirm that a massive explosion took place in the magazine feeding Y Turret, breaking the back of the ship. However, it remains a mystery, given the low trajectory of any shell, how one could have passed through four decks and the magazine armor. It must have been a lucky shot, indeed.

Prince of Wales Escapes

After the loss of the Hood, the battle continued. Prince of Wales was about 1,000 yards astern of Hood. Seeing the flagship explode, Captain Leach ordered a hard turn to starboard to avoid the wreckage. Hood was engulfed in smoke, but the stern was still above the water. The forward section still had some momentum but was listing to port and sinking rapidly. After clearing the wreckage, Leach swung Prince of Wales back onto 260 degrees, bringing his full broadside to bear.

The turn of Prince of Wales disrupted the gunners on Bismarck and Prinz Eugen, but they soon found the range again. With the range down to 15,000 yards, the fire from both sides was finding its mark.

A 15-inch shell from Bismarck hit the bridge of Prince of Wales. Although it did not explode, it killed several key personnel, and for a short period disrupted command of the ship. Direction was transferred to the aft position. She was also hit by an 8-inch shell from Prinz Eugen, knocking out fire control of several 5.25-inch guns, and two more hits caused minor flooding. At 6:03 am, Bismarck passed the port beam of Prinz Eugen, causing that ship to temporarily cease fire. The heavy cruiser had turned away because of suspected torpedoes.

Leach did not close the range. Prince of Wales had managed to hit Bismarck three times, although no explosions had been observed. Bismarck struck back, hitting the starboard crane of Prince of Wales, causing much splinter damage. Another shell hit amidships below the rear funnel, under the waterline, but failed to explode. It did cause some flooding and required the ship to counter-flood to maintain trim.

Leach felt his ship was taking heavy damage––it had been hit four times by Bismarck and three times by Prinz Eugen. His own ship’s main armament was still not working properly, and his crew lacked the experience to adjust for this. Believing his ship might suffer further serious damage, Leach ordered Prince of Wales to withdraw behind a smoke screen at 6:05 am. Also, Bismarck had completed passing Prinz Eugen, so that ship’s guns would soon be back in action. Whether this influenced Leach is unknown.

Admiral Lütjens was surprised to see Prince of Wales turn away, but he dismissed calls from some of his men to pursue the British ship. It was doubtful they would be able to catch her. Also, Bismarck herself had been hit. Two shells had caused minor damage, but one 14-inch shell had struck below the waterline, causing some flooding and a reduction in speed. Worse, some fuel tanks had been ruptured, causing the loss of several hundred tons of precious fuel oil.

Lütjens soon realized that the loss of fuel meant he could not continue with the mission to attack British convoys. Prinz Eugen was therefore detached to proceed with raiding while Bismarck turned back, heading for St. Nazaire, France, the nearest port with a drydock big enough to take her.

Sinking the Bismarck

On May 26, a British aircraft spotted the battleship and radioed her position to other warships in the area. A force of 15 Fairey Swordfish torpedo planes from the carrier Ark Royal converged on Lütjens’ ship, and one torpedo damaged Bismarck’s rudder so badly that all the giant ship could do was sail helplessly in a circle.

Like a pack of lions, the chasing British battleships Rodney and King George V caught and engaged Bismarck at a range of 16,000 yards. The German gunners’ return fire was ineffective, and the helpless Bismarck was torn apart. At 10:40 am on May 27, 1941, the German battleship sank some 300 nautical miles west of Ushant, France. Only 110 of her crew of 2,222 survived the sinking. Admiral Lütjens went down with the ship.

This article by Mark Simmons originally appeared on Warfare History Network. This first appeared in 2018.

Image: Wikipedia.

Gas Flares Cause Preterm Births, Study Shows

Tue, 25/08/2020 - 09:45

Jill Johnston, Lara Cushing

Economy, Americas

Low-income communities and communities of color here bear the brunt of the energy industry’s pollution.

Through the southern reaches of Texas, communities are scattered across a flat landscape of dry brush lands, ranches and agricultural fields. This large rural region near the U.S.-Mexico border is known for its persistent poverty. Over 25% of the families here live in poverty, and many lack access to basic services like water, sewer and primary health care.

This is also home to the Eagle Ford shale, where domestic oil and gas production has boomed. The Eagle Ford is widely considered the most profitable U.S. shale play, producing more than 1.2 million barrels of oil daily in 2019, up from fewer than 350,000 barrels per day just a decade earlier.

The rapid production growth here has not led to substantial shared economic benefits at the local level, however.

Low-income communities and communities of color here bear the brunt of the energy industry’s pollution, our research shows. And we now know those risks also extend to the unborn. Our latest study documents how women living near gas flaring sites have significantly higher risks of giving birth prematurely than others, and that this risk falls mainly on Latina women.

Gas Flaring and Health Risks

Many low-income residents and seniors living in the Eagle Ford shale believe the wastes from energy production – including disposal wells for oil production wastewater and gas flaring – are harming their communities.

In our research in the region as professors of environmental health and preventive medicine, we have shown how poor communities and communities of color bear more of the burden of these wastes.

It happens with fracking wastewater disposal wells, where “flowback” water from fracked wells containing toxic chemicals is injected back into the ground. Disposal wells bring new truck traffic to neighborhoods and may contaminate groundwater. In a study in 2016, we found these disposal wells were disproportionately in high-poverty areas in the region. They were also more than twice as common in areas where the population was more than 80% people of color than in majority-white areas.

These communities also bear more of the burden of gas flaring, the highly visible practice of burning off waste gas during oil production. Flaring releases greenhouse gases and hazardous air pollutants, including particulate matter, black carbon, benzene and hydrogen sulfide, pollutants that have been linked to respiratory and cardiovascular problems. We found areas with majority Hispanic populations were exposed to twice as many nightly flares as those with few Hispanics.

Flares are so common in the Eagle Ford shale that they are visible from space.

In our latest study, we used satellite observations and Texas birth records for more than 23,000 births in the region to study connections between flaring and health in pregnant women. We found that women who lived in areas where flaring is common had 50% higher odds of giving birth prematurely than those with no flaring within three miles of their homes.

Preterm birth can be life-threatening, especially for babies born very early, who typically have difficulty feeding and breathing and require special medical care. Being born prematurely can also cause long-term health problems, including hearing loss, neurological disorders and asthma.

The increased risk we found associated with flaring was similar to the increased risk others have seen for women who smoke during pregnancy. This risk fell almost entirely on Latina women, who were exposed to more flaring than white women. In all, about 14% of babies whose mothers lived within three miles of flaring and were exposed to at least 10 flares were born prematurely.

While women in the region also face other stressors related to poverty, health and racism, we think flaring may impact preterm birth for those living closest by exposing them to air pollutants, which research has shown are associated with preterm births.

Together, our work points to longstanding issues of environmental racism in rural energy extraction communities in the U.S.

Environmental Justice and the Urban-Rural Divide

Rural America is often singled out by locally unwanted industries. The rural policy scholar Celia Carroll Jones put it this way: “For the majority of Americans who live in metropolitan areas, rural dumping becomes a logical choice: Undeveloped land is inexpensive and available, fewer residents will be harmed should containment measures fail, and, most importantly, nuisances and dangers are removed from their own neighborhoods.”

It isn’t just the energy industry. Urban human and industrial solid waste, a byproduct of wastewater treatment plants, is frequently disposed of on rural land. Touted as fertilizer, this sewage sludge contains mixtures of chemical and biological contaminants. Residents complain of symptoms like mucous membrane irritation, respiratory distress, headaches and skin rashes when sewage sludge is being applied to land.

In Decatur, Alabama, where about 20% of the population lives in poverty, contaminated sludge was applied to land used for cattle grazing and crops. This resulted in detectable levels of toxic perfluorinated compounds in soil, grass, beef and groundwater in the area.

This pattern extends to our food production systems. For example, the industrialization of hog production has led to the concentration of numerous biological and chemical pollutants that threaten environmental quality. The health impacts are concentrated disproportionately in Black communities in rural eastern North Carolina. Industrial hog operations have been linked to asthma, higher blood pressure and greater risk of premature death.

These examples illustrate a larger pattern of environmental injustice that characterizes relationships between urban areas that create waste and rural areas that receive that waste.

This undermines health in communities that are already at higher risk, as our research has shown. Ultimately, it also undermines progress toward more sustainable energy and food supplies, because the people who use the most energy and agriculture products don’t experience the health impacts of their production and waste.

Jill Johnston, Assistant Professor of Preventive Medicine, University of Southern California and Lara Cushing, Assistant Professor of Environmental Health Sciences, Fielding School of Public Health, University of California, Los Angeles

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image: Reuters

Study: Shoulder-Fired Weapons Can Cause Brain Injury

Tue, 25/08/2020 - 09:30

Adam Linehan

Security, Americas

Is there a possible solution?

Key Point: New helmet designs could improve protection for soldiers.

Service members risk brain damage when operating shoulder-fired heavy weapons like the AT4, LAW, and Carl Gustaf recoilless rifle, according to a report by the Center for a New American Security.

- “[Department of Defense] studies have demonstrated that some service members experience cognitive deficits in delayed verbal memory, visual-spatial memory, and executive function after firing heavy weapons,” CNAS reports.

- Whether those symptoms can become permanent is unclear. However, DoD studies have also “found higher rates of confusion and post-concussion associated symptoms among individuals with a history of prolonged exposure to low-level blasts,” according to CNAS.

- “When you fire [heavy weapons], the pressure wave feels like getting hit in the face,” Paul Scharre, a co-author of the report, told National Public Radio on Monday.

- CNAS found that helmets currently worn by U.S. Army soldiers provide “modest protection” against blast-waves produced by shoulder-fired weapons. The report advises the Army to begin researching “new helmet materials, shapes, and designs that might dramatically improve protection from primary blast injury.”

- Additionally, the report recommends that the Army better “protect soldiers against blast-induced brain injury” by increasing the use of sub-caliber rounds in training and enforcing limits on the firing of heavy weapons.

- To better assess how “blast pressure exposure” impacts the brain, the report urges the Army to expand the use of small devices that measure the intensity of blast waves. So-called “blast gauges” can be attached to the helmet or shoulder.

- “Every service member who is in a position where he or she might be exposed to blast waves should be wearing these devices,” Scharre told NPR. “And we need to be recording that data, putting it in their record and then putting it in a database for medical studies.”

- CNAS also recommends that “blast exposure history” be included in soldiers’ service records so medical issues that later arise as a result of operating heavy weapons can be treated as service-connected injuries.

Adam Linehan is a senior staff writer for Task & Purpose. Between 2006 and 2012, he served as a combat medic in the U.S. Army, and is a veteran of Iraq and Afghanistan. Follow Adam Linehan on Twitter @adam_linehan.

This article originally appeared at Task & Purpose. Follow Task & Purpose on Twitter.

Image: U.S. Army / Flickr

This Cheap Submarine From Sweden 'Sank' a U.S. Aircraft Carrier

Tue, 25/08/2020 - 09:00

Sebastien Roblin

Security, Europe

How was the Gotland able to evade the U.S. Navy's elaborate antisubmarine defense?

Key Point: The advent of cheap, stealthy and long-enduring diesel submarines is yet another factor placing carriers at greater risk when operating close to defended coastlines.

In 2005, USS Ronald Reagan, a newly constructed $6.2 billion aircraft carrier, sank after being hit by multiple torpedoes.

Fortunately, this did not occur in actual combat, but was simulated as part of a war game pitting a carrier task force including numerous antisubmarine escorts against HSwMS Gotland, a small Swedish diesel-powered submarine displacing 1,600 tons. Yet despite making multiple attack runs on the Reagan, the Gotland was never detected.

This outcome was replicated time and time again over two years of war games, with opposing destroyers and nuclear attack submarines succumbing to the stealthy Swedish sub. Naval analyst Norman Polmar said the Gotland “ran rings” around the American carrier task force. Another source claimed U.S. antisubmarine specialists were “demoralized” by the experience.

How was the Gotland able to evade the Reagan’s elaborate antisubmarine defenses involving multiple ships and aircraft employing a multitude of sensors? And even more importantly, how was a relatively cheap submarine costing around $100 million—roughly the cost of a single F-35 stealth fighter today—able to accomplish that? After all, the U.S. Navy decommissioned its last diesel submarine in 1990.

Diesel submarines in the past were limited by the need to operate noisy, air-consuming engines that meant they could remain underwater for only a few days before needing to surface. Naturally, a submarine is most vulnerable, and can be most easily tracked, when surfaced, even when using a snorkel. Submarines powered by nuclear reactors, on the other hand, do not require large air supplies to operate, and can run much more quietly for months at a time underwater—and they can swim faster while at it.

However, the two-hundred-foot-long Swedish Gotland-class submarines, introduced in 1996, were the first to employ an Air Independent Propulsion (AIP) system—in this case, the Stirling engine. The Stirling engine burns diesel fuel with stored liquid oxygen to charge the submarine’s seventy-five-kilowatt battery.

With the Stirling, a Gotland-class submarine can remain undersea for up to two weeks sustaining an average speed of six miles per hour—or it can expend its battery power to surge up to twenty-three miles per hour. A conventional diesel engine is used for operation on the surface or while employing the snorkel. The Stirling-powered Gotland runs more quietly than even a nuclear-powered sub, which must employ noise-producing coolant pumps in its reactor.

The Gotland class does possess many other features that make it adept at evading detection. It mounts twenty-seven electromagnets designed to counteract its magnetic signature to Magnetic Anomaly Detectors. Its hull benefits from sonar-resistant coatings, while the tower is made of radar-absorbent materials. Machinery on the interior is coated with rubber acoustic-deadening buffers to minimize detectability by sonar. The Gotland is also exceedingly maneuverable thanks to the combined six maneuvering surfaces on its X-shaped rudder and sail, allowing it to operate close to the sea floor and pull off tight turns.

Because the stealthy boat proved the ultimate challenge to U.S. antisubmarine ships in international exercises, the U.S. Navy leased the Gotland and its crew for two entire years to conduct antisubmarine exercises. The results convinced the U.S. Navy its undersea sensors simply were not up to dealing with the stealthy AIP boats.

However, the Gotland was merely the first of many AIP-powered submarine designs—some with twice the underwater endurance. And Sweden is by no means the only country to be fielding them.

China has two diesel submarine types using Stirling engines. Fifteen of the earlier Type 039A Yuan class have been built in four different variants, with more than twenty more planned or already under construction. Beijing also has a single Type 032 Qing-class vessel that can remain underwater for thirty days. It is believed to be the largest operational diesel submarine in the world, and boasts seven Vertical Launch System cells capable of firing off cruise missiles and ballistic missiles.

Russia debuted the experimental Lada-class Sankt Peterburg, which uses hydrogen fuel cells for power. It is an evolution of the widely produced Kilo-class submarine. However, sea trials found that the cells provided only half of the expected output, and the type was not approved for production. Nevertheless, in 2013 the Russian Navy announced it would produce two heavily redesigned Ladas, the Kronstadt and Velikiye Luki, expected by the end of the decade.

Other producers of AIP diesel submarines include Spain, France, Japan and Germany. These countries have in turn sold them to navies across the world, including to India, Israel, Pakistan and South Korea. Submarines using AIP systems have evolved into larger, more heavily armed and more expensive types, including the German Dolphin-class and the French Scorpene-class submarines.

The U.S. Navy has no intention to field diesel submarines again, however, preferring to stick to nuclear submarines that cost multiple billions of dollars. It’s tempting to see that as the Pentagon choosing once again a more expensive weapon system over a vastly more cost-efficient alternative. It’s not quite that simple, however.

Diesel submarines are ideal for patrolling close to friendly shores. But U.S. subs off Asia and Europe need to travel thousands of miles just to get there, and then remain deployed for months at a time. A diesel submarine may be able to traverse that distance—but it would then require frequent refueling at sea to complete a long deployment.

Remember the Gotland? It was shipped back to Sweden on a mobile dry dock rather than making the journey on its own power.

Though the new AIP-equipped diesel subs may be able to go weeks without surfacing, that’s still not as good as going months without having to do so. And furthermore, a diesel submarine—with or without AIP—can’t sustain high underwater speeds for very long, unlike a nuclear submarine. A diesel sub will be most effective when ambushing a hostile fleet whose position has already been “cued” by friendly intelligence assets. However, the slow, sustainable underwater speed of AIP-powered diesel submarines makes them less than ideal for stalking prey over vast expanses of water.

These limitations don’t pose a problem to diesel subs operating relatively close to friendly bases, defending littoral waters. But while diesel submarines may be great while operating close to home—the U.S. Navy usually doesn’t.

Still, the fact that one could build or acquire three or four diesel submarines costing $500 to $800 million each for the price of a single nuclear submarine gives them undeniable appeal. Proponents argue that the United States could forward deploy diesel subs to bases in allied nations, without facing the political constraints posed by nuclear submarines. Furthermore, advanced diesel submarines might serve as a good counter to an adversary’s stealthy sub fleet.

However, the U.S. Navy is more interested in pursuing the development of unmanned drone submarines. Meanwhile, China is working on long-enduring AIP systems using lithium-ion batteries, and France is developing a new large AIP-equipped diesel submarine version of its Barracuda-class nuclear attack submarine.

The advent of cheap, stealthy and long-enduring diesel submarines is yet another factor placing carriers and other expensive surface warships at greater risk when operating close to defended coastlines. Diesel submarines benefitting from AIP will serve as a deadly and cost-effective means of defending littoral waters, though whether they can carve out a role for themselves in blue-water naval forces operating far from home is less clear.

Sébastien Roblin holds a Master’s Degree in Conflict Resolution from Georgetown University and served as a university instructor for the Peace Corps in China. He has also worked in education, editing, and refugee resettlement in France and the United States. He currently writes on security and military history for War Is Boring.

This article originally ran in November of 2016.

Image: Wikimedia Commons

India is a Powerhouse in Vaccine Manufacturing

Tue, 25/08/2020 - 08:45

Rory Horner

Public Health, Asia

Thanks to its vast manufacturing capacity, India will undoubtedly export vaccines, continuing its role as the “pharmacy of the developing world”.

The great COVID-19 vaccine race is on. Pharmaceutical companies around the world are going head to head, while governments scramble to get priority access to the most promising candidates.

But a richest-takes-all approach in the fight against the deadliest pandemic in living memory is bound to be counterproductive, especially for the recovery of low- and middle-income countries. If governments cannot come together to agree a global strategy, then the global south may need to pin its hopes on the manufacturing might of India.

Tedros Adhanom Ghebreyesus, the director general of the World Health Organization, has warned that a nationalist approach “will not help” and will slow down the world’s recovery. Yet vaccine nationalism looms large over the search for vaccines, with the US, the UK and the European Commission all signing various advance purchase agreements with manufacturers to secure privileged access to doses of the most promising candidates. The US alone has paid over US$10 billion (£7.6 billion) for such access.

The ideal global distribution of a successful COVID-19 vaccine would look beyond which countries have the deepest pockets and instead prioritise health workers, followed by countries with major outbreaks and then those people who are particularly at risk.

India has the potential to play a key role in overcoming vaccine nationalism because it is the major supplier of medicines to the global south. Médecins Sans Frontières once dubbed the country the “pharmacy of the world”. India also has, by far, the largest capacity to produce COVID-19 vaccines. Its role in manufacturing a vaccine could come in two different ways – mass-producing one developed elsewhere (likely) or developing a new vaccine as well as manufacturing it (less likely, though not impossible).

Scaling Up Existing Vaccines

India’s Serum Institute has already started manufacturing the University of Oxford/AstraZeneca vaccine candidate before clinical trials have even been completed. This is to avoid any subsequent delay if the vaccine is approved. It is seen by many, including the WHO’s chief scientist, as the world’s leading prospect.

Serum Institute, based in the western city of Pune, is the largest vaccine manufacturer in the world and has a deal to supply 400 million doses by the end of 2020 (1 billion in total). It has also inked a deal for the manufacturing and commercialisation of American firm Novavax’s COVID-19 candidate.

Another Indian pharma company, Biological E (BE), has agreed to manufacture the vaccine candidate of Johnson & Johnson’s subsidiary, Janssen Pharmaceutica NV. The Hyderabad-based firm has since announced its acquisition of Akorn India in order to boost its manufacturing capacity.

Despite India’s success in mass manufacturing, the transition to innovation and new product development has been more of a struggle. Nevertheless the Serum Institute, Aurobindo Pharma, Bharat Biotech, BE, Indian Immunologicals, Mynvax, Panacea Biotech and Zydus Cadila are all attempting to develop their own vaccines.

Bharat Biotech’s Covaxin has attracted the most attention and controversy. The Indian Council of Medical Research wrote to a number of hospitals seeking their help in fast-tracking the clinical trials of the drug, which was developed in collaboration with the National Institute of Virology. The aim had been to launch it by August 15 (Indian Independence Day). Despite the feasibility of that timeline being widely questioned, trials of Covaxin did begin in Delhi on July 15.

Who Gets the Vaccines?

Uncertainty reigns over who will get these vaccines manufactured in India – and there have been very mixed messages. Regarding the much-hyped Oxford/AstraZeneca vaccine, Adar Poonawalla, the Serum Institute’s CEO, said, “a majority of the vaccine, at least initially, would have to go to our countrymen before it goes abroad”.

He added that the Indian government would decide how much other countries get, and when. In a later interview the CEO went further, adding: “Out of whatever I produce, 50% to India and 50% to the rest of the world”. He also said the Indian government had not objected to this idea.

Vaccine diplomacy may come into play, as indicated by India’s foreign minister, Harsh Shringla, on a visit to Dhaka. He promised that India would supply vaccines to Bangladesh on “priority basis”, stating that India’s “closest neighbours, friends, partners and other countries” will receive privileged access.

Meanwhile, a recent agreement provided a firmer guarantee of Serum-Institute-produced vaccines being supplied outside of the country – at least in 2021. On August 7, Gavi (the global vaccine alliance) announced a collaboration with the Serum Institute and the Bill & Melinda Gates Foundation. The deal provides US$150m of financial support for the Serum Institute to manufacture and supply 100 million doses of vaccines to the COVID-19 Vaccine Global Access Facility (COVAX) for distribution in low and middle-income countries in 2021.

The deal will support the company’s manufacture of both the AstraZeneca and Novavax candidates and guarantees a price of US$3 per dose. AstraZeneca’s candidate will be available to 57 Gavi-eligible countries, while the Novavax treatment will be available to 92.

With almost 18% of the world’s population, India has strong demand for COVID-19 vaccines. Export bans on some personal protective equipment and key medicines in March set a precedent for prioritising supply to India first. But the bans were short lived and exports continued.

Thanks to its vast manufacturing capacity, India will undoubtedly export vaccines, continuing its role as the “pharmacy of the developing world”. Vinod Paul, chair of India’s National COVID-19 Task Force, has spoken openly of his desire to see India play a global role, saying: “The vaccine is not just for India and Indians but for the world and humanity.” The question is when. Many in low and middle-income countries will undoubtedly be hoping it will be sooner rather than later.

Rory Horner, Senior Lecturer, Global Development Institute, University of Manchester

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image: Reuters

From Civil To Proxy War: How Syria Became Iran And Israel's Battlefield

Tue, 25/08/2020 - 08:30

Sebastien Roblin

Security, Middle East

By one count, there have been over 150 Israeli strikes in Syria stretching all the way back to 2012.

Here's What You Need To Remember: Now that Assad’s hold on power is secure, it’s becoming clear that growing infrastructure of Iranian bases in Syria is likely there to stay—and that Iran intends to use them to funnel drones, artillery, anti-aircraft and anti-tank weapons to Hezbollah to challenge Israeli military dominance. This has elicited an intensifying Israeli bombardment campaign to knock out the buildup, both through preemptive strikes and reactive counterattacks.

At midnight on the Syrian-Israeli border on May 8–9, 2018 a multiple-rocket launcher system operated by the Quds Force—an expeditionary special forces unit of the Iranian Revolutionary Guard Corps—fired a salvo of twenty unguided 333mm Fajr-5 rockets towards Israel. (You can see the apparent rocket launch here.) Four of the rockets were shot down by the Israeli Iron Dome air defense system and the rest missed and landed in Syrian territory.

A few hours later, around ten Israeli surface-to-surface missile launchers and twenty-eight F-15I and F-16I jets unleashed seventy cruise missiles and precision-guided glide bombs that struck Iranian logistical bases and outposts throughout Syria. The Iranian rocket launcher was destroyed, and when Syrian air defenses attempted to engage the Israeli fighters, five batteries were knocked out.

The May 9 clash is considered the first direct clash between Iranian and Israeli forces, an event likely linked to Washington’s withdrawal from the Iran nuclear deal the day before the attack. However, observers of the region might recall that Israeli warplanes had struck an Iranian convoy in Syria earlier that same day. There were additional strikes on May 6 and April 29 that killed scores of Syrian and Iranian troops—possibly including an Iranian general—and knocked out an S-200 surface-to-air missile battery.

By one count, there have been over 150 Israeli strikes in Syria stretching all the way back to 2012. Many of the raids have targeted transfers of advanced weapon systems to Hezbollah, or been made in response to cross-border attacks.

If you connect the dots, it becomes clear that the May 9 exchange was the most overt flare-up of a long-running proxy war. Iranian and Hezbollah troops—having effectively secured the government of Bashar al-Assad from possible overthrow—are busily establishing a long-term presence and transferring advanced weapons into Syria and to Hezbollah, including drones, surface-to-air missile systems and rocket artillery. At the same time, Israeli warplanes are attempting to destroy these sites and weapon systems before they get deeply entrenched. The U.S. exit from the nuclear deal appears to have prompted Tehran to finally authorize a direct retaliatory attack on Israel—even if it was an ineffective one.

It’s unusual for two regional powers without a common border to be at each other’s throats. However, a series of historical circumstances have brought them to more open military conflict than ever before.

Syria, Lebanon and Hezbollah

Under the regime of the Shah, Iran developed relatively close economic and military ties to Israel. The Iranian Revolution in 1979 brought hardline clerics to power and a formal end to diplomatic ties, but Israel continued to supply Iran with over $500 million in desperately needed weapons during the bloody Iran-Iraq War. At the time, the Israeli government saw Iraq as a more proximate threat—mainly due to its nuclear weapons and ballistic missile programs.

However, the seeds of a deeper Iranian-Israeli conflict were sown in the civil war in Lebanon. The creation of Israel in 1948—and its conquest of additional territories in 1967—displaced hundreds of thousands of Palestinian refugees. Unable to return home, the Palestinian refugees became a permanent, nationless population in neighboring Arab countries. Their presence in the small, multicultural state of Lebanon eventually destabilized a precarious balance of power between diverse factions divided by ethnicity, religion and ideology, contributing to the outbreak of a civil war in 1975.

Seven years later, Israeli troops entered Lebanon in Operation Peace for Galilee, an effort to tilt the war in the favor of Christian factions in Lebanon. Syria—which had fought multiple wars with Israel for control of the Golan Heights—had already deployed troops in Lebanon in support of Palestinian factions, so Syrian tanks and jet fighters clashed with Israeli forces in massive battles. President Ronald Reagan also dispatched troops to Lebanon in an attempt to influence the conflict, but withdrew them after a truck bombing killed 241 Americans in October 1983.

Meanwhile, religious divisions facilitated Iran’s involvement in the conflict. The most important schism in the Islamic faith lies between the Sunni and Shia branches of Islam, comparable to the Catholic-Protestant divide of Christianity. Iran—the chief Shia country in the Islamic world—organized and armed a coalition of Shia fighters called Hezbollah to fight the Israelis.

Though Hezbollah initially opposed Syrian influence in Lebanon, Damascus would eventually become a junior partner in the management of Hezbollah. While Syria has a majority Sunni population, the ruling Assad family were Alawites—a minority associated with Shia Islam—which may have contributed to warm Iranian-Syrian ties.

Israel eventually withdrew from the Lebanese quagmire, and the war ground to its conclusion in 1990 with the defeat of the Maronite Christian faction, and the restoration of a tenuous multiparty democratic system—in which Hezbollah was entrenched as a key player. The Shia group became an odd combination of political party, de facto regional government in southern Lebanon, standing army and international terrorist group (with its sights set on the Israeli forces on the Lebanese border).

Tehran and Damascus also used Hezbollah as a lever to influence Lebanese politics. For example, Syria and Hezbollah are generally believed to be culpable in the 2005 assassination of two-time Lebanese prime minister Rafic Hariri, carried out with a 4,000-pound truck bomb.

Washington Clears the Way for Tehran’s Rise as Regional Power

During the 1990s, Iran ramped up anti-Israel rhetoric to cultivate political support in the wider Muslim world. However, in 2003, the moderate government of Mohammad Khatami transmitted a wide-ranging peace offer to restore relations with the United States and Israel. The Bush administration did not bother replying to a member of the ‘Axis of Evil.’

In April of the same year, U.S. forces invaded Iran’s old enemy, Iraq. It may be difficult to recall today, but in the weeks following the fall of Baghdad, many in Washington openly speculated that Tehran or Damascus might be the next to fall.

However, Iraq had been a Shia-majority nation ruled by a Sunni dictator. With the overthrow of Saddam Hussein, Iraqi Shia politicians and clerics ascended to power and soon shifted the nation towards warmer relations with Tehran. Ironically, the United States had removed one of the chief geographic barriers to Iranian influence. Shia repression of their erstwhile Sunni persecutors would also inspire radical Sunni groups (such as ISIS), prompting the formation of violent, Iran-backed Shia militias to deal with them in a vicious sectarian cycle.

In 2005, the hardline Mahmoud Ahmadinejad was elected president of Iran. A zealous ideologue, Ahmadinejad combined demagogic anti-Israel rhetoric and pageantry—for example, he sponsored an international Holocaust-denial conference—with increased military support and training for Hezbollah. Tehran also began sponsoring Hamas, a Sunni Islamist group that eventually secured control of the Palestinian Gaza Strip territory in 2007.

In the summer of 2006, this dynamic escalated into the 2006 Lebanon War, when Israel—in response to an ambush in which two Israeli soldiers were kidnapped—began a large-scale bombing campaign targeting Hezbollah forces in Lebanon, later escalating into an invasion.

The sheer scale of the bombardment reportedly stunned Hezbollah’s leadership—as well as Lebanese civilians. (The Israel Defense Forces estimates it killed 600 to 700 Hezbollah fighters in the war, while most sources put the death toll in both combatants and noncombatants in Lebanon at around 1,100.) However, Russian-built Kornet-E and Metis anti-tank missiles and a dense network of defensive positions allowed Hezbollah fighters—backed up by Iranian Revolutionary Guard personnel—to inflict unexpectedly heavy losses on Israeli tanks and infantry.

The war ended in a ceasefire and clashes diminished in frequency, in part due to a successful surge in UN peacekeeping forces. Hezbollah and Israel licked their wounds and were soon engaged in other conflicts.

Israel was also concerned with Iran and Syria’s covert nuclear weapons program, and Iran’s growing ballistic missile capabilities in particular. In Syria, Iranian, Syrian and North Korean engineers had been collaborating on a nuclear reactor facility and a new chemical weapons plant. However, in July 2007 the al-Safir chemical plant suffered a ‘mysterious’ explosion, while Israeli warplanes destroyed the reactor in Deir-ez-Zor that September, effectively bringing an end to Syria’s nuclear ambitions.

Iran, however, lies beyond the easy striking distance of Israeli jets, with several intervening international borders. Israel therefore began lobbying Washington to launch a preventive war to knock out the Iranian research program. Though this idea was popular with neoconservatives in the Bush administration, the Iraq War by then had proven such a debacle that political support was lacking.

Instead, the Israeli military employed clandestine means to strike at Tehran’s nuclear program. Starting in 2010, Israeli assassination attempts killed at least five Iranian nuclear scientists and wounded another. Iranian attempts to retaliate—via international terror attacks—were mostly unsuccessful.

Iran, Hezbollah and Russia Team Up to Save Assad

In 2011, the Arab Spring caused a wave of political unrest to sweep across the Middle East, igniting a civil war in a drought-stricken Syria. By 2012, Damascus had lost control of vast swathes of the country to disunited Sunni Arab and Kurdish rebel groups.

Tehran did everything it could to save Assad, dispatching teams of Iranian Revolutionary Guard Corps soldiers and officers to train and lead Syrian government troops into battle. However, the fragmenting Syrian military required more manpower as the rebellion spread—so in 2012, around 3,000 Hezbollah fighters poured over the Lebanese border to join the fight on behalf of Assad.

In truth, even the support of Iran and Hezbollah was not enough for Assad to win the war—but they kept his regime on life support. It was the intervention of Russian military and mercenary forces starting in the fall of 2015 that finally turned the tide.

Syria remained the last major outpost of Russian influence in the Middle East, notably in the form of a naval facility at Tartus. Moscow calculated that intervention in the war would not only preserve Syria as an asset but also afford it a chance to test numerous weapons systems that had never been used in combat before, thereby advertising their capabilities to potential export clients.

Now that Assad’s hold on power is secure, it’s becoming clear that growing infrastructure of Iranian bases in Syria is likely there to stay—and that Iran intends to use them to funnel drones, artillery, anti-aircraft and anti-tank weapons to Hezbollah to challenge Israeli military dominance. This has elicited an intensifying Israeli bombardment campaign to knock out the buildup, both through preemptive strikes and reactive counterattacks.

So far, the exchange has been a lopsided one, with only a single Israeli warplane shot down by Syrian air defense over seven years, in exchange for dozens of targets hit by Israeli guided munitions and numerous facilities and advanced weapons destroyed. Iranian and Hezbollah forces, however, have continued to test Israeli defenses with drones and artillery, so the proxy war could potentially continue escalating for some time.

Sébastien Roblin holds a Master’s Degree in Conflict Resolution from Georgetown University and served as a university instructor for the Peace Corps in China. He has also worked in education, editing, and refugee resettlement in France and the United States. He currently writes on security and military history for War Is Boring. This article first appeared in 2018.

Image: Reuters.

Is The F-5E Tiger II Or Soviet MiG-21 Superior? Here's What The Iran-Iraq War Has To Say

Tue, 25/08/2020 - 08:00

Robert Beckhusen

Security, Middle East

It became obvious that both the F-5E and the MiG-21 lacked the advanced sensors, weapons and electronic countermeasures necessary for survival over a battlefield saturated with massive volumes of anti-aircraft weaponry.

Here's What You Need To Remember: As far as is currently known, the Iran-Iraq War thus ended without a clear winner in this duel. Four aircraft of each type were destroyed.

There have been countless discussions over which is the better fighter jet—the U.S.-made Northrop F-5E Tiger II or the Soviet MiG-21 Fishbed.

That can be a hard argument to settle. The Iran-Iraq war was probably a draw for the two types.

More than 15,000 of these two cheap, lightweight, simple-to-maintain and -operate fighters were produced and, over time, they’ve served in more than 60 different air forces — some of which operated both of them.

The usual story is that they never met in combat and thus the ultimate question about their mutual superiority remains unanswered. But actually, they did clash — and not only once.

Their first air battles — fought in the course of the long-forgotten conflict over the Horn of Africa in summer 1977 — ended with a rather one-sided victory for F-5Es of the Ethiopian Air Force. These shot down nine MiG-21s — not to mention two MiG-17s — of the Somali Air Force while suffering zero losses in air combat.

Slightly more than three years later, the two types clashed again in the course of the Iran-Iraq War. Iraq had opportunistically exploited internal chaos resulting from the Islamic Revolution in Iran in 1978 and ’79. The Revolution toppled the U.S.-allied Shah Mohammed Reza Pahlavi and installed a regime that nearly disbanded the Iranian military.

The former Imperial Iranian Air Force became the Islamic Republic of Iran Air Force, or IRIAF. The air arm lost nearly two-thirds of its officers and other ranks to arrests, executions or forced early retirements. By the time Iraq invaded Iran on Sept. 22, 1980, the IRIAF was a shadow of its former self.

Nevertheless, the IRIAF still had five squadrons equipped with around 115 F-5E/Fs, and one squadron flying reconnaissance-optimized RF-5As. Some 40 additional Tiger IIs were in storage. Their primary air-to-air weapons were the AIM-9J variant of the U.S.-made Sidewinder missile and two internally mounted 20-millimeter Colt M39 cannons. But in Iran the type was primarily deployed as a fighter-bomber, armed with different bombs.

Operated by nine squadrons, the MiG-21 was the backbone of the Iraqi Air Force, or IrAF. The best units were equipped with the ultimate MiG-21bis variant and the latest Soviet-made air-to-air missiles, such as the AA-2C Advanced Atoll and the AA-8 Aphid.

A few MiG-21s were locally modified to carry French-made R.550 Magic air-to-air missiles, a small batch of which was delivered to Iraq in summer 1980 pending the first deliveries of Dassault Mirage F.1EQ interceptors.

Training of Iraqi and Iranian pilots was of similar quality. All the Iranians underwent extensive training in the United States and some in Pakistan. Meanwhile, the Iraqis developed their own tactical procedures informed by the Arab experience in the October 1973 war with Israel. Indian, French and Soviet input also shaped Iraqi training.

Iranian F-5Es and Iraqi MiG-21s clashed for the first time on Sept. 24, 1980, two days after Iraq invaded Iran. Two MiGs sneaked up unobserved on a four-ship of Tigers that was approaching Hurrya Air Base loaded with Mk.82 bombs.

One of the Iraqis’ missiles detonated harmlessly under the aircraft flown by Capt. Yadollah Sharifi-Ra’ad, alerting him to the enemy’s presence. Sharifi-Ra’ad then splashed one of the Iraqis with a single Sidewinder.

In an attempt to destroy the main Iraqi source of income, the IRIAF began bombing the Iraqi oil industry on Sept. 26, 1980. This operation resulted in most of the clashes between the two fighter types.

The first of these occurred on the same day and developed when a pair of Tiger IIs was intercepted by a pair of MiG-21s while approaching the Qanaqin oil refinery. While the Iraqis claimed to have shot down the aircraft flown by famous Iranian pilot Capt. Zarif-Khadem — formerly a member of the Taj-Talee Acrojet team, an Iranian equivalent of the U.S. Air Force’s Thunderbirds — the Iranians insisted he hit the ground while attempting to evade one of the MiGs.

On Nov. 14, 1980, a pair of MiG-21bis from No. 47 Squadron, led by a captain we’ll call “Zaki,” caught an F-5E that had separated from its formation after striking an oil refinery in Mosul. Wrongly identifying his target as an F-4 Phantom, the Iraqi pilot recalled what happened next.

“Our order was to patrol an area we thought was used as a waypoint by the enemy. We carefully scanned the skies and a few minutes later saw a lonesome Phantom [sic]. I dove and accelerated while cutting the corner, got a good tone and pressed the trigger.”

“I had never fired a Magic before, but had heard of the first kill scored with it nearly a month earlier,” the Iraqi MiG-21 pilot continued. “The missile guided and everything looked fine — until it passed harmlessly by the tail of my target! My first reaction was one of shock, but a fraction of a second later the Magic detonated near the cockpit, causing a big fireball.”

The biggest air battle between the two types in this war took place on Nov. 26, 1980, when eight IRIAF F-5Es entered Iraqi airspace with the intention of simultaneously striking a power plant in Dukan, a radar station outside Halabcheh, an observation post outside Suleimaniyah and Al Hurrya Air Base.

By that time, Iraqi MiG-21s were regularly patrolling the northern section of Iraq’s border with Iran. A pair of MiGs led by Capt. Nawfal from No. 47 Squadron intercepted two F-5Es that approached Dukan and, launching an AA-8, shot down the aircraft flown by 1st Lt. Abul-Hassan, killing him.

The other Iraqi pair intercepted the formation led by Capt. Sharifi-Ra’ad, tasked with bombing a target outside Suleimaniyah. Once again, however, the experienced Iranian outsmarted his opponents.

“I found our target not occupied and decided to re-route towards the telecommunication facility outside Suleimaniyah instead,” Sharifi-Ra’ad recalled. “Once there, my plane shook and I warned my wingman about enemy flak. Then I glanced to the left and sighted a MiG-21: that was the reason for my aircraft shaking.”

“I released my bombs and prepared for air combat while descending to a very low level and then turning hard to force the MiG to overshoot. The Iraqi pilot made a mistake and reduced his speed, while I made another mistake by firing an AIM-9J at him much too early. The Sidewinder failed to lock on and missed its target.”

“I switched to guns and fired a burst at his right wing from short range,” Sharifi-Ra’ad added. “He was watching me as we descended very low, and then his left wing touched the ground — and his aircraft exploded.”

Both sides agree that an F-5E and a MiG-21 collided during this dogfight and that both pilots perished, but the Iraqis insist that 1st Lt. Abdullah Lau’aybi intentionally rammed his MiG into the Tiger II flown by the Iranian pilot Zanjani — an act that made him a sort of local legend.

The most intensive period of aerial warfare between Iran and Iraq came to an end in late 1980. Both sides were physically and materially exhausted by four months of intensive operations and heavy losses.

Furthermore, it became obvious that both the F-5E and the MiG-21 lacked the advanced sensors, weapons and electronic countermeasures necessary for survival over a battlefield saturated with massive volumes of anti-aircraft weaponry.

Unsurprisingly, both types were increasingly relegated to secondary duties, and their mutual clashes became outright rarities. The last known air combat between Iranian F-5Es and Iraqi MiG-21s took place on Nov. 13, 1983, when Capt. Ibrahim Bazargan shot down an Iraqi MiG involved in an air strike on Ahwaz International Airport.

As far as is currently known, the Iran-Iraq War thus ended without a clear winner in this duel: four F-5Es and four MiG-21s were destroyed.

This article first appeared in 2018.

Image: Wikipedia.
