
A New US-Run Pier Off Gaza Could Help Deliver 2 Million Meals A Day – But It Comes With Security Risks

Crew members of the Army ship James A. Loux in Hampton, Va., prepare on March 12, 2024, for the ship to go to the Middle East to build the Gaza pier. Roberto Schmidt/AFP via Getty Images

The U.S. has dispatched eight Army and Navy vessels from Virginia to build a temporary pier off the coast of the Gaza Strip. The aim of this work: to supply food and other necessary items for Palestinians as the war between Israel and Hamas continues and the resulting humanitarian crisis worsens.

Even before the Oct. 7, 2023, massacre of Israeli citizens by Hamas that sparked the war, about 80% of Palestinians in Gaza relied on foreign humanitarian assistance to meet their basic needs, including food. Now, the United Nations is warning that half of Palestinians in Gaza face famine within the next few months.

The new pier, which is expected to be operational sometime in May 2024, could help deliver 2 million meals a day to Gaza’s estimated 2.2 million residents.

A complex set of factors – including limited entry points into Gaza, Israeli restrictions on what enters the territory, poor road conditions and security concerns – has left aid groups unable to deliver sufficient food to people in Gaza. Israel says it is not directly obstructing aid deliveries, but some critics – including South Africa, which is bringing a genocide case against Israel before the International Court of Justice – disagree.

The U.N. is pressing Israel to approve food truck convoys run by the main U.N. aid agency supporting people in Gaza, known by the acronym UNRWA – an agency Israel announced on March 25, 2024, that it would no longer work with.

Feeding the entire population of Gaza would require a ninetyfold increase in daily food deliveries by airdrop, plus 500 trucks a day instead of the dozen or so vehicles that now enter Gaza daily.

As a former White House national security aide and former U.S. diplomat, I understand the internal workings of the civilian-military sides of constructing a pier and other projects like this during war. I also am aware of the security dimensions that accompany this kind of endeavor.

The temporary pier could offer a partial solution to averting famine in Gaza. But the operation also involves complex logistics, high costs and security risks.

A crowd of Palestinians waits to receive food distributed by a charity organization on March 27, 2024.
Mahmoud Issa/Anadolu via Getty Images

How the floating pier works

About 1,000 U.S. soldiers will construct this temporary port, which will serve as a relay site for food that comes by ship from Cyprus, before the goods are ferried by water into Gaza.

No U.S. soldiers are expected to set foot in Gaza. Government contractors will reportedly be responsible for moving products by boat across the approximately 3 miles that will separate the pier from Gaza.

What will the finished project look like?

Imagine standing on a beach where a long plank extends from the shoreline out over the water, leading to a large, floating pier surrounded by boats.

The components of this Gaza project are similar: a floating pier, an 1,800-foot-long (549-meter-long) causeway attached to the shore, boats pulled up alongside to help with sorting and moving food, and barges to transport aid from the pier.

Large ships must be able to unload supplies onto the pier, including tons of food, water and medicine. Smaller boats will then need to carry the aid closer to shore, because Gaza’s port is no longer functional and its waters are too shallow for large vessels. The new pier is the transfer point between those two operations.

Not the first time the US has used this kind of operation

The Pentagon has erected temporary piers for decades, both for military support during wartime and emergency humanitarian assistance in times of conflict.

This work is done through a 30-year-old program that integrates the Navy and the Army. But as far back as World War II, the Allied forces’ landing at Normandy was aided by the construction of a floating dry dock pier.

During Operation Desert Storm in 1991 – a military operation to oust Iraqi forces from Kuwait – the U.S. created a floating pier for military purposes because the Iraqis had mined Kuwait’s port and the U.S. often resupplied troops on the ground via the sea.

Most often, floating piers are built to create a port after or during a crisis.

The U.S. built a floating pier in Port-au-Prince Bay following the 2010 Haiti earthquake. This allowed humanitarian agencies, which could not travel on badly damaged roads, to get food and medicine to civilians in need.

The U.S. military continues to build floating piers in training missions, like one it temporarily constructed off South Korea in 2015 to test cargo deliveries in the event of a crisis.

Palestinians rush to the coast after humanitarian air drops land in Gaza on March 25, 2024.
Mahmoud Issa/Anadolu via Getty Images

Security risks persist

Security is of paramount concern with this type of construction during an active war.

The Biden administration has made clear from the start of the war that there would be no U.S. boots on the ground in Gaza, but this mission brings troops dangerously close to the action.

Israeli officials have given Biden the green light to pursue this operation, and ships will undergo security checks in Cyprus before they head to the pier, which should speed up unloading. Reports citing unnamed defense officials say that Israeli soldiers will also be positioned around the pier to provide security.

But the pier could become a target for Hamas or other Iranian-backed proxy groups, in Gaza or elsewhere, that still have mortars, rockets, drones and other means to harass or attack the ships.

It also could lead to stampedes for the aid. Twelve people drowned off Gaza’s northern coast trying to retrieve food from the Mediterranean Sea on March 26, 2024.

An unknown cost

Major military operations are expensive, and there is no exact, publicly available price tag for the Gaza pier project.

To me, there is a certain irony in the fact that Israel is the largest recipient of U.S. foreign aid, including major weapons systems, and the U.S. is now spending money to build a pier to deliver aid to the very people harmed by this U.S. ally’s use of those weapons.

The Conversation

Tara Sonenshine does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


AI Chatbots Refuse To Produce ‘Controversial’ Output – Why That’s A Free Speech Problem

AI chatbots restrict their output according to vague and broad policies. taviox/iStock via Getty Images

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?

We are policy researchers who study free speech and serve as the executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies’ misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.

Our analysis found that companies’ hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google’s can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women’s sports tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women’s tournaments. However, most of them did produce posts supporting their participation.

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators’ subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.

Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the COVID-19 pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not legally bound by international human rights law. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies’ policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI’s integration into search, word processors, email and other applications.

This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe’s online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union’s 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies’ influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It’s also important to remember that users have a significant degree of autonomy over the content they see in generative AI. As with search engines, the output users receive depends greatly on their prompts. Therefore, users’ exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it out.

This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid refusing to generate content altogether unless there are solid public interest grounds for doing so, such as preventing child sexual abuse material, which laws already prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

The Conversation

Jordi Calvet-Bademunt is affiliated with The Future of Free Speech. The Future of Free Speech is a non-partisan, independent think tank that has received limited financial support from Google for specific projects. However, Google did not fund the report we refer to in this article. In all cases, The Future of Free Speech retains full independence and final authority for its work, including research pursuits, methodology, analysis, conclusions, and presentation.

Jacob Mchangama is affiliated with The Future of Free Speech. The Future of Free Speech is a non-partisan, independent think tank that has received limited financial support from Google for specific projects. However, Google did not fund the report we refer to in this article. In all cases, The Future of Free Speech retains full independence and final authority for its work, including research pursuits, methodology, analysis, conclusions, and presentation.


5 Years After The Mueller Report Into Russian Meddling In The 2016 US Election On Behalf Of Trump: 4 Essential Reads

Former Special Counsel Robert Mueller testifies before the House Intelligence Committee on July 24, 2019. Alex Wong/Getty Images

In the long list of Donald Trump’s legal woes, the Mueller report – which was released in redacted form on April 18, 2019 – appears all but forgotten.

But the nearly two-year investigation into alleged Russian interference in the 2016 U.S. presidential election dominated headlines – and revealed what has become Trump’s trademark denial of any wrongdoing. For Trump, the Russia investigation was the first “ridiculous hoax” and “witch hunt.”

Mueller didn’t help matters. “While this report does not conclude that the president committed a crime, it also does not exonerate him,” the special counsel stated.

With such equivocal language, it’s easy to see how Democrats and Republicans – and the American public – responded to the report in completely different ways. While progressive Democrats wanted Trump to be impeached, some GOP leaders called for an investigation into the origins of the investigation itself.

Over the past five years, The Conversation U.S. has published the work of several scholars who followed the Mueller investigation and what it revealed about Trump. Here, we spotlight four examples of these scholars’ work.

1. Obstruction of justice

As a law professor and one-time elected official, David Orentlicher pointed out that Trump did many things that influenced federal investigations into him and his aides. They include firing FBI Director James Comey, publicly attacking the special counsel’s work and pressuring then-Attorney General Jeff Sessions not to recuse himself from overseeing Mueller’s investigation.

Some accused Trump of obstructing justice with these actions. But Orentlicher wrote that obstruction of justice is “a complicated matter.”

According to federal law, obstruction occurs when a person tries to impede or influence a trial, investigation or other official proceeding with threats or corrupt intent. The law specifically requires that the intent to obstruct justice be “corrupt.”

But in a March 24, 2019, letter to Congress summarizing Mueller’s findings, then-Attorney General William Barr said he saw insufficient evidence to prove that Trump had obstructed justice.

William Barr’s letter to Congress summarizing the findings of Mueller’s report.
AP Photo/Jon Elswick

So it was up to Congress to further a case against Trump on obstruction charges, but then-Speaker of the House Nancy Pelosi declined, arguing that it would be too divisive for the nation and Trump “just wasn’t worth it.”




Read more: Trump and obstruction of justice: An explainer


2. Why didn’t the full report become public?

Charles Tiefer, a professor of law at the University of Baltimore, expected that Trump and Barr would do “everything in their power to keep secret the full report and, equally important, the materials underlying the report.”

Tiefer was right. To keep Mueller’s report private, Barr invoked grand jury secrecy – the rule that attorneys, jurors and others “must not disclose a matter occurring before the grand jury.”

Attorney General William Barr was handpicked by Donald Trump to be in office when the Mueller report came in.
AP Photo/Alex Brandon/Jose Luis Magana

Trump and Barr also claimed executive privilege to further prevent the release of the report. Though it cannot be used to shield evidence of a crime, Tiefer explained, “that’s where Barr’s exoneration of Trump really helped the White House.”




Read more: How Trump and Barr could stretch claims of executive privilege and grand jury secrecy


3. Alternative facts

Political scientists David C. Barker and Morgan Marietta asked an important question: After nearly two years of waiting, why didn’t the report help the nation achieve a consensus over what happened in the 2016 presidential election?

In their book, “One Nation, Two Realities,” they found that voters see the world in ways that reinforce their values and identities, irrespective of whether they have ever watched Fox News or MSNBC.

“The conflicting factual assertions that have emerged since the report’s release highlight just how easy it is for citizens to believe what they want, regardless of what Robert Mueller, William Barr or anyone else has to say about it,” they wrote.

Perhaps the most disappointing finding, they argued, is that there are no known fixes to this problem. They found that fact-checking has little impact on changing individual beliefs, and more education only sharpens the divisions.

And with that, they wrote, “the U.S. continues to inch ever closer to a public square in which consensus perceptions are unavailable and facts are irrelevant.”




Read more: From ‘Total exoneration!’ to ‘Impeach now!’ – the Mueller report and dueling fact perceptions


4. Trump’s demand for loyalty

Political science professor Yu Ouyang studies loyalty and politics at Purdue University Northwest. He explained that it’s normal for presidents to prefer loyalists.

What sets Trump apart, Ouyang wrote, is his “exceptional emphasis on loyalty.”

Trump expects personal loyalty from his staff – especially from his attorney general.

When his first attorney general, Sessions, recused himself from overseeing the FBI’s probe into Russian meddling, Trump considered it an act of betrayal and forced him out in November 2018. Sessions’ removal enabled Trump to hire Barr.

“Trump values loyalty over other critical qualities like competence and honesty. … And he appoints his staff accordingly,” Ouyang wrote.




Read more: Why does a president demand loyalty from people who work for him?


The Conversation


Cities With Black Women Police Chiefs Had Less Street Violence During 2020’s Black Lives Matter Protests

Black Lives Matter protests often pitted demonstrators against police – but not in every city. Samuel Corum/AFP via Getty Images

Black Lives Matter protests in cities with Black women police chiefs saw significantly lower levels of violence – from both police and protesters – than protests in cities whose police chiefs were of other racial and gender backgrounds, according to our newly published paper.

After George Floyd’s death at the hands of Minneapolis police on May 25, 2020, the Black Lives Matter movement surged. Advocating for social justice, the movement galvanized over 11,000 protest events across thousands of cities in all 50 states. Most demonstrations were peaceful, but others were not, and city police chiefs had the job of dealing with street violence. In some communities, they engaged in dialogue with protesters; in others, they responded with force.

Our research included analyzing 11,540 protests that occurred between May 25 and Aug. 29, 2020, in 3,338 cities, spanning 1,481 counties and all 50 states. To ensure robustness and eliminate bias, we measured violence based on an independent categorization of violence, protest event descriptions, numbers of arrests and severity of the charges. We also researched the gender and racial background of the local police chief.

Our analysis, published in the Journal of Management, found that protests in cities with police departments led by Black women tended to be relatively peaceful.

Consider, for instance, Black female Chief Catrina Thompson in Winston-Salem, North Carolina, who chose dialogue over force. She conveyed solidarity with the Black Lives Matter cause and affirmed that peaceful protests could spur change without destroying the city.

By contrast, a protest in Lincoln, Nebraska, in late May 2020 saw a group of protesters break store windows and threaten police officers, which resulted in police officers – in a department led by white male Chief Jeff Bliemeister – firing pepper spray, tear gas and rubber bullets.

This and other research has found that, through their personal and professional experiences rising through the ranks of a traditionally white, male profession, Black women tend to develop a strong understanding of racial dynamics and use that knowledge to devise flexible strategies.

Of course, not all Black women lead in exactly the same ways, but they tend to share similar experiences that can help foster peaceful outcomes in times of social unrest.

Why it matters

Amid a backdrop of widespread protests and calls for social justice, public safety depends on peaceful interactions between police and demonstrators.

The study highlights the significance of having diverse leadership voices and the importance of recognizing and elevating individual identities. Despite a rise in the appointment of Black police chiefs over the past decade, Black women continue to be underrepresented in law enforcement leadership positions. This research underscores the value to society of including diverse perspectives and leadership approaches informed by the intersections of people’s identities.

What still isn’t known

Despite these insights, several questions remain unanswered. We do not yet know the specific way in which the leadership of Black women police chiefs translates into lower violence levels. We suggest the mechanism is a complex result of their communication strategies, community engagement practices and decision-making processes – but we do not know which has the most influence.

Our study also raises questions about how these findings about Black women leaders at a time of Black protest might apply to other civic leaders’ handling of demonstrations by different types of social movements.

What’s next

The study paves the way for more in-depth research into how intersecting identities – such as gender and race – affect leadership approaches and outcomes across various professions, not just law enforcement.

Ongoing research efforts – our own and others’ – are directed at better understanding how people’s identities inform their leadership styles and how they handle conflict. Future studies are also needed to explore how organizations and communities can better support Black women and promote them into leadership roles, ensuring their perspectives and skills benefit society as a whole.

The Research Brief is a short take on interesting academic work.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
