
What is the National Cybersecurity Strategy? A cybersecurity expert explains what it is and what the Biden administration has changed

The federal government has a lot of cybersecurity resources, but the private sector plays a key role in national cyber defense. (U.S. government)

The Biden administration released its first National Cybersecurity Strategy on March 2, 2023. The last version was issued in 2018 during the Trump administration.

As the National Security Strategy does for national defense, the National Cybersecurity Strategy outlines a president’s priorities regarding cybersecurity issues. The document is not a directive. Rather, it describes in general terms what the administration is most concerned about, who its major adversaries are and how it might achieve its goals through legislation or executive action. These types of strategy statements are often aspirational.

As expected, the 2023 Biden National Cybersecurity Strategy reiterates previous recommendations about how to improve American cybersecurity. It calls for improved sharing of information between the government and private sector about cybersecurity threats, vulnerabilities and risks. It prescribes coordinating cybersecurity incident response across the federal government and enhancing regulations. It describes the need to expand the federal cybersecurity workforce. It emphasizes the importance of protecting the country’s critical infrastructure and federal computer systems. And it identifies China, Russia, Iran and North Korea as America’s main adversaries in cyberspace.

However, as a former cybersecurity industry practitioner and current cybersecurity researcher, I think that the 2023 document incorporates some fresh ideas and perspectives that represent a more holistic approach to cybersecurity. At the same time, though, some of what is proposed may not be as helpful as envisioned.

Some of the key provisions in the current National Cybersecurity Strategy relate to the private sector, both in terms of product liability and cybersecurity insurance. It also aims to reduce the cybersecurity burden on individuals and smaller organizations. However, I believe it doesn’t go far enough in fostering information-sharing or addressing the specific tactics and techniques used by attackers.

Acting National Cybersecurity Director Kemba Walden discusses the Biden administration’s National Cybersecurity Strategy.

The end of vendor indemnification?

For decades, the technology industry has operated under what is known as “shrink-wrap” licensing. This refers to the multiple pages of legal text that customers, both large and small, routinely are forced to accept before installing or using computer products, software and services.

While much has been written about these agreements, such licenses generally have one thing in common: They ultimately protect vendors such as Microsoft or Adobe from legal consequences for any damages or costs arising from a customer’s use of their products, even if the vendor is at fault for producing a flawed or insecure product that affects the end user.

In a groundbreaking move, the new cybersecurity strategy says that while no product is totally secure, the administration will work with Congress and the private sector to prevent companies from being shielded from liability claims over the security of their products, which underpin most of modern society.

Removing that legal shield is likely to encourage companies to make security a priority in their product development cycles and have a greater stake in the reliability of their products beyond the point of sale.

In another noteworthy shift, the strategy observes that end users bear too great a burden for mitigating cybersecurity risks. It states that a collaborative approach to cybersecurity and resiliency “cannot rely on the constant vigilance of our smallest organizations and individual citizens.” It stresses the importance of manufacturers of critical computer systems, as well as companies that operate them, in taking a greater role in improving the security of their products. It also suggests expanded regulation toward that goal may be forthcoming.

Interestingly, the strategy places great emphasis on the threat from ransomware as the most pressing cybercrime facing the U.S. at all levels of government and business. It now calls ransomware a national security threat and not simply a criminal matter.

Backstopping cyber insurance

The new strategy also directs the federal government to consider taking on some responsibility for so-called cybersecurity insurance.

Here, the administration wants to ensure that insurance companies are adequately funded to respond to claims following a significant or catastrophic cybersecurity incident. Since 2020, the market for cybersecurity-related insurance has grown nearly 75%, and organizations of all sizes consider such policies necessary.

This is understandable given how many companies and government agencies are reliant on the internet and corporate networks to conduct daily operations. By protecting, or “backstopping,” cybersecurity insurers, the administration hopes to prevent a major systemic financial crisis for insurers and victims during a cybersecurity incident.

However, cybersecurity insurance should not be treated as a free pass for complacency. Thankfully, insurers now often require policyholders to prove they are following best cybersecurity practices before approving a policy. This helps protect them from issuing policies that are likely to face claims arising from gross negligence by policyholders.

Looking forward

In addition to dealing with present concerns, the strategy also makes a strong case for ensuring the U.S. is prepared for the future. It speaks about fostering technology research that can improve or introduce cybersecurity in such fields as artificial intelligence, critical infrastructure and industrial control systems.

The strategy specifically warns that the U.S. must be prepared for a “post-quantum future” where emerging technologies could render existing cybersecurity controls vulnerable. This includes current encryption systems that could be broken by future quantum computers.

Practical quantum computers, when they arrive, will force a change in how the internet is secured.

Where the strategy falls short

While the National Cybersecurity Strategy calls for continuing to expand information-sharing related to cybersecurity, it pledges to review federal classification policy to see where additional classified access to information is necessary.

The federal government already suffers from overclassification, so if anything, I believe less classification of cybersecurity information is needed to facilitate better information-sharing on this issue. It’s important to reduce administrative and operational obstacles to effective and timely interaction, especially where collaborative relationships are needed between industry, academia and federal and state governments. Excessive classification is one such challenge.

Further, the strategy does not address the use of cyber tactics, techniques and procedures in influence or disinformation campaigns and other actions that might target the U.S. This omission is perhaps intentional because, although cybersecurity and influence operations are often intertwined, reference to countering influence operations could lead to partisan conflicts over freedom of speech and political activity. Ideally, the National Cybersecurity Strategy should be apolitical.

That being said, the 2023 National Cybersecurity Strategy is a balanced document. While in many ways it reiterates recommendations made since the first National Cybersecurity Strategy in 2002, it also provides some innovative ideas that could strengthen U.S. cybersecurity in meaningful ways and help modernize America’s technology industry, both now and into the future.

The Conversation

Richard Forno has received research funding related to cybersecurity from the National Science Foundation (NSF) and the Department of Defense (DOD) during his academic career, and sits on the advisory board of BlindHash, a cybersecurity startup focused on remedying the password problem. He is co-PI of UMBC’s Scholarship for Service program, which is referenced in the 2023 National Cybersecurity Strategy.



AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem

AI chatbots restrict their output according to vague and broad policies. (taviox/iStock via Getty Images)

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?

We are policy researchers who study free speech and serve as the executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies’ misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.

Our analysis found that companies’ hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google’s can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women’s sports tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women’s tournaments. However, most of them did produce posts supporting their participation.

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators’ subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.

Defining what constitutes accurate information also has political implications. Several governments used rules adopted during the COVID-19 pandemic to repress criticism of their own policies. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights law. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies’ policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI’s integration into search, word processors, email and other applications.

This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe’s online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union’s 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies’ influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It’s also important to remember that users have a significant degree of autonomy over the content they see in generative AI. As with search engines, the output users receive depends greatly on their prompts. Therefore, users’ exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.

This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified for social media platforms, since those platforms distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid refusing to generate content altogether unless there are solid public interest grounds, such as preventing child sexual abuse material, which laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

The Conversation

Jordi Calvet-Bademunt is affiliated with The Future of Free Speech. The Future of Free Speech is a non-partisan, independent think tank that has received limited financial support from Google for specific projects. However, Google did not fund the report we refer to in this article. In all cases, The Future of Free Speech retains full independence and final authority for its work, including research pursuits, methodology, analysis, conclusions, and presentation.

Jacob Mchangama is affiliated with The Future of Free Speech. The Future of Free Speech is a non-partisan, independent think tank that has received limited financial support from Google for specific projects. However, Google did not fund the report we refer to in this article. In all cases, The Future of Free Speech retains full independence and final authority for its work, including research pursuits, methodology, analysis, conclusions, and presentation.


5 years after the Mueller report into Russian meddling in the 2016 US election on behalf of Trump: 4 essential reads

Former Special Counsel Robert Mueller testifies before the House Intelligence Committee on July 24, 2019. (Alex Wong/Getty Images)

In the long list of Donald Trump’s legal woes, the Mueller report – which was released in redacted form on April 18, 2019 – appears all but forgotten.

But the nearly two-year investigation into alleged Russian interference in the 2016 U.S. presidential election dominated headlines – and revealed what has become Trump’s trademark denial of any wrongdoing. For Trump, the Russia investigation was the first “ridiculous hoax” and “witch hunt.”

Mueller didn’t help matters. “While this report does not conclude that the president committed a crime, it also does not exonerate him,” the special counsel stated.

With such equivocal language, it’s easy to see how Democrats and Republicans – and the American public – responded to the report in completely different ways. While progressive Democrats wanted Trump to be impeached, some GOP leaders called for an investigation into the origins of the investigation itself.

Over the past five years, The Conversation U.S. has published the work of several scholars who followed the Mueller investigation and what it revealed about Trump. Here, we spotlight four examples of these scholars’ work.

1. Obstruction of justice

As a law professor and one-time elected official, David Orentlicher pointed out that Trump did many things that influenced federal investigations into him and his aides. They include firing FBI Director James Comey, publicly attacking the special counsel’s work and pressuring then-Attorney General Jeff Sessions not to recuse himself from overseeing Mueller’s investigation.

Some accused Trump of obstructing justice with these actions. But Orentlicher wrote that obstruction of justice is “a complicated matter.”

According to federal law, obstruction occurs when a person tries to impede or influence a trial, investigation or other official proceeding through threats or with a “corrupt” intent.

But in a March 24, 2019, letter to Congress summarizing Mueller’s findings, then-Attorney General William Barr said he saw insufficient evidence to prove that Trump had obstructed justice.

William Barr’s letter to Congress summarizing the findings of Mueller’s report. (AP Photo/Jon Elswick)

So it was up to Congress to further a case against Trump on obstruction charges, but then-Speaker of the House Nancy Pelosi declined, arguing that it would be too divisive for the nation and Trump “just wasn’t worth it.”




Read more: Trump and obstruction of justice: An explainer


2. Why didn’t the full report become public?

Charles Tiefer, a professor of law at the University of Baltimore, expected that Trump and Barr would do “everything in their power to keep secret the full report and, equally important, the materials underlying the report.”

Tiefer was right. To keep Mueller’s report private, Barr invoked grand jury secrecy – the rule that attorneys, jurors and others “must not disclose a matter occurring before the grand jury.”

Attorney General William Barr was handpicked by Donald Trump to be in office when the Mueller report came in. (AP Photo/Alex Brandon/Jose Luis Magana)

Trump and Barr also claimed executive privilege to further prevent the release of the report. Though it cannot be used to shield evidence of a crime, Tiefer explained, “that’s where Barr’s exoneration of Trump really helped the White House.”




Read more: How Trump and Barr could stretch claims of executive privilege and grand jury secrecy


3. Alternative facts

Political scientists David C. Barker and Morgan Marietta asked an important question: After nearly two years of waiting, why didn’t the report help the nation achieve a consensus over what happened in the 2016 presidential election?

In their book, “One Nation, Two Realities,” they found that voters see the world in ways that reinforce their values and identities, irrespective of whether they have ever watched Fox News or MSNBC.

“The conflicting factual assertions that have emerged since the report’s release highlight just how easy it is for citizens to believe what they want, regardless of what Robert Mueller, William Barr or anyone else has to say about it,” they wrote.

Perhaps the most disappointing finding, they argued, is that there are no known fixes to this problem. They found that fact-checking has little impact on changing individual beliefs, and more education only sharpens the divisions.

And with that, they wrote, “the U.S. continues to inch ever closer to a public square in which consensus perceptions are unavailable and facts are irrelevant.”




Read more: From ‘Total exoneration!’ to ‘Impeach now!’ – the Mueller report and dueling fact perceptions


4. Trump’s demand for loyalty

Political science professor Yu Ouyang studies loyalty and politics at Purdue University Northwest. He explained that it’s normal for presidents to prefer loyalists.

What sets Trump apart, Ouyang wrote, is his “exceptional emphasis on loyalty.”

Trump expects personal loyalty from his staff – especially from his attorney general.

When his first attorney general, Sessions, recused himself from overseeing the FBI’s probe into Russian meddling, Trump considered it an act of betrayal and fired him in November 2018. Sessions’ removal enabled Trump to hire Barr.

“Trump values loyalty over other critical qualities like competence and honesty. … And he appoints his staff accordingly,” Ouyang wrote.




Read more: Why does a president demand loyalty from people who work for him?


The Conversation


Cities with Black women police chiefs had less street violence during 2020’s Black Lives Matter protests

Black Lives Matter protests often pitted demonstrators against police − but not in every city. (Samuel Corum/AFP via Getty Images)

Black Lives Matter protests in cities with Black women police chiefs experienced significantly lower levels of violence – from both police and protesters – than cities with police chiefs of other racial backgrounds and gender, according to our newly published paper.

After George Floyd’s death at the hands of Minneapolis police on May 25, 2020, the Black Lives Matter movement surged. Advocating for social justice, the movement galvanized over 11,000 protest events across thousands of cities in all 50 states. Most demonstrations were peaceful, but others were not, and city police chiefs had the job of dealing with street violence. In some communities, they engaged in dialogue with protesters; in others, they responded with force.

Our research included analyzing 11,540 protests that occurred between May 25 and Aug. 29, 2020, in 3,338 cities, spanning 1,481 counties and all 50 states. To ensure robustness and eliminate bias, we measured violence based on an independent categorization of violence, protest event descriptions, numbers of arrests and severity of the charges. We also researched the gender and racial background of the local police chief.

Our analysis, published in the Journal of Management, found that protests in cities with police departments led by Black women tended to be relatively peaceful.

Consider, for instance, Black female Chief Catrina Thompson in Winston-Salem, North Carolina, who chose dialogue over force. She conveyed solidarity with the Black Lives Matter cause and affirmed that peaceful protests could spur change without destroying the city.

By contrast, a protest in Lincoln, Nebraska, in late May 2020 saw a group of protesters break store windows and threaten police officers, which resulted in police officers – in a department led by white male Chief Jeff Bliemeister – firing pepper spray, tear gas and rubber bullets.

This and other research has found that, through their personal and professional experiences rising through the ranks of a traditionally male, white profession, Black women tend to develop a strong understanding of racial dynamics and use that knowledge to devise flexible strategies.

Of course, not all Black women lead in exactly the same ways, but they tend to share similar experiences that can help foster peaceful outcomes in times of social unrest.

Why it matters

Amid a backdrop of widespread protests and calls for social justice, public safety depends on peaceful interactions between police and demonstrators.

The study highlights the significance of having diverse leadership voices and the importance of recognizing and elevating individual identities. Despite a rise in the appointment of Black police chiefs over the past decade, Black women continue to be underrepresented in law enforcement leadership positions. This research underscores the value to society of including diverse perspectives and leadership approaches informed by the intersections of people’s identities.

What still isn’t known

Despite these insights, several questions remain unanswered. We do not yet know the specific way in which the leadership of Black women police chiefs translates into lower violence levels. We suggest the mechanism is a complex result of their communication strategies, community engagement practices and decision-making processes – but we do not know which has the most influence.

Our study also raises questions about how these findings about Black women at a time of Black protest might be applied to other civic leaders’ handling of demonstrations from different types of social movements.

What’s next

The study paves the way for more in-depth research into how intersecting identities – such as gender and race – affect leadership approaches and outcomes across various professions, not just law enforcement.

Ongoing research efforts – our own and others’ – are directed at better understanding how people’s identities inform their leadership styles and how they handle conflict. Future studies are also needed to explore how organizations and communities can better support Black women and promote them into leadership roles, ensuring their perspectives and skills benefit society as a whole.

The Research Brief is a short take on interesting academic work.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
