The Great IT Outage of 2024 is a wake-up call about digital public infrastructure

The July 19 outage serves as a symbolic outcry for solution-oriented policies and accountability to stave off future disruptions.

On July 19, the world experienced its largest global IT outage to date, affecting 8.5 million Microsoft Windows devices. Thousands of flights were grounded. Surgeries were canceled. Users of certain online banks could not access their accounts. Even operators of 911 lines could not respond to emergencies.

The cause? A single faulty section of code in a software update.

The update came from CrowdStrike, a cybersecurity firm whose Falcon Sensor software many Windows users employ against cyber breaches. Instead of providing improvements, the update caused devices to shut down and enter an endless reboot cycle, driving a global outage. Reports suggest that insufficient testing at CrowdStrike was likely the cause.

However, this outage is not just a technology error. It also reveals a hidden world of digital public infrastructure (DPI) that deserves more attention from policymakers.

What is digital public infrastructure?

DPI, while an evolving concept, is broadly defined by the United Nations (UN) as a combination of “networked open technology standards built for public interest, [which] enables governance and [serves] a community of innovative and competitive market players working to drive innovation, especially across public programmes.” This definition frames DPI as the essential digital systems that support critical societal functions, much as physical infrastructure—including roads, bridges, and power grids—is essential for everyday activities.

Microsoft Windows, which runs CrowdStrike’s Falcon Sensor software, is a form of DPI. Other examples within the UN definition include digital health systems, payment systems, and e-governance portals.

As the world scrambles to fix its Windows systems, policymakers need to pay particular attention to the core DPI issues that underpin the outage.

The problem of invisibility

DPI, such as Microsoft Windows, is ubiquitous but also largely invisible, which is a significant challenge when it comes to managing risks associated with it. Unlike physical infrastructure, which is tangible and visible, DPI powers essential digital services without drawing public awareness. Consequently, the potential risks posed by DPI failures—whether stemming from software bugs or cybersecurity breaches—tend to be underappreciated and underestimated by the public.

The lack of a clear definition of DPI exacerbates the issue of its invisibility. Not all digital technologies are public infrastructure: Companies build technology to generate revenue, but many of them do not directly offer critical services for the public. For instance, Fitbit, a tech company that creates fitness and health tracking devices, is not a provider of DPI. Though it utilizes technology and data services to enhance user experience, it does not provide essential infrastructure such as internet services, cloud computing platforms, or large-scale data centers that support public and business digital needs. That said, Fitbit’s new owner, Google, known for its widely used browser, popular cloud computing services, and efforts to expand digital connectivity, can be considered a provider of DPI.

Other companies that do not start out as DPI may become integral to public infrastructure by dint of becoming indispensable. Facebook, for example, started out as a social network, but it and other social media platforms have become a crucial aspect of civil discourse surrounding many elections. Regulating social media platforms as simple technology products could ignore their role as public infrastructure, a role that often deserves extra scrutiny to mitigate potential detrimental effects on the public.

The recent Microsoft outage, from which airlines, hospitals, and other companies are still recovering, should now sharpen the focus on the company as a provider of DPI. However, the invisibility of DPI and the absence of appropriate policy guidelines for measuring and managing its risks result in two complications. First, most users who interact with DPI do not recognize it as such. Second, this invisibility leads to misplaced trust in major technology companies, as users fail to recognize how high the collective stakes of a DPI failure might be. Market dominance and effective advertising have helped major technology companies publicize their systems as benchmarks of reliability and resiliency. As a result, the public often perceives these systems as infallible, assuming they are more secure than they are—until a failure occurs. At the same time, an overabundance of public trust and comfort with familiar systems can foster complacency within organizations, which can lead to inadequate internal scrutiny and security audits.

How to prevent future disruptions

The Great IT Outage of 2024 revealed just how essential DPI is to societies across the globe. In many ways, the outage serves as a symbolic outcry for solution-oriented policies and accountability to stave off future disruptions.

To address DPI invisibility and misplaced trust in technology companies, US policymakers should first define DPI clearly and holistically while accounting for its status as an evolving concept. It is equally crucial to distinguish which companies are currently providers of DPI, and to educate leaders, policymakers, and the public about what that means. Such an initiative should provide a clear definition of DPI, its technical characteristics, and its various forms, while highlighting how commonly used software such as Microsoft Windows is a form of DPI. A silver lining of the recent Microsoft/CrowdStrike outage is that it offers a practical, recent case study to present to the public as real-world context for understanding the risks when DPI fails.

Finally, Microsoft has outlined technical next steps, including extensive testing frameworks and backup systems, to prevent the same kind of outage from happening again. However, while industry-driven self-regulation is crucial, regulation that enforces and standardizes backup systems is also necessary, not just for Microsoft but for other technology companies that may become providers of DPI. Doing so will help prevent future outages, ensuring the reliability of infrastructure that, just like roads and bridges, props up the world.


Saba Weatherspoon is a young global professional with the Atlantic Council’s Geotech Center.

Zhenwei Gao is a young global professional with the Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs.

Ukraine’s drone success offers a blueprint for cybersecurity strategy

Ukraine’s rapidly expanding domestic drone industry offers a potentially appealing blueprint for the development of the country’s cybersecurity capabilities, writes Anatoly Motkin.

In December 2023, Ukraine’s largest telecom operator, Kyivstar, experienced a massive outage. Mobile and internet services went down for approximately twenty-four million subscribers across the country. Company president Alexander Komarov called it “the largest hacker attack on telecom infrastructure in the world.” The Russian hacker group Solntsepyok claimed responsibility for the attack.

This and similar incidents have highlighted the importance of the cyber front in the Russian invasion of Ukraine. Ukraine has invested significant funds in cybersecurity and can call upon an impressive array of international partners. However, the country currently lacks sufficient domestic cybersecurity system manufacturers.

Ukraine’s rapidly expanding drone manufacturing sector may offer the solution. The growth of Ukrainian domestic drone production over the past two and a half years is arguably the country’s most significant defense tech success story since the start of Russia’s full-scale invasion. If correctly implemented, it could serve as a model for the creation of a more robust domestic cybersecurity industry.


Speaking in summer 2023, Ukraine’s Minister of Digital Transformation Mykhailo Fedorov outlined the country’s drone strategy of bringing together drone manufacturers and military officials to address problems, approve designs, secure funding, and streamline collaboration. Thanks to this approach, he predicted a hundredfold increase in output by the end of the year.

The Ukrainian drone production industry began as a volunteer project in the early days of the Russian invasion, and quickly became a nationwide movement. The initial goal was to provide the Ukrainian military with 10,000 FPV (first person view) drones along with ammunition. This was soon replaced by far more ambitious objectives. Since the start of Russia’s full-scale invasion, more than one billion US dollars has been collected by Ukrainians via fundraising efforts for the purchase of drones. According to online polls, Ukrainians are more inclined to donate money for drones than to any other cause.

Today, Ukrainian drone production has evolved from volunteer effort to national strategic priority. According to Ukrainian President Volodymyr Zelenskyy, the country will produce more than one million drones in 2024. This includes various types of drone models, not just small FPV drones for targeting personnel and armored vehicles on the battlefield. By early 2024, Ukraine had reportedly caught up with Russia in the production of kamikaze drones similar in characteristics to the large Iranian Shahed drones used by Russia to attack Ukrainian energy infrastructure. This progress owes much to cooperation between state bodies and private manufacturers.

Marine drones are a separate Ukrainian success story. Since February 2022, Ukraine has used domestically developed marine drones to damage or sink around one third of the entire Russian Black Sea Fleet, forcing Putin to withdraw most of his remaining warships from occupied Crimea to the port of Novorossiysk in Russia. New Russian defensive measures are consistently met with upgraded Ukrainian marine drones.

In May 2024, Ukraine became the first country in the world to create an entire branch of the armed forces dedicated to drone warfare. The commander of this new drone branch, Vadym Sukharevsky, has since identified the diversity of the country’s drone production as a major asset. As the end user, the Ukrainian military is interested in as wide a selection of manufacturers and products as possible. To date, contracts have been signed with more than 125 manufacturers.

The lessons learned from the successful development of Ukraine’s drone manufacturing ecosystem should now be applied to the country’s cybersecurity strategy. “Ukraine has the talent to develop cutting-edge cyber products, but lacks investment. Government support is crucial, as can be seen in the drone industry. Allocating budgets to buy local cybersecurity products will create a thriving market and attract investors. Importing technologies strengthens capabilities but this approach doesn’t build a robust national industry,” commented Oleh Derevianko, co-founder and chairman of Information Systems Security Partners.

The development of Ukraine’s domestic drone capabilities has been so striking because local manufacturers are able to test and refine their products in authentic combat conditions. This allows them to respond on a daily basis to new defensive measures employed by the Russians. The same principle is necessary in cybersecurity. Ukraine regularly faces fresh challenges from Russian cyber forces and hacker groups; the most effective approach would involve developing solutions on-site. Among other things, this would make it possible to conduct immediate tests in genuine wartime conditions, as is done with drones.

At present, Ukraine’s primary cybersecurity funding comes from the Ukrainian defense budget and international donors. These investments would be more effective if one of the conditions was the procurement of some solutions from local Ukrainian companies. Today, only a handful of Ukrainian IT companies supply the Ukrainian authorities with cybersecurity solutions. Increasing this number to at least dozens of companies would create a local industry capable of producing world-class products. As we have seen with the rapid growth of the Ukrainian drone industry, this strategy would likely strengthen Ukraine’s own cyber defenses while also boosting the cybersecurity of the wider Western world.

Anatoly Motkin is president of StrategEast, a non-profit organization with offices in the United States, Ukraine, Georgia, Kazakhstan, and Kyrgyzstan dedicated to developing knowledge-driven economies in the Eurasian region.

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

Strengthening Taiwan’s resiliency

Resilience is a nation’s ability to understand, address, respond to, and recover from any type of national security risk. Given the scale of risk Taiwan faces from mainland China, domestic resilience should be front and center in Taiwan’s national security strategy, encompassing areas such as cybersecurity, energy security, and defense resilience.


Introduction

This report recommends actions for the new leadership of Taiwan to take to enhance its societal resilience against Chinese aggression in the context of both “gray zone” conflict and wartime attacks. The report focuses on establishing a comprehensive security strategy and analyzes three key areas particularly important for effective resilience: enhancing cybersecurity for critical infrastructures; improving energy security; and accelerating defense transformation.

The new administration of Lai Ching-te faces both existing resilience challenges and the potential for significantly greater problems if the People’s Republic of China (PRC) pursues expanded gray zone activities or if actual conflict occurs.1 The ongoing challenges include substantial disinformation campaigns, cyberattacks, military incursions, and periodic economic coercion. Potential future challenges could involve expansion of one or more of these ongoing Chinese activities. In the context of a more contested environment such as a quarantine,2 blockade, or a kinetic conflict, Chinese actions could seek to cause leadership failures and loss of social cohesion; undertake cyberattacks to target critical infrastructures; generate energy shortages; and seek to defeat Taiwan militarily before the United States could provide effective support. The potential for such harms substantially increases the importance of resilient responses by Taiwan.

The report recommends four major sets of actions to enhance Taiwan’s resilience:

  1. Establish a comprehensive security strategy that engages government, the private sector, and individuals in cooperative efforts to ensure all facets of resilience including:
    1. Risk analyses and priority requirements.
    2. Organization of data relevant to responding to challenges from the PRC.
    3. Development of expertise in key areas required for response.
    4. Provision of governmental leadership and activation of the whole nation as part of a comprehensive approach.
  2. Enhance cybersecurity by establishing:
    1. Off-island, cloud-based capabilities to duplicate governmental and other critical functions.
    2. Working arrangements with high-end, private-sector cybersecurity providers.
    3. A surge capability of cybersecurity experts.
    4. Regular engagement with US Cyber Command’s Hunt Forward program.
    5. Alternatives to undersea cables through low-earth orbit (LEO) communications satellites.
  3. Bolster energy security resilience by:
    1. Rationalizing—that is, increasing—energy prices, especially for electricity.
    2. Supporting indigenous supply, including nuclear energy.
    3. Prioritizing energy needs.
    4. Dispersing and hardening energy storage facilities.
    5. Preparing comprehensive rationing plans for energy.
  4. Enhance defense resilience by:
    1. Continuing the trend of higher defense spending to at least 3 percent of gross domestic product (GDP).
    2. Leveraging Taiwan’s strength in high-tech manufacturing and shipbuilding to accelerate the development of a Ukraine-style, public-private “capability accelerator”3 for emerging technologies.
    3. Fielding low-cost, high-effectiveness capabilities including unmanned surface vessels, unmanned aerial vehicles, and naval mines.
    4. Incorporating training in emerging technologies and unconventional tactics for conscripts and reserves.
    5. Investing in East Coast port infrastructure as counterblockade strongholds.
    6. Raising the All-out Defense Mobilization Agency (ADMA) to the national level and implementing a larger civil defense force that fully integrates civilian agencies and local governments.

Establish a comprehensive security strategy

Resilience is not a new theme in Taiwan. Former President Tsai Ing-wen, who completed two terms in office on May 20, entitled her 2022 National Day Address “Island of Resilience,”4 and similarly identified resilience as a key factor for Taiwan in her two subsequent National Day addresses.5 “The work of making the Republic of China (Taiwan) a more resilient country is now our most important national development priority,” she stated in that 2022 speech, in which she articulated four key areas of resilience: economy and industry, social safety net, free and democratic government system, and national defense. What is left undone, however, is aligning these and other resilience elements into a comprehensive security strategy similar to those undertaken by Finland6 and Sweden,7 which utilize a whole-of-society approach to enhance resilience.

Resilience is a nation’s ability to understand, address, respond to, and recover from any type of national security risk. Given the scale of risk Taiwan faces from China, domestic resilience should be front and center in Taiwan’s national security strategy.8 Comparable comprehensive national security approaches, such as the Finnish model, aim to foster and enable an engaged national ecosystem of partners, each with a clear understanding of their roles and responsibilities. Finland’s model is instructive, underscoring the importance of engagement by the entire society:

  • The Security Strategy for Society lays out the general principles governing preparedness in Finnish society. The preparedness is based on the principle of comprehensive security in which the vital functions of society are jointly safeguarded by the authorities, business operators, organisations and citizens.9

Comprehensive security thus is far more than just government activities:

  • Comprehensive security has evolved into a cooperation model in which actors share and analyse security information, prepare joint plans, as well as train and work together. The cooperation model covers all relevant actors, from citizens to the authorities. The cooperation is based on statutory tasks, cooperation agreements and the Security Strategy for Society.10

The Finnish strategy identifies seven “vital functions” as key areas: leadership; international and European Union activities; defense capability; internal security; economy, infrastructure, and security of supply; functional capacity of the population and services; and psychological resilience.11

Taiwan has taken a variety of actions to enhance resilience including the establishment in 2022 of the All-out Defense Mobilization Agency.12 That agency has a useful but limited scope with its mandate of “comprehensive management of ‘planning for mobilization, management, service, civil defense, [and] building reserve capacity.’ ”13 But while defense is important (and further discussed below), as the Finnish and Swedish strategies underscore, Taiwan should expand its approach to resilience to include the full spectrum of governmental, private sector, and individual tasks—and the necessary cooperative efforts to make them most effective.

President Lai’s recent election ushered in an unprecedented third consecutive term for the Democratic Progressive Party.14 This outcome not only provides continuity in the agenda set by the island’s duly elected leader, but also presents an opportunity to sharpen the focus areas for resilience. As Taiwan transitions to a Lai presidency, the challenge of shoring up the island’s resilience should be at the forefront.

As a valuable starting point for establishing such an expanded resilience strategy, the Lai government should undertake extensive consultations with both Finland and Sweden—which could be facilitated as necessary by the United States. Taiwan should also seek to engage with the Hybrid Center of Excellence, based in Finland, which is an “autonomous, network-based international organization countering hybrid threats.”15

The discussion below describes several important elements of a comprehensive resilience strategy, and it will be a crucial task for the Lai administration to expand Taiwan’s current efforts to the full scope of such an approach. Resilience is a team game with the whole of society playing a role. But only Taiwan’s central government can act as the team captain, setting expectations, establishing priorities, formulating and communicating national strategy, and coordinating activities. Only leaders in national-level government can oversee the critical work of developing institutional effectiveness in key areas of risk management and resilience.

As a starting point, Taiwan should undertake a comprehensive audit now to uncover any gaps in the country’s ability to understand, respond to, and recover from both the chronic risks it currently faces and any more acute manifestations of PRC aggression in the future. Taiwan’s government should examine the following areas to pursue greater resilience:

  1. Activating the whole nation: Working with the private sector and local government, and communicating to households are essential to develop a truly comprehensive approach to Taiwan’s resilience.
  2. Understanding risk: Developing a set of scenarios that will help prioritize activities across government and beyond. Prioritizing is critical where resources are limited—as is identifying areas of cross-cutting work that can help to reduce risk in multiple scenarios.
  3. Building data capacity: Laying a foundation for data exploitation needs will be critical for Taiwan, which will need this capacity both ahead of and during any crisis response. Preparing for and providing this capacity is not just the preserve of government, as commercially available and industry data sources will provide critical insights. Planning to access, receive, store, and process this data needs to start early, as the foundations for technical infrastructure, capabilities, data-sharing policies, and data expertise in government all require time and cannot just be activated on the cusp of crisis. Part of this work entails developing scenarios to help analysts map out gaps in information sources (intelligence, open source, commercial, and from allies) that Taiwan will likely need in each circumstance to build situational awareness. Ahead of and during crisis, risk assessment and effective decision-making will be highly dependent on the availability, quality, and usability of intelligence, information, and data.
  4. Expanding its network of professional skills and resources: Assessing the range of skills and the levels of resourcing needed in government to manage a long-term crisis posture should start well ahead of any crisis. It would be helpful to look now at the gaps in key areas of professional expertise: analysts, data experts, crisis-response professionals, and operational planners will all be needed in larger numbers to sustain an effective response. Taiwan will also need professionally administered and well-exercised crisis facilities, resilient technical infrastructure, and business continuity approaches in place.
  5. Preparedness and planning: Thinking through potential impacts of crisis scenarios in advance and working up potential policy and operational responses will bolster the quality of adaptability, which is an essential component of resilience. The process of exercising and refining plans is also helpful to build the professional connections and networks that will be activated during a live response.

Working with countries that are already developing vanguard resilience capabilities could help Taiwan quickly establish a workable model. For example, the United Kingdom’s National Situation Centre16—built in less than a year during the COVID-19 pandemic—is a model for developing access to critical data in peacetime and for capturing lessons learned from previous crises about the practical challenges a nation could face in a variety of scenarios. Many commercial providers offer competent ways of displaying data insights on dashboards, and while this is helpful, it is only part of what can be achieved.

As a model for its broader resilience requirements, Taiwan will have the benefit of its existing efforts including in the counterdisinformation arena, where it has programs as effective as any in the world, despite the fact that Taiwan consistently faces the world’s highest volume of targeted disinformation campaigns.17 The saturation of PRC information manipulation across Taiwan’s traditional and social media platforms is strategically designed to undermine social cohesion, erode trust in government institutions, and soften resistance to Beijing’s forced unification agenda, while sowing doubts about America’s commitment to peace and stability in the region. 

Taiwan has developed a multifaceted strategy to combat this onslaught, eschewing heavy-handed censorship in favor of promoting free speech and empowering civil society. This approach serves as a beacon for other democracies, demonstrating how to effectively counter disinformation through rapid-response mechanisms, independent fact-checking, and widespread media literacy initiatives. Collaborative efforts such as the Taiwan FactCheck Center, Cofacts, and MyGoPen have proven instrumental in swiftly identifying and debunking false rumors, notably during the closely watched presidential election on January 13.18

Taiwan’s Ministry of Digital Affairs (MoDA) attributes the island’s success in combating this “infodemic” to its sophisticated civil-sector efforts, which avoid reliance on reactive takedowns of malicious content akin to a game of whack-a-mole. Much like its handling of the pandemic—where Taiwan achieved one of the world’s lowest COVID-19 fatality rates without resorting to draconian lockdowns—it has demonstrated resilience and innovation in the digital sphere.19

Taiwan’s response to disinformation demonstrates that it is well-positioned to establish a comprehensive approach to societal resilience.

Cybersecurity and critical infrastructure resilience

Cyber risks to critical infrastructures

Like all advanced economies, Taiwan depends on its critical infrastructures. Critical infrastructures have been described as “sectors whose assets, systems, and networks, whether physical or virtual, are considered so vital . . .  that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety.”20 Since several critical infrastructures are interlinked, it is important in evaluating resilience to “capture cross-cutting risks and associated dependencies that may have cascading impacts within and across sectors.”21 Among those interlinked critical infrastructures are energy, communications, transportation, and water. Each of these is critical to society as a whole, and each depends on digital technology for its operations.

In Taiwan, the Administration for Cyber Security has identified critical infrastructures “by their feature types into the following eight fields: energy, water resource, telecommunications, transportation, banking and finance, emergency aid and hospitals, central and local governments, and high-tech parks.”22 It is worth underscoring that several of Taiwan’s critical infrastructures, such as the electric grid23 and the water system,24 are significantly centralized or have other notable vulnerabilities such as the dependency on undersea cables for international communications25 that increase the potential consequences from a successful cyberattack.

The Taiwan government has fully recognized the significant risks from cyberattacks. As described by Taiwan’s Administration for Cyber Security, “Due to Taiwan’s unique political and economic situation, the country faces not only a complex global cyber security environment but also severe cyber security threats, making the continuous implementation and improvement of cyber security measures a necessity.”26

The number of cyberattacks against Taiwan is notable.27 Published estimates range from five million cyberattacks per day against Taiwanese government agencies28 to the detection of 15,000 cyberattacks per second, including attempted intrusions, in Taiwan during the first half of 2023.29

The attacks often focus on key societal infrastructures. A recent Voice of America report noted that just prior to the January 2024 elections:

  • Most of the attacks appeared to focus on government offices, police departments, and financial institutions, with the attackers focused on internal communications, police reports, bank statements and insurance information.30

Google researchers have likewise described the cyber threat to key critical infrastructures, revealing that it is “tracking close to 100 hacking groups out of China [and that these] malicious groups are attacking a wide spectrum of organizations, including the government, private industry players and defense organizations.”31

The attacks themselves are often relatively sophisticated. Trellix, a cybersecurity firm, described multiple techniques utilized by attackers that “focused on defense evasion, discovery, and command and control . . . to subvert system defenses to gather information about accounts, systems, and networks.” Among them are “living-off-the-land” techniques, which allow attackers to maintain their intrusions over time with smaller chances of detection.32
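
To make the concept concrete, the sketch below illustrates the kind of simple heuristic defenders use to surface “living-off-the-land” activity: flagging legitimate built-in binaries when they are invoked with arguments commonly associated with abuse. The binaries and keywords are illustrative assumptions for this sketch only, not Trellix’s or any vendor’s actual detection logic.

```python
# Illustrative only: a toy heuristic for spotting "living-off-the-land" activity,
# i.e., attackers abusing built-in administration tools rather than custom malware.
# The binaries and keywords below are assumed examples, not a vetted detection rule.
import psutil

SUSPECT_BINARIES = {"powershell.exe", "wmic.exe", "certutil.exe", "mshta.exe", "rundll32.exe"}
SUSPECT_KEYWORDS = ("-encodedcommand", "downloadstring", "urlcache", "javascript:")

def flag_suspicious_processes():
    """Return running processes whose name and command line match the toy watchlist."""
    hits = []
    for proc in psutil.process_iter(attrs=["pid", "name", "cmdline"]):
        name = (proc.info["name"] or "").lower()
        cmdline = " ".join(proc.info["cmdline"] or []).lower()
        if name in SUSPECT_BINARIES and any(k in cmdline for k in SUSPECT_KEYWORDS):
            hits.append((proc.info["pid"], name, cmdline))
    return hits

if __name__ == "__main__":
    for pid, name, cmdline in flag_suspicious_processes():
        print(f"[review] pid={pid} binary={name} cmdline={cmdline}")
```

Real detection pipelines layer many such signals with behavioral baselining, which is precisely why living-off-the-land activity is hard to separate from routine administration.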

While no one can say with certainty what actions the PRC would take in the context of a blockade of or outright conflict with Taiwan, the United States is clear-eyed about the potential for attacks on its own critical infrastructures if engaged in conflict with China. The February 2023 Annual Threat Assessment of the US Intelligence Community notes the likelihood of such PRC cyberattacks in that context:

  • If Beijing feared that a major conflict with the United States were imminent, it almost certainly would consider undertaking aggressive cyber operations against U.S. homeland critical infrastructure and military assets worldwide . . .  China almost certainly is capable of launching cyber attacks that could disrupt critical infrastructure services within the United States, including against oil and gas pipelines, and rail systems.33

The ongoing Russian cyberattacks against Ukraine in the Russia-Ukraine war further underscore the reality of critical infrastructures as a target in a conflict. It seems reasonable to assume that comparable actions (and perhaps even more) would be undertaken against Taiwan in the event of a blockade or kinetic conflict. “Probable targets,” according to James A. Lewis, would include critical infrastructures such as electrical power facilities, information and communications systems, and pipelines.34

Actions to enhance Taiwan’s cyber resilience

Taiwan can enhance its cyber resilience through its own actions and in collaborative activities with private-sector companies and with the United States. While cyberattacks can be highly disruptive, one of the important lessons of the Ukraine-Russia conflict is that the effects on operations can be mitigated, as described in a CyberScoop analysis that underscores a shift in expectations:

  • The war has inspired a defensive effort that government officials and technology executives describe as unprecedented—challenging the adage in cybersecurity that if you give a well-resourced attacker enough time, they will pretty much always succeed. The relative success of the defensive effort in Ukraine is beginning to change the calculation about what a robust cyber defense might look like going forward.35

According to the analysis, the critical element for such success has been significant multinational and public-private collaboration:

  • This high level of defense capability is a consequence of a combination of Ukraine’s own effectiveness, significant support from other nations including the United States and the United Kingdom, and a key role for private sector companies.
  • The defensive cyber strategy in Ukraine has been an international effort, bringing together some of the biggest technology companies in the world such as Google and Microsoft, Western allies such as the U.S. and Britain and social media giants such as Meta who have worked together against Russia’s digital aggression.36

Actions by Taiwan

Taiwan should utilize the Ukraine model of cyber resilience—backed in part by private-sector companies—and take comparable actions to enhance its cybersecurity. Taiwan has a substantial existing cybersecurity framework on which to build such mitigating actions. Since 2022, the Ministry of Digital Affairs, through its Administration for Cyber Security, has been responsible for “implementing cyber security management and defense mechanisms for national critical infrastructures” including “evaluating and auditing cyber security works at government agencies and public entities.”37 Utilizing that framework, Taiwan should undertake the following four actions that would significantly enhance the island’s cybersecurity resilience.

First, Taiwan should utilize cloud-based capabilities to establish a duplicative set of cyber-enabled governmental functions outside of Taiwan. Ukraine undertook such actions, thereby rendering Russian cyberattacks in Ukraine unable to disrupt ongoing governmental activities. Taiwan’s Ministry of Digital Affairs has been evaluating the use of public clouds including the possibility of “digital embassies” abroad to hold data.38 Taiwan should organize such actions with key cloud providers such as Amazon Web Services, which provided support to Ukraine.39 The United States should work with Taiwan and appropriate cloud providers to help effectuate such a result.
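
As a minimal sketch of what off-island duplication could look like in practice, the example below copies critical records to a cloud storage bucket hosted in an overseas region and records a checksum for later verification. The bucket name, region, and directory are hypothetical placeholders, not a published MoDA or AWS configuration.

```python
# Illustrative sketch: copying critical records to an off-island cloud region so that
# services can be reconstituted if on-island systems are disrupted. The bucket name,
# region, and directory are hypothetical placeholders, not a real configuration.
import hashlib
import pathlib
import boto3

OFFSHORE_REGION = "eu-central-1"             # assumed overseas region
OFFSHORE_BUCKET = "example-digital-embassy"  # hypothetical bucket name

def sha256(path: pathlib.Path) -> str:
    """Checksum each file so the off-island copy can be verified after transfer."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate(directory: str) -> None:
    """Upload every file under `directory` to the off-island bucket with its checksum."""
    s3 = boto3.client("s3", region_name=OFFSHORE_REGION)
    root = pathlib.Path(directory)
    for path in root.rglob("*"):
        if path.is_file():
            key = str(path.relative_to(root))
            s3.upload_file(str(path), OFFSHORE_BUCKET, key,
                           ExtraArgs={"Metadata": {"sha256": sha256(path)}})
            print(f"replicated {key}")

if __name__ == "__main__":
    replicate("./critical-records")  # hypothetical local directory of records
```

In practice such replication would be paired with encryption, access controls, and regular restoration drills so that services could actually be reconstituted from the off-island copies.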

Second, Taiwan should establish arrangements with private-sector cybersecurity providers to undertake defensive operations against PRC cyberattacks in the context of a blockade or kinetic conflict. As noted above, such private-sector actions have been instrumental to Ukraine, and would similarly be invaluable for Taiwan. The United States should also help facilitate such private-sector defensive cyber operations for Taiwan.

Third, Taiwan should organize a surge capability of individual cybersecurity experts who can be called upon to complement governmental resources. Both Estonia and the United Kingdom have very effective cyber-reserve approaches, and Taiwan should engage with each country, seeking lessons learned as part of establishing its own reserve corps.

Fourth, Taiwan needs to accelerate its low-earth orbit satellite communications program. The Ministry of Digital Affairs’ two-year, US$18 million plan to strengthen the resilience of government communications entails building more than 700 satellite receiver stations. The impetus: ships from mainland China have repeatedly severed submarine internet cables in what Taiwan perceived as “a trial of methods” that the PRC could use to prepare for a military invasion.40

The existing program involves satellites as well as ground-based receivers. The Taiwan Space Agency disclosed its plan for a “dedicated” LEO satellite communications project in late 2022,41 as a public-private partnership: 

  • Distinct from traditional government programs, this groundbreaking project is structured as a privately operated venture, wherein the Taiwanese government would retain a substantial minority ownership. . . . This project intends to enhance the Taiwan Space Agency’s initial proposal for two government-built LEO satellites by evolving it into a “2+4” configuration. This will involve constructing four additional satellites through collaborative efforts between the public and private sectors.42

Actions with the United States

In accord with the Taiwan Relations Act,43 and as a matter of long-standing policy, the United States strongly supports Taiwan’s defensive capabilities including for cybersecurity. The Integrated Country Strategy of the American Institute in Taiwan (essentially the unofficial US embassy) specifically provides that “bolster[ing] Taiwan’s cybersecurity resilience” is one of the United States’ strategic priorities for the island.44 To support that objective, the United States can enhance Taiwan cybersecurity through cooperative defensive activities.

First, US Cyber Command regularly supports the network resilience of allied countries and partners through its “Hunt Forward” operations, which are “strictly defensive” joint ventures, undertaken following an invitation from the ally or partner, to “observe and detect malicious cyber activity” on these networks, together searching out “vulnerabilities, malware, and adversary presence.”45

While Taiwan has not been specifically identified as a Hunt Forward participant, Anne Neuberger, who is the US deputy national security advisor for cyber and emerging technology, said at a Politico Tech Summit in 2023 that in the event of a major cyberattack on Taiwan, the United States would “send its best teams to help hunt down the attackers, the same approach typically used to help global allies in cyberspace.”46 She described the typical approach as:

  • Putting our best teams to hunt on their most sensitive networks to help identify any current intrusions and to help remediate and make those networks as strong as possible.47

Neuberger also highlighted US work with Taiwan to carry out military tabletop games and exercises to prepare for potential cyberattack.48

More recently, the National Defense Authorization Act (NDAA) for Fiscal Year 2024 explicitly authorized the Defense Department to cooperate on:

  • Defensive military cybersecurity activities with the military forces of Taiwan to (1) defend military networks, infrastructure, and systems; (2) counter malicious cyber activity that has compromised such military networks, infrastructure, and systems; (3) leverage United States commercial and military cybersecurity technology and services to harden and defend such military networks, infrastructure, and systems; and (4) conduct combined cybersecurity training activities and exercises.49

Going forward, these provisions authorize not only Hunt Forward actions but also efforts to leverage commercial and military technology to harden such networks (which would seem to resolve any export control issues) and to conduct combined training and exercises, all of which underscores clear congressional approval for enhanced cybersecurity activities with Taiwan.50

Second, the United States should undertake to enhance Taiwan’s communications resilience by making available access to US commercial and military LEO networks. The important role of the commercial provider Starlink in assuring communications in the context of the Ukraine-Russia war is well-known.51 Starlink’s parent company, SpaceX, is, however, controlled by Elon Musk, whose Tesla company has major investments in China. That linkage has raised the question of whether Taiwan could rely on any commercial arrangements it might make on its own with Starlink—particularly since Starlink did impose some limitations on Ukraine’s use of the network.52 However, as previously described by one of the authors of this report, the US government has sway on such matters:

  • The Defense Production Act authorizes the [US] government to require the prioritized provision of services—which would include services from space companies—and exempts any company receiving such an order from liabilities such as inability to support other customers.53

Accordingly, the US should rely on this authority to organize appropriate arrangements with Starlink—and other space companies that provide like capabilities—to ensure access that would support Taiwan communications. One way to do this would be to incorporate appropriate terms into the commercial augmentation space reserve (CASR) program arrangements that US Space Force is currently negotiating with civil space providers,54 as part of the Department of Defense’s overall commercial space strategy.55

Additionally, the DOD is developing its own LEO capability through a variety of constellations being put in place by Space Force.56 Pursuant to the recent NDAA authorization noted above, DOD should work with the Taiwan military to ensure that those constellations will be available to support Taiwan as necessary.

Longer term, the United States should also undertake to enhance the resilience of Taiwan’s undersea cables. As previously proposed by one of the authors, the United States should lead in establishing an international undersea infrastructure protection corps. It should:

  • Combine governmental and private activities to support the resilience of undersea cables and pipelines. Membership should include the United States, allied nations with undersea maritime capabilities, and key private-sector cable and pipeline companies.57

Such an activity would focus on cybersecurity for undersea cable networks, hardening and other protection for cable landing points, and capabilities and resources to ensure expeditious repair of cables as needed.58 To be sure, getting such an activity up and running will necessarily be a multiyear effort. However, Taiwan’s vulnerability underscores the importance of beginning promptly and working as expeditiously as possible.

Cybersecurity recommendations for Taiwan

  • Utilize cloud-based capabilities to establish a duplicative set of cyber-enabled governmental functions outside of Taiwan.
  • Establish arrangements with private-sector cybersecurity providers to undertake defensive operations against PRC cyberattacks.
  • Organize a surge capability of individual cybersecurity experts who can be called upon to complement governmental resources.
  • Accelerate the low-earth orbit satellite communications program.
  • Actively engage with Cyber Command’s Hunt Forward activities.
  • Enhance Taiwan’s communications resilience by making available access to US commercial and military LEO networks.
  • Work over the longer term to enhance the resilience of Taiwan’s undersea cables.

Energy

As part of its efforts to enhance resilience, Taiwan must mitigate its energy vulnerabilities, as its reliance on maritime imports for about 97 percent59 of its energy needs creates acute risks. To lessen its dependency on maritime imports and strengthen its resiliency in the face of potential PRC coercion, Taiwan should curb energy and electricity demand, bolster indigenous supply, overhaul its inventory management, and prepare rationing plans. A resilient energy security approach would credibly signal to the PRC that Taiwan could hold out for long durations without maritime resupply.

Curbing demand by rationalizing prices 

Taiwan’s ultra-low electricity prices are a security risk (and a black eye for its climate targets). Reliance on seaborne energy shipments presents straightforward security problems, and Taiwan’s low electricity prices subsidize consumption that is being met by imports of hydrocarbons, especially coal. The new Lai administration should make haste prudently, increasing electricity prices more frequently and significantly, without exceeding the limits of the politically possible.

Taiwan’s electricity price quandary is illustrated by Taipower, the state-owned monopoly utility. In 2022 and 2023, Taipower lost 227.2 billion New Taiwan dollars (NTD) and 198.5 billion NTD, respectively, as its per kilowatt hour cost of electricity sold substantially exceeded per unit prices.60 Taipower’s prices failed to offset the steep rise in electricity input costs amid Russia’s invasion of Ukraine and the post-COVID-19 unsnarling of supply chains.

Taiwan’s electricity costs remain too low, diminishing the island’s resiliency, although policymakers have now taken some steps in light of the problem. The Ministry of Economic Affairs’ latest electricity price review, in March 2024, raised average prices by about 11 percent, with the new tariff reaching about 3.4518 NTD per kWh, or approximately US$0.11/kWh.61 This rationalization of prices, while welcome, is insufficient. In the United States, the average all-sector price for the twelve months ending January 2024 was $0.127/kWh.62 Taiwan’s heavily subsidized electricity consumers therefore enjoy a discount in excess of 13 percent relative to their US counterparts, despite US access to low-cost, abundant, and indigenously produced energy.
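
The arithmetic behind that comparison is straightforward; the short calculation below reproduces it from the per-kWh figures cited in this section (no new data is introduced).

```python
# Worked arithmetic for the electricity price comparison, using the figures cited above.
taiwan_usd_per_kwh = 0.11   # ~3.4518 NTD/kWh after the March 2024 review, as cited
us_usd_per_kwh = 0.127      # US all-sector average, twelve months ending January 2024

discount = (us_usd_per_kwh - taiwan_usd_per_kwh) / us_usd_per_kwh
print(f"Taiwanese consumers pay roughly {discount:.0%} less than the US average")
# prints: Taiwanese consumers pay roughly 13% less than the US average
```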

Taiwan’s heavily subsidized electricity prices incentivize maritime imports, especially coal. Astonishingly, Taiwan was the world’s largest per capita user of coal generation for electricity in 2022, higher than even Australia, a major coal exporter.

Taiwan’s low electricity prices and use of coal expose the island to PRC economic coercion. Taiwan’s dependency on imported coal heightens its vulnerability in the summer, when the island’s electricity-generation needs peak. Concerningly, Taiwan has already experienced electricity shortfalls in summer peacetime conditions, including a wave of outages63 between July and August 2022. With the island’s future summer cooling needs set to rise even further due to climate change and hotter temperatures, Taiwan’s electricity needs pose a vulnerability that the PRC may attempt to exploit.

Curbing Taiwan’s electricity demand during summer months is critical, necessitating a rise in prices. While this reduction is a principal energy security challenge, the island must also do more to secure supply, especially for nuclear energy.

Supply: Support indigenous production

Taiwan’s resiliency will be strengthened by producing as much indigenous energy as possible, especially during the critical summer months. Taiwan, which has virtually no hydrocarbon resources, can therefore indigenously produce only four different forms of energy at scale: nuclear energy, offshore wind, onshore wind, and solar. Taiwan should pursue each of these indigenous energy sources. Taiwan should apply “carrots” by strengthening incentives and payments for indigenous production. At the same time, applying the “stick” of higher prices to energy consumption, especially for energy imports, would bolster the island’s resiliency.

Taiwan’s renewable resources are significant and often economically viable, but they cannot secure adequate levels of resiliency by themselves. Taiwan’s wind speeds slow in the summer,64 limiting onshore and offshore wind’s effectiveness in bolstering energy security. Additionally, Taiwan’s stringent localization requirements for offshore wind appropriately minimize PRC components and sensors in Taiwan’s offshore wind turbines, but also raise the costs of this technology. Taiwan’s solar potential65 is also limited66 by cloudy skies, frequent rainfall, and land scarcity.

Accordingly, nuclear energy is the most viable way for Taiwan to address its summer electricity needs without turning to maritime imports. While Taiwan’s nuclear reactors must acquire fuel from abroad, this fuel can be used for approximately eighteen to twenty-four months.67 Taiwan should maintain its existing nuclear energy capacity; restart retired capacity as soon as politically and technically feasible; and seek new, incremental capacity over time.68

Unpacking Taiwan’s storage complexities: Dispersal and hardening are critical

To cope with various contingencies, including the possibility of a prolonged summertime blockade, Taiwan should increase its stockpiles of energy, disperse inventory around the island, and harden facilities.

While Taiwan’s ability to hold out against a blockade depends on many factors, energy inventories are a critical element. Taiwan’s electricity reserves are limited: it reported fifty-six days of supply of coal inventories in February 2023,69 and aims to raise its natural gas inventories from eleven days to more than twenty days by 2030.70 These inventory levels should be expanded, in part because “days of supply” fail to encapsulate uncertainty. Demand fluctuates depending on temperature and other variables, while Taiwan’s access to energy storage inventories faces the risk of sabotage and, in certain scenarios, kinetic strikes.

Taiwan’s management of petroleum reserves is a matter of great importance, given the use of these fuels, especially diesel, for military matters. Taiwan’s Energy Administration, in the Ministry of Economic Affairs, reported in April 2024 that its total oil inventories stood at 167 days of supply.71 This topline figure presents an overly optimistic portrait of Taiwan’s petroleum security, however. For instance, Taiwan’s government-controlled inventories in April 202472 included 2.6 million kiloliters of crude oil and refined products; private stocks added another 6.5 million kiloliters. Accordingly, Taiwan reports forty-seven days of supply from government stockpiles, with an additional 120 days from private inventories.73 Given that domestic sales and consumption equated to about 54,200 kiloliters per day from prior comparable periods,74 Taiwan calculated it had about 167 days of supply.
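
The days-of-supply figures follow directly from the reported volumes; the sketch below reproduces the calculation using only the numbers cited above.

```python
# Reproducing the days-of-supply arithmetic from the April 2024 figures cited above.
government_stock_kl = 2_600_000   # government-controlled crude oil and refined products
private_stock_kl = 6_500_000      # private-sector inventories
daily_consumption_kl = 54_200     # approximate domestic sales and consumption per day

gov_days = government_stock_kl / daily_consumption_kl        # ~48 days (reported as 47)
private_days = private_stock_kl / daily_consumption_kl       # ~120 days
total_days = (government_stock_kl + private_stock_kl) / daily_consumption_kl
print(f"government: {gov_days:.0f} days, private: {private_days:.0f} days, total: {total_days:.0f} days")
# prints roughly 167-168 days in total, consistent with the topline figure Taiwan reports
```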

There may, however, be insufficient monitoring of private inventories. Marek Jestrab observed:

  • A concerning—and possibly significant—loophole exists in these laws, where the criteria and computation formulas for the actual on-hand security stockpiles will be determined by the central competent authority, and are not required to be disclosed. This presents the opportunity for energy that is loaded onboard merchant shipping while in transit to Taiwan to count toward these figures.75

While Taiwan should ensure that stockpiles are actually on the island, and not at sea, it also needs to carefully examine the inventory split between crude oil and refined products, such as diesel, gasoline, and jet fuel. Additionally, Taiwan’s policymakers should not expect to rely on crude inventories, which have only a latent potential: crude oil cannot be used until it is refined into a finished product. Therefore, if the PRC disrupted Taiwan’s refineries via cyber or even kinetic means, Taiwan would not be able to access the totality of its crude oil reserves.

Taiwan’s military requirements for fuel would likely surge during a confrontation or conflict with the PRC, reducing the “days of supply.” Since Taiwan’s military vehicles largely run on diesel, the island should pay careful attention to this product.

Taiwan should disperse and harden its energy assets, especially diesel storage, as concentrated stockpiles would present inviting targets for the PRC. Beijing is studying Russia’s invasion of Ukraine closely and will not fail to notice that Moscow attacked about 30 percent of Ukrainian infrastructure in a single day.76 As one author witnessed during his recent visit to Kyiv, Ukraine’s dispersal of electricity assets is achieving a reasonable degree of success. Indeed, Russia’s more recent campaign77 attacking large-scale thermal and hydroelectric power plants illustrates the utility of dispersed energy infrastructure. Like Ukraine, Taiwan should disperse and harden its energy storage inventories to the maximum feasible extent.

Rationing plans

While both Taiwan’s electricity supply and demand will be very hard to predict in a state of emergency, rationing plans must be considered—especially for the island’s manufacturing and semiconductor industries.

Taiwan’s economy is uniquely78 tied to electricity-intensive manufacturing, as industrial consumers accounted for more than 55 percent of Taiwan’s electricity consumption in 2023.79 Most of these industrial producers (such as chipmaker Taiwan Semiconductor Manufacturing Company) service export markets—not Taiwan. While the PRC might attempt to disrupt the island’s energy and electricity supply via cyber and kinetic means, Taiwan’s electricity consumption would fall dramatically during a crisis if Taiwan’s industries were forced to shut down. Although the closure of Taiwan’s industry would prove economically ruinous, it would also make the island’s electricity and energy issues much more manageable. Adding a further layer of complication, many of Taiwan’s most valuable exports, such as chips, are shipped via civilian airliners, not on seaborne vessels, and would consequently be more difficult to interdict in circumstances short of war.80 Taiwan should prepare rationing plans for a variety of contingencies, adapting to a range of scenarios, including a quarantine, siege, or even kinetic conflict. Taiwan must be ready.
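
To illustrate why industrial shutdown changes the rationing picture so dramatically, the back-of-envelope sketch below applies the 55 percent industrial share cited above to a hypothetical baseline load. The baseline value and the per-scenario shutdown fractions are assumptions chosen purely for illustration, not official planning figures.

```python
# Back-of-envelope sketch of residual electricity demand under rationing scenarios.
# The 55% industrial share comes from the text; the baseline load and per-scenario
# shutdown fractions are hypothetical assumptions, not official planning figures.
INDUSTRIAL_SHARE = 0.55

def residual_demand(baseline_gw: float, industry_shutdown_fraction: float) -> float:
    """Demand remaining after a given fraction of industrial load is curtailed."""
    curtailed = baseline_gw * INDUSTRIAL_SHARE * industry_shutdown_fraction
    return baseline_gw - curtailed

BASELINE_GW = 30.0  # hypothetical average load, for illustration only
for scenario, shutdown in [("quarantine", 0.25), ("blockade", 0.75), ("kinetic conflict", 1.0)]:
    remaining = residual_demand(BASELINE_GW, shutdown)
    print(f"{scenario}: ~{remaining:.1f} GW ({remaining / BASELINE_GW:.0%} of baseline)")
```

Even under a full industrial shutdown, residential and commercial demand persists, which is why rationing plans still matter.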

Energy recommendations for Taiwan 

  • Gradually raise electricity and energy prices, communicating that price hikes will persist and require significant adjustments over the medium term.
  • Expand the frequency of electricity price reviews from twice a year to a quarterly basis. More frequent price adjustments will allow smaller incremental increases while also enabling Taiwan to respond more quickly to potential contingencies.
  • Expand fiscal support for indigenous forms of energy. Demand-side management programs could include virtual power plants, building efficiency measures, two-way air conditioning units, and more. On the supply side, Taiwan should incentivize indigenous energy production, including nuclear energy, onshore wind, offshore wind, and solar.
  • Extend the life of Taiwan’s nuclear energy power plants and consider expanding capacity. Nuclear energy is not only Taiwan’s best option for meeting its summer generation needs but also extremely safe and reliable. In the event of a conflict, the PRC is extremely unlikely to launch highly escalatory and provocative attacks against nuclear facilities on territory it seeks to occupy.
  • Bolster domestic energy supplies and decarbonization objectives, including by considering easing localization requirements for offshore wind projects, while ensuring that PRC components and sensors are not incorporated.
  • Disperse and, where possible, harden energy and electricity assets and volumes across the island for both military and civil defense needs.
  • Examine potential alternatives to standard diesel, as diesel inventories can begin to degrade after several weeks, including “long-duration diesel” solutions that, while more polluting, could extend the shelf life of Taiwan’s inventories and enhance the durability of its military and civil defense efforts.
  • Deepen liquefied natural gas (LNG) ties with the United States. Contracting with US LNG producers would moderately bolster Taiwan’s energy security, as the PRC would be more reluctant to interdict US cargoes than vessels from other nations.
  • Conduct comprehensive studies into energy contingency planning, examining how energy and electricity would be prioritized and rationed during various scenarios.

Food and water resiliency

Taiwan’s food supply needs will be significant in the event of a contingency, but pale in comparison to its energy and water requirements. Taiwan’s water security is a serious concern, as it is already suffering from water access issues in noncrisis periods. Taiwan should prioritize scarce land for electricity generation, especially onshore wind and solar, which are much less water-intensive than coal and natural gas generation. Repurposing farmland for renewables would ease Taiwan’s electricity and water needs in peacetime and during any crisis.

Taiwan’s food security challenges are serious but manageable. The island’s self-sufficiency ratio for food stands at about 40 percent, after rising somewhat in recent years. Unlike energy, however, Taiwan can both store food, especially rice, and replenish these inventories. Meals ready to eat (MREs) can be stored for more than eighteen months.

Additionally, the island would likely be able to resupply itself aerially in all situations short of conflict. The PRC might well be extremely reluctant to shoot down a civilian aircraft resupplying Taiwan with food: doing so would damage external perceptions of the PRC and strengthen global support for sanctions. While there can be no certainty, the PRC’s self-interest in managing perceptions of a confrontation would increase the likelihood of safe transit for aerial, and perhaps even maritime, food deliveries to the island.

Taiwan’s water access problems are serious. Water shortages have manifested even in peacetime, as Taiwan experienced a severe drought in 2021. During a contingency with the PRC, Beijing might attempt to exploit this vulnerability.

Fortunately, Taiwan’s water resiliency can be strengthened by tackling agricultural consumption and, wherever politically and technically feasible, repurposing farmland for energy generation. From 2013 to 2022, 71 percent of Taiwan’s water consumption was attributable to agriculture. Meanwhile, Taiwan’s industries comprised only 10 percent of demand during that period, with domestic (i.e., residential and commercial) consumption accounting for the remainder. Taiwan’s water needs are growing due to “thirsty” industrial customers, but the agricultural sector still accounts for the majority of the island’s consumption, although consumption and supply sources vary across the island.

Taiwan’s policymakers recognize these water problems and have begun raising water prices, especially for heavy users. Taiwan should continue to encourage efficiency by gradually but perceptibly increasing water prices. Concomitantly, it should further reduce demand by repurposing water-intensive farmland for electricity generation where feasible. Repurposing farmland will undoubtedly prove politically difficult, but it will also improve Taiwan’s water and electricity resiliency.

Food and water security recommendations 

  • Prioritize energy and water security needs over food production.
  • Secure and disperse inventories of foodstuffs, such as MREs, medicines, and water, along with water purification tablets.
  • Bolster the island’s cold storage supply chains and overall foodstuff inventories.
  • Plan and work with partners to stage food supplies if a Berlin airlift-style operation becomes necessary.
  • Continue to encourage water conservation by increasing water prices gradually but steadily.
  • Ensure redundancy of water supplies and systems, especially in the more populous northern part of the island.
  • Ensure that drinking water and sanitation systems can operate continuously, after accounting for any electricity needs.
Gustavo F. Ferreira and J. A. Critelli, “Taiwan’s Food Resiliency—or Not—in a Conflict with China,” US Army War College Quarterly: Parameters 53, no. 2 (2023), doi:10.55540/0031-1723.3222; Joseph Webster, “Does Taiwan’s Massive Reliance on Energy Imports Put Its Security at Risk?,” New Atlanticist, Atlantic Council blog, July 7, 2023, https://www.atlanticcouncil.org/blogs/new-atlanticist/does-taiwans-massive-reliance-on-energy-imports-put-its-security-at-risk/; Amy Chang Chien, Mike Ives, and Billy H. C. Kwok,  “Taiwan Prays for Rain and Scrambles to Save Water,” New York Times, May 28, 2021, https://www.nytimes.com/2021/05/28/world/asia/taiwan-drought.html; “Water Resources Utilization,” Ministry of Economic Affairs (MOEA), Water Resources Agency, 2022, https://eng.wra.gov.tw/cp.aspx?n=5154&dn=5155; Meng-hsuan Yang, “Why Did Formosa Plastics Build Its Own Desalination Facility?,” CommonWealth Magazine, May 31, 2023, https://english.cw.com.tw/article/article.action?id=3440; and Chao Li-yen and Ko Lin, “Taiwan State-Owned Utility Evaluates Water Price Adjustments,” Focus Taiwan, January 26, 2024, https://focustaiwan.tw/society/202401260017#:~:text=As%20of%20Aug.
The Berlin airlift of 1948 and 1949 demonstrates the power of aerial food replenishment logistics in an uncontested environment. From June 26, 1948, to September 30, 1949, Allied forces delivered more than 2.3 million tons of food, fuel, and supplies to West Berlin in over 278,000 airdrops. While Taiwan’s population of more than twenty-three million is significantly larger than West Berlin’s population of 2.5 million, the world’s civilian air cargo fleet has expanded dramatically over the past seventy-five years. In all situations short of conflict, Taiwan would be able to restock food from the air. For more on the Berlin airlift, see Katie Lange, “The Berlin Airlift: What It Was, Its Importance in the Cold War,” DOD News, June 24, 2022, https://www.defense.gov/News/Feature-Stories/Story/Article/3072635/the-berlin-airlift-what-it-was-its-importance-in-the-cold-war/.

Enhancing defense resilience

Ever since Beijing leveraged then-Speaker Nancy Pelosi’s August 2022 visit to Taiwan as an excuse to launch large-scale joint blockade military exercises, pundits have labeled the residual military situation around Taiwan as a “new normal.” Yet there is really nothing normal about a permanent presence of People’s Liberation Army (PLA) Navy warships menacingly surrounding the island along its twenty-four nautical mile contiguous zone, and nothing usual about increasing numbers of manned and unmanned military aircraft crossing the tacit median line in the Taiwan Strait—a line that held significance for seven decades as a symbol of cross-strait stability. Nor should it be viewed as normal that a steady stream of high-altitude surveillance balloons—which are suspected of collecting military intelligence—violate Taiwan’s airspace.81 Some have better described this “new normal” as a strategy akin to an anaconda noticeably tightening its grip around the island, drawing close enough to reduce warning time and provocative enough to raise the risk of inadvertent clashes. In other words, the PRC has unilaterally dialed up a military cost-imposition campaign meant to chip away at peace and stability across the Taiwan Strait, wear down Taiwan’s military, and erode confidence and social cohesion in Taiwan society. 

Russia’s full-scale invasion of Ukraine in 2022, following mainland China’s 2019 crackdown on Hong Kong freedoms, was a further wake-up call for the citizens of Taiwan, heightening recognition of the risks presented by the PRC and, in particular, of the fact that the long-standing status quo in cross-strait relations is no longer acceptable to Beijing. Taiwan thus finds itself in the unenviable position of simultaneously countering PLA gray zone intrusions and cognitive warfare—what NATO describes as affecting attitudes and behaviors to gain advantage82—while beefing itself up militarily to deter the growing threat of a blockade or assault.

Against this backdrop, Taipei authorities have embarked on long-overdue reforms in defense affairs, marked by several developments aimed at bolstering the island democracy’s military capabilities and readiness in the face of growing threats from Beijing.

First, Taiwan’s overall defense spending has undergone seven consecutive year-on-year increases, reaching 2.5 percent of gross domestic product.83 While this is commendable, Taiwan’s defense requirements are very substantial, and its budget in US dollars is only $19.1 billion.84 Accordingly, it will be important for Taiwan to continue raising defense spending to at least 3 percent of GDP, both to bolster its military capabilities and to send a deterrent signal to Beijing—and to garner international recognition that Taiwan is serious about its own defense. A key element will be to ensure that Taiwan has sufficient stocks of ammunition and other weapons capabilities to fight effectively both until the United States could fully engage and in the event of a longer war. One area that deserves a high degree of attention is defense against ballistic and cruise missiles and unmanned vehicles. Especially in light of the recent coalition success in defeating such Iranian attacks against Israel, planning should be undertaken to ensure comparable success for Taiwan against PRC attacks. Adding mobile, short-range air defenses—such as the highly mobile National Advanced Surface-to-Air Missile System (NASAMS)85—to the high-priority list of military investments for Taiwan will make it harder for the PLA to find and destroy Taiwan’s defenses, especially if combined with passive means for target detection and missile guidance.
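
As a rough, back-of-the-envelope illustration using only the figures cited above (not an official projection), moving from 2.5 percent to 3 percent of GDP would imply a defense budget on the order of $23 billion:

$$
\text{GDP} \approx \frac{\$19.1\ \text{billion}}{0.025} \approx \$764\ \text{billion}, \qquad 0.03 \times \$764\ \text{billion} \approx \$22.9\ \text{billion}.
$$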

Second, the new president can kick-start an enhanced approach to defense by embracing full integration of public-private innovation and adopting Ukraine’s model of grass-roots innovation for defense, which has served it well through a decade of war against a much larger Russia. Recognizing that innovation is itself a form of resilience, Taiwan can draw valuable lessons from Ukraine, particularly in leveraging private-sector expertise. By implementing what some Ukrainian defense experts term a “capability accelerator” to integrate emerging technologies into mission-focused capabilities, Taiwan can enhance its resilience and swiftly adapt to evolving security challenges, including rapidly fielding a high volume of unmanned systems to achieve distributed surveillance, redundant command and control, and higher survivability.86 This comprehensive approach, which recognizes the private sector as the greatest source of innovation in today’s complex security environment, holds significant potential for enhancing Taiwan’s defense capabilities through the utilization of disruptive technologies. The island’s overall resilience would significantly benefit by drawing the private sector in as a direct stakeholder in national defense matters. 

Ukraine’s grass-roots model of defense innovation, spearheaded by volunteers, nongovernment organizations, and international partners, is a worthy and timely example for Taiwan. Ukraine’s approach has yielded significant advancements in drone warfare, as well as sophisticated capabilities like the Delta battlefield management system—a user-friendly, cloud-based situational awareness tool that provides real-time information on enemy and friendly forces by integrating data from sources such as drones, satellites, and even civilian reports.87 This collaborative model, reliant on cooperation between civilian developers and military end users, has propelled Ukraine’s military technological revolution by integrating intelligence and surveillance tasks while enhancing decision-making and kill-chain target acquisition. Taiwan would benefit from a comparable approach.

Third, as suggested above, Taiwan should focus a large portion of its defense budget on low-cost, highly effective systems. In terms of force structure, it appears that Taiwan has settled on a design that blends large legacy platforms of a twentieth-century military with the introduction of more survivable and distributable low-end asymmetric capabilities. The latter are best exemplified by Taiwan’s indigenously produced Ta Chiang-class of high-speed, guided-missile corvettes (PGG) and Min Jiang-class fast mine laying boats (FMLB).88 But much more must be done to bolster Taiwan’s overall defense capabilities by focusing on less expensive, but nonetheless highly effective systems.

In Ukraine’s battle against Russian Federation invaders, drones have provided Ukrainian forces with important tactical capabilities, enabling them to gather intelligence, monitor enemy movements, and conduct precision strikes on high-value targets. Taiwan can comparably utilize low-cost unmanned aerial vehicles (UAVs) to establish mesh networks connecting devices for intelligence, surveillance, reconnaissance, and targeting, capabilities that would be invaluable in countering a PRC amphibious assault. Lessons from Ukraine further highlight the importance of having the right mix of drone types and capabilities in substantial stockpiles, capable of a variety of missions. Notably, Ukrainian officials have called for the production of more than one million domestically produced drones in 2024.89 Then-President Tsai’s formation of a civilian-led “drone national team” program is a commendable step in this direction and underscores the power of collaborative innovation between civilian developers and military users.90 Encouraging cooperation between Taiwan’s drone makers and US private industry will accelerate the development of a combat-ready unmanned systems fleet with sufficient range, endurance, and payload to enhance situational awareness and battlefield effects.

Concurrent with those efforts on unmanned systems, Taiwan should bolster its naval mining capabilities as a strategic measure against PRC aggression. Naval mines represent one of the most cost-effective and immediately impactful layers of defense.91 In this regard, Taiwan’s new Min Jiang class of FMLB represents the right type of investment in capabilities that could prove pivotal in thwarting potential invasion attempts.

Even more significantly for a Taiwan audience, Ukraine broke a blockade of its Black Sea ports using a combination of naval drones and coastal defense missiles—and repelled the once-mighty Russian Black Sea Fleet—all without a traditional navy of its own.92 Faced with clear intent by a PLA Navy practicing daily to isolate the island, the time is past due for Taiwanese authorities to hone their own counterblockade skills, including heavy reliance on unmanned surface vehicles.

Taiwan should also invest rapidly in port infrastructure and defenses along its eastern seaboard, in places such as Su’ao and Hualien harbors, which can serve as accessible, strategic deepwater ports and antiblockade strongpoints where any conceivable PLA blockade would be logistically weakest and most vulnerable. Su’ao harbor, as a potential future homeport for Taiwan’s new indigenous Hai Kun-class diesel submarines, could also serve a dual purpose as an experimentation and development zone for public-private collaboration on unmanned-systems employment and operations. Infrastructure investments in East Coast ports could enhance the island’s ability to obtain emergency resupply of energy, food, humanitarian supplies, and munitions under all conditions, broadening options for international aid and complicating PLA efforts.

Fourth, every new capability needs trained operators who are empowered to employ it and engage. This year, Taiwan began implementing a new one-year conscript training system for male adults born after January 1, 2005 (up from a wholly inadequate four months of conscription over the past decade).93 Taiwan’s “all-out defense” plan realigns its forces into a frontline main battle force consisting of all-volunteer career military personnel, backed by a standing garrison force composed mainly of conscripted personnel guarding infrastructure, along with a civil defense system integrated with local governments and private-sector resources. Upon mobilization, a reserve force would supplement the main battle and garrison forces.

According to details laid out in its 2023 National Defense Report, Taiwan’s revamped one-year conscript system and reorganized reserve mobilization system place significant emphasis on traditional military combat skills, such as rifle marksmanship and mortar operation.94 However, in response to evolving security challenges and the changing nature of warfare, Taiwan’s military should incorporate greater training in emerging technologies, unconventional tactics, and decentralized command and control, especially in drone warfare, where unmanned aerial vehicles and surface vessels play a crucial role in reconnaissance, surveillance, and targeted strikes. By integrating drone warfare training into the conscript system as well as into annual reserve call-up training, Taiwan can better prepare its military personnel to adapt to modern battlefield environments and effectively counter emerging threats.

Integrating drone operations down to the conscript and reservist level offers a cost-effective means to enhance battlefield situational awareness and operational capabilities. It would also enhance the attractiveness and value of a mandatory conscription system emerging from years of low morale, a system characterized by Taiwan’s outgoing president as “insufficient” and “full of outmoded training.”95 Recognizing the imperative to modernize military training to meet a rapidly expanding PLA threat, Taiwan’s military force realignment plan came with a promise to “include training in the use of Stinger missiles, Javelin missiles, Kestrel rockets, drones, and other new types of weapons . . . in accordance with mission requirements to meet the needs of modern warfare.”96 Ukraine’s extensive use of drones underscores the importance of incorporating drone warfare training into Taiwan’s asymmetric strategy.

The Taiwan Enhanced Resilience Act “prioritize[d] realistic training” by the United States with Taiwan, authorizing “an enduring rotational United States military presence that assists Taiwan in maintaining force readiness.”97 There have been numerous reports of US special forces in Taiwan,98 and those forces could provide training in tactical air control, dynamic targeting, urban warfare, and comparable capabilities.99 Likewise, parts of an Army Security Force Assistance Brigade could do similar work on a rotational basis, on- or off-island.

To facilitate a comprehensive and integrated approach to defense planning and preparedness between the military, government agencies, and civilian organizations, Taiwan has also established the All-out Defense Mobilization Agency, which (as noted above) is a centralized body subordinate to the Ministry of National Defense that is tasked with coordinating efforts across various sectors, down to the local level, to enhance national defense readiness. That agency would be significantly more effective if raised to the national level with a broadened mandate as part of a comprehensive approach.

The Taiwanese leadership should also consider elevating its efforts to create a large-scale civil defense force, offering practical skills training that would appeal to Taiwanese willing to dedicate time and effort to the defense of their communities and localities. These skills could include emergency medical training, casualty evacuation, additive manufacturing, drone flying, and open-source intelligence. Private, nonprofit civil defense organizations such as Taiwan’s Kuma Academy hold widespread appeal to citizens seeking to enhance basic preparedness skills.100 With a curriculum that covers topics ranging from basic first aid to cognitive warfare, Kuma Academy’s popular classes typically sell out within minutes of going online. According to a recent survey of Taiwanese domestic opinion sponsored by Spirit of America, “When facing external threats, 75.3% of the people agree that Taiwanese citizens have an obligation to defend Taiwan.”101 A well-trained civil defense force and other whole-of-society resilience measures provide an additional layer of defense and enhance social cohesion, better denying Beijing’s ultimate political objective of subjugating the will of the people.

Defense resilience recommendations for Taiwan

  • Raise defense spending to at least 3 percent of GDP.
  • Adopt Ukraine’s model of grass-roots innovation in defense.
  • Focus a large portion of the defense budget on low-cost, highly effective systems, including unmanned vehicles and naval mines.
  • Incorporate greater training in emerging technologies and unconventional tactics for conscripts and reserves.
  • Invest in East Coast port infrastructure to serve as counterblockade strongpoints.
  • Elevate the All-out Defense Mobilization Agency to the national level and implement a larger civil defense force that fully integrates civilian agencies and local governments.

Conclusion

On April 3, 2024, Taiwan was struck by its strongest earthquake in twenty-five years. Taiwan’s response to this magnitude 7.4 quake highlights the effectiveness of robust investment in stricter building codes, earthquake alert systems, and resilience policies, which resulted in minimal casualties and limited infrastructure damage.102 Taiwan’s precarious position on the seismically vulnerable Ring of Fire, a belt of volcanoes around the Pacific Ocean, mirrors its vulnerability under constant threat of military and gray zone aggression from a mainland China seeking seismic changes in geopolitical power. Drawing from its success in preparing for and mitigating the impact of natural disasters, Taiwan can apply a similarly proactive approach to its defense preparedness. Safeguarding Taiwan’s sovereignty and security requires investments in a comprehensive security strategy for resilience across society, including cybersecurity for critical infrastructure, bolstered energy security, and enhanced defense resilience. Such an approach would give Taiwan the greatest likelihood of deterring or, if necessary, defeating PRC aggression, including a blockade or kinetic conflict.

About the authors

Franklin D. Kramer is a distinguished fellow at the Atlantic Council and a member of its board. He is a former US assistant secretary of defense for international security affairs.

Philip Yu is a nonresident senior fellow in the Indo-Pacific Security Initiative at the Atlantic Council’s Scowcroft Center for Strategy and Security, and a retired US Navy rear admiral. 

Joseph Webster is a senior fellow at the Atlantic Council’s Global Energy Center, a nonresident senior fellow in the Indo-Pacific Security Initiative at the Atlantic Council’s Scowcroft Center for Strategy and Security, and editor of the independent China-Russia Report.

Elizabeth “Beth” Sizeland is a nonresident senior fellow at the Scowcroft Strategy Initiative of the Atlantic Council’s Scowcroft Center for Strategy and Security. Earlier, she served in the United Kingdom’s government including as deputy national security adviser and as adviser to the UK prime minister on intelligence, security, and resilience issues.

This analysis reflects the personal opinions of the authors.

Acknowledgments

The authors would like to thank the following individuals for their helpful comments and feedback: Amber Lin, Elsie Hung, Kwangyin Liu, and Alison O’Neil.


1    “The gray zone describes a set of activities that occur between peace (or cooperation) and war (or armed conflict),” writes Clementine Starling. “A multitude of activities fall into this murky in-between—from nefarious economic activities, influence operations, and cyberattacks to mercenary operations, assassinations, and disinformation campaigns. Generally, gray-zone activities are considered gradualist campaigns by state and non-state actors that combine non-military and quasi-military tools and fall below the threshold of armed conflict. They aim to thwart, destabilize, weaken, or attack an adversary, and they are often tailored toward the vulnerabilities of the target state. While gray-zone activities are nothing new, the onset of new technologies has provided states with more tools to operate and avoid clear categorization, attribution, and detection—all of which complicates the United States’ and its allies’ ability to respond.” Starling, “Today’s Wars Are Fought in the ‘Gray Zone.’ Here’s Everything You Need to Know About it,” Atlantic Council, February 23, 2022, https://www.atlanticcouncil.org/blogs/new-atlanticist/todays-wars-are-fought-in-the-gray-zone-heres-everything-you-need-to-know-about-it/.
2    In a quarantine of Taiwan, Beijing would interdict shipments but allow some supplies—potentially food and medicine—to pass through unimpeded. This measure would enable the PRC to assert greater sovereignty over Taiwan without formally committing to either a war or a blockade.
3    Mykhaylo Lopatin, “Bind Ukraine’s Military-Technology Revolution to Rapid Capability Development,” War on the Rocks, January 23, 2024, https://warontherocks.com/2024/01/bind-ukraines-military-technology-revolution-to-rapid-capability-development/.
4    “President Tsai Delivers 2022 National Day Address,” Office of the President of Taiwan, October 10, 2022, https://english.president.gov.tw/News/6348.
5    “Full Text of President Tsai Ing-Wen’s National Day Address,” Focus Taiwan website, Central News Agency of Taiwan, October 10, 2023, https://focustaiwan.tw/politics/202310100004; and “President Tsai Delivers 2024 New Year’s Address,” Office of the President, Taiwan, January 1, 2024, https://english.president.gov.tw/NEWS/6662.
6    Finnish Security Committee, Security Strategy for Society: Government Resolution, Ministry of Defense, November 2, 2017, https://turvallisuuskomitea.fi/wp-content/uploads/2018/04/YTS_2017_english.pdf.
7    “Swedish Defence Commission Submits Total Defence Report,” Ministry of Defense, December 19, 2023, https://www.government.se/articles/2023/12/swedish-defence-commission-submits-total-defence-report/.
8    Pursuing a professional and structured approach to resilience against Chinese aggression will also have a “halo” effect, building approaches and expertise that will support effective work on other areas of national security risk.
9    Finnish Security Committee, Security Strategy for Society.
10    Finnish Security Committee, Security Strategy for Society.
11    Finnish Security Committee, Security Strategy for Society.
12    “All-out Defense Mobilization Agency,” agency website, n.d., https://aodm.mnd.gov.tw/aodm-en/indexE.aspx.
13    John Dotson, “Taiwan’s ‘Military Force Restructuring Plan’ and the Extension of Conscripted Military Service,” Global Taiwan Institute’s Global Taiwan Brief 8, no. 3 (2023), https://globaltaiwan.org/2023/02/taiwan-military-force-restructuring-plan-and-the-extension-of-conscripted-military-service/.
14    The party does face, however, the governance challenges that come with a hung parliament.
15    “Hybrid CoE,” European Centre of Excellence for Countering Hybrid Threats, n.d., https://www.hybridcoe.fi/.
16    Lucy Fisher, “First Glimpse Inside UK’s New White House-Style Crisis Situation Centre,” Telegraph, December 14, 2021, https://www.telegraph.co.uk/news/2021/12/14/first-glimpse-inside-uks-new-white-house-style-crisis-situation/.
17    A. Rauchfleisch et al., “Taiwan’s Public Discourse About Disinformation: The Role of Journalism, Academia, and Politics,” Journalism Practice 17, no. 10 (2023): 2197–2217, https://doi.org/10.1080/17512786.2022.2110928.
18    Chee-Hann Wu, “Three Musketeers against MIS/Disinformation: Assessing Citizen-Led Fact-Checking Practices in Taiwan,” Taiwan Insight magazine, July 21, 2023, https://taiwaninsight.org/2023/03/31/three-musketeers-against-mis-disinformation-assessing-citizen-led-fact-checking-practices-in-taiwan/; and David Klepper and Huizhong Wu, “How Taiwan Beat Back Disinformation and Preserved the Integrity of Its Election,” Associated Press, January 29, 2024, https://apnews.com/article/taiwan-election-china-disinformation-vote-fraud-4968ef08fd13821e359b8e195b12919c.
19    E. Glen Weyl and Audrey Tang, “The Life of a Digital Democracy,” Plurality (open-source project on collaborative technology and democracy), accessed May 6, 2024, https://www.plurality.net/v/chapters/2-2/eng/?mode=dark.
20    “Critical Infrastructure Sectors,” US Cybersecurity and Infrastructure Security Agency (CISA), 2022, https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors.
21    “National Critical Functions,” CISA, n.d., https://www.cisa.gov/topics/risk-management/national-critical-functions.
22    Taiwan Administration for Cyber Security, “Cyber Security Defense of Critical Infrastructure: Operations,” Ministry of Digital Affairs, February 21, 2023, https://moda.gov.tw/en/ACS/operations/ciip/650.
23    “Taipower Announces Grid Resilience Strengthening Construction Plan with NT$564.5 Billion Investment Over 10 Years, Preventing Recurrence of Massive Power Outages,” Ministry of Economic Affairs, September 15, 2022,  https://www.moea.gov.tw/MNS/english/news/News.aspx?kind=6&menu_id=176&news_id=103225#:~:text=Wen%2DSheng%20Tseng%20explained%20that,of%20electricity%20demand%20in%20Taiwan.
24    Taiwan Water Corporation provides most of the water in Taiwan. See Taiwan Water Corporation, https://www.water.gov.tw/en.
25    Wen Lii, “After Chinese Vessels Cut Matsu Internet Cables, Taiwan Seeks to Improve Its Communications Resilience,” Opinion, Diplomat, April 15, 2023, https://thediplomat.com/2023/04/after-chinese-vessels-cut-matsu-internet-cables-taiwan-shows-its-communications-resilience/.
26    “About Us: History,” Administration for Cyber Security, MoDA, n.d., https://moda.gov.tw/en/ACS/aboutus/history/608. Note: US government analyses likewise underscore the significant number of attacks. As described by the US International Trade Administration (ITA), “Taiwan faces a disproportionately high number of cyberattacks, receiving as many as 30 million attacks per month in 2022.” See “Taiwan—Country Commercial Guide,” US ITA, last published January 10, 2024, https://www.trade.gov/country-commercial-guides/taiwan-cybersecurity.
27    Statistics are not entirely consistent, and attempted intrusions are sometimes counted as attacks.
28    “Taiwanese Gov’t Facing 5M Cyber Attacks per Day,” CyberTalk, Check Point Software Technologies, accessed May 2, 2024, https://www.cybertalk.org/taiwanese-govt-facing-5m-cyber-attacks-per-day/. Other private-sector companies’ analyses have reached comparable conclusions.
29    Huang Tzu-ti, “Taiwan Hit by 15,000 Cyberattacks per Second in First Half of 2023,” Taiwan News, August 17, 2023, https://www.taiwannews.com.tw/news/4973448.
30    Jeff Seldin, “Cyber Attacks Spike Suddenly prior to Taiwan’s Election,” Voice of America, February 13, 2024, https://www.voanews.com/a/cyber-attacks-spike-suddenly-prior-to-taiwan-s-election-/7485386.html.
31    Gagandeep Kaur, “Is China Waging a Cyber War with Taiwan?,” CSO Online, December 1, 2023, https://www.csoonline.com/article/1250513/is-china-waging-a-cyber-war-with-taiwan.html#:~:text=Nation%2Dstate%20hacking%20groups%20based.
32    Anne An wrote that “attackers are likely to employ living off-the-land techniques to gather policing, banking, and political information to achieve their goals. They also likely simultaneously and stealthily evaded security detections from remote endpoints.” See An, “Cyberattack on Democracy: Escalating Cyber Threats Immediately Ahead of Taiwan’s 2024 Presidential Election,” Trellix, February 13, 2024, https://www.trellix.com/blogs/research/cyberattack-on-democracy-escalating-cyber-threats-immediately-ahead-of-taiwan-2024-presidential-election/. Separately, a Microsoft Threat Intelligence blog said: “Microsoft has identified a nation-state activity group tracked as Flax Typhoon, based in China, that is targeting dozens of organizations in Taiwan with the likely intention of performing espionage. Flax Typhoon gains and maintains long-term access to Taiwanese organizations’ networks with minimal use of malware, relying on tools built into the operating system, along with some normally benign software to quietly remain in these networks.” See “Flax Typhoon Using Legitimate Software to Quietly Access Taiwanese Organizations,” Microsoft Threat Intelligence blog, August 24, 2023, https://www.microsoft.com/en-us/security/blog/2023/08/24/flax-typhoon-using-legitimate-software-to-quietly-access-taiwanese-organizations/.
33    Office of the Director of National Intelligence, Annual Threat Assessment of the US Intelligence Community, February 6, 2023, 10, https://www.dni.gov/files/ODNI/documents/assessments/ATA-2023-Unclassified-Report.pdf.
34    James Lewis, “Cyberattack on Civilian Critical Infrastructures in a Taiwan Scenario,” Center for Strategic and International Studies, August 2023, https://csis-website-prod.s3.amazonaws.com/s3fs-public/2023-08/230811_Lewis_Cyberattack_Taiwan.pdf?VersionId=l.gf7ysPjoW3.OcHvcRuNcpq3gN.Vj8b.
35    Elias Groll and Aj Vicens, “A Year After Russia’s Invasion, the Scope of Cyberwar in Ukraine Comes into Focus,” CyberScoop, February 24, 2023, https://cyberscoop.com/ukraine-russia-cyberwar-anniversary/.
36    Groll and Vicens, “A Year After Russia’s Invasion.”
37    “About Us: History,” Administration for Cyber Security.
38    Si Ying Thian, “‘Turning Conflicts into Co-creation’: Taiwan Government Harnesses Digital Policy for Democracy,” GovInsider, December 6, 2023, https://govinsider.asia/intl-en/article/turning-conflicts-into-co-creation-taiwans-digital-ministry-moda-harnesses-digital-policy-for-democracy.
39    Frank Konkel, “How a Push to the Cloud Helped a Ukrainian Bank Keep Faith with Customers amid War,” NextGov/FCW, November 30, 2023, https://www.nextgov.com/modernization/2023/11/how-push-cloud-helped-ukrainian-bank-keep-faith-customers-amid-war/392375/.
40    Eric Priezkalns, “Taiwan to Build 700 Satellite Receivers as Defense against China Cutting Submarine Cables,” CommsRisk, June 13, 2023, https://commsrisk.com/taiwan-to-build-700-satellite-receivers-as-defense-against-china-cutting-submarine-cables/.
41    Juliana Suess, “Starlink 2.0? Taiwan’s Plan for a Sovereign Satellite Communications System,” Commentary, Royal United Services Institute, January 20, 2023, https://rusi.org/explore-our-research/publications/commentary/starlink-20-taiwans-plan-sovereign-satellite-communications-system.
42    Gil Baram, “Securing Taiwan’s Satellite Infrastructure against China’s Reach,” Lawfare, November 14, 2023, https://www.lawfaremedia.org/article/securing-taiwan-s-satellite-infrastructure-against-china-s-reach.
43    Taiwan Relations Act, US Pub. L. No. 96-8, 93 Stat. 14 (1979), https://www.congress.gov/96/statute/STATUTE-93/STATUTE-93-Pg14.pdf.
44    “Integrated Country Strategy,” American Institute in Taiwan, 2022, https://www.state.gov/wp-content/uploads/2022/05/ICS_EAP_Taiwan_Public.pdf.
45    Franklin D. Kramer, The Sixth Domain: The Role of the Private Sector in Warfare, Atlantic Council, October 16, 2023, 13, https://www.atlanticcouncil.org/wp-content/uploads/2023/10/The-sixth-domain-The-role-of-the-private-sector-in-warfare-Oct16.pdf.
46    Joseph Gedeon, “Taiwan Is Bracing for Chinese Cyberattacks, White House Official Says,” Politico, September 27, 2023, https://www.politico.com/news/2023/09/27/taiwan-chinese-cyberattacks-white-house-00118492.
47    Gedeon, “Taiwan Is Bracing.”
48    Gedeon, “Taiwan Is Bracing.”
49    National Defense Authorization Act for Fiscal Year 2024, Pub. L. No. 118-31, 137 Stat. 136 (2023), Sec. 1518, https://www.congress.gov/bill/118th-congress/house-bill/2670/text.
50    National Defense Authorization Act for Fiscal Year 2024.
51    According to a report by Emma Schroeder and Sean Dack, “Starlink’s performance in the Ukraine conflict demonstrated its high value for wartime satellite communications: Starlink, a network of low-orbit satellites working in constellations operated by SpaceX, relies on satellite receivers no larger than a backpack that are easily installed and transported. Because Russian targeting of cellular towers made communications coverage unreliable . . . the government ‘made a decision to use satellite communication for such emergencies’ from American companies like SpaceX. Starlink has proven more resilient than any other alternatives throughout the war. Due to the low orbit of Starlink satellites, they can broadcast to their receivers at relatively higher power than satellites in higher orbits. There has been little reporting on successful Russian efforts to jam Starlink transmissions.” See Schroeder and Dack, A Parallel Terrain: Public-Private Defense of the Ukrainian Information Environment, Atlantic Council, February 2023, 14, https://www.atlanticcouncil.org/wp-content/uploads/2023/02/A-Parallel-Terrain.pdf.
52    Joey Roulette, “SpaceX Curbed Ukraine’s Use of Starlink Internet for Drones: Company President,” Reuters, February 9, 2023, https://www.reuters.com/business/aerospace-defense/spacex-curbed-ukraines-use-starlink-internet-drones-company-president-2023-02-09/.
53    Kramer, The Sixth Domain.
54    Frank Kramer, Ann Dailey, and Joslyn Brodfuehrer, NATO Multidomain Operations: Near- and Medium-term Priority Initiatives, Scowcroft Center for Strategy and Security, Atlantic Council, March 2024, https://www.atlanticcouncil.org/wp-content/uploads/2024/03/NATO-multidomain-operations-Near-and-medium-term-priority-initiatives.pdf.
55    Department of Defense, “Commercial Space Integration Strategy,” 2024, https://media.defense.gov/2024/Apr/02/2003427610/-1/-1/1/2024-DOD-COMMERCIAL-SPACE-INTEGRATION-STRATEGY.PDF; and “U.S. Space Force Commercial Space Strategy,” US Space Force, April 8, 2024, https://www.spaceforce.mil//Portals/2/Documents/Space%20Policy/USSF_Commercial_Space_Strategy.pdf.
56    “Space Development Agency Successfully Launches Tranche 0 Satellites,” Space Development Agency, September 2, 2023, https://www.sda.mil/space-development-agency-completes-second-successful-launch-of-tranche-0-satellites/.
57    Kramer, The Sixth Domain.
58    Kramer, The Sixth Domain.
59    “E-Stat,” Energy Statistics Monthly Report, Energy Administration, Taiwan Ministry of Economic Affairs, accessed May 6, 2024, https://www.esist.org.tw/newest/monthly?tab=%E7%B6%9C%E5%90%88%E8%83%BD%E6%BA%90.
60    “Comparison of Electricity Prices and Unit Cost Structures,” Electricity Price Cost, Business Information, Information Disclosure, Taiwan Electric Power Co., accessed May 6, 2024, https://www.taipower.com.tw/tc/page.aspx?mid=196.
61    Ministry of Economic Affairs (經濟部能源署), “The Electricity Price Review Meeting,” Headquarters News, accessed May 6, 2024, https://www.moea.gov.tw/MNS/populace/news/News.aspx?kind=1&menu_id=40&news_id=114222.
62    “Electric Power Monthly,” US Energy Information Administration (EIA), February 2024, https://www.eia.gov/electricity/monthly/epm_table_grapher.php?t=table_5_03.
63    Lauly Li and Cheng Ting-Feng, “Taiwan’s Frequent Blackouts Expose Vulnerability of Tech Economy,” Nikkei Asia, August 30, 2022, https://asia.nikkei.com/Business/Technology/Taiwan-s-frequent-blackouts-expose-vulnerability-of-tech-economy.
64    Xi Deng et al., “Offshore Wind Power in China: A Potential Solution to Electricity Transformation and Carbon Neutrality,” Fundamental Research, 2022, https://doi.org/10.1016/j.fmre.2022.11.008.
65    “Global Solar Atlas,” World Bank Group, ESMAP, and Solar GIS, 2024, CC BY 4.0, https://globalsolaratlas.info/map?c=24.176825.
66    Julian Spector, “Taiwan’s Rapid Renewables Push Has Created a Bustling Battery Market,” Canary Media, April 6, 2023, https://www.canarymedia.com/articles/energy-storage/taiwans-rapid-renewables-push-has-created-a-bustling-battery-market.
67    “U.S. Nuclear Plant Outages Increased in September After Remaining Low during Summer,” Today in Energy, US EIA, October 18, 2015, https://www.eia.gov/todayinenergy/detail.php?id=37252#:~:text=Nuclear%20power%20plants%20typically%20refuel.
68    For a more detailed discussion of Taiwan’s indigenous supply, see Joseph Webster, “Does Taiwan’s Massive Reliance on Energy Imports Put Its Security at Risk?,” New Atlanticist, Atlantic Council blog, July 7, 2023, https://www.atlanticcouncil.org/blogs/new-atlanticist/does-taiwans-massive-reliance-on-energy-imports-put-its-security-at-risk/.
69    “The Current Situation and Future of [the] Country’s Energy Supply and Reserves (立法院),” Seventh Session of the Tenth Legislative Yuan, Sixth Plenary Meeting of the Economic Committee, accessed May 7, 2024, https://ppg.ly.gov.tw/ppg/SittingAttachment/download/2023030989/02291301002301567002.pdf.
70    Jeanny Kao and Yimou Lee, “Taiwan to Boost Energy Inventories amid China Threat,” ed. Gerry Doyle, Reuters, October 23, 2022, https://www.reuters.com/business/energy/taiwan-boost-energy-inventories-amid-china-threat-2022-10-24/.
71    Energy Administration, “Domestic Oil Reserves Monthly Data (國內石油安全存量月資料),” Ministry of Economic Affairs, accessed May 6, 2024, https://www.moeaea.gov.tw/ecw/populace/content/wfrmStatistics.aspx?type=4&menu_id=1302.
72    Energy Administration, Ministry of Economic Affairs.
73    Energy Administration, Ministry of Economic Affairs.
74    Energy Administration, Ministry of Economic Affairs.
75    Marek Jestrab, “A Maritime Blockade of Taiwan by the People’s Republic of China: A Strategy to Defeat Fear and Coercion,” Atlantic Council Strategy Paper, December 12, 2023, https://www.atlanticcouncil.org/content-series/atlantic-council-strategy-paper-series/a-maritime-blockade-of-taiwan-by-the-peoples-republic-of-china-a-strategy-to-defeat-fear-and-coercion/.
76    Kathleen Magramo et al., “October 11, 2022 Russia-Ukraine News,” CNN, October 11, 2022, https://edition.cnn.com/europe/live-news/russia-ukraine-war-news-10-11-22/index.html.
77    Tom Balmforth, “Major Russian Air Strikes Destroy Kyiv Power Plant, Damage Other Stations,” Reuters, April 11, 2024, https://www.reuters.com/world/europe/russian-missile-strike-targets-cities-across-ukraine-2024-04-11/#:~:text=KYIV%2C%20April%2011%20(Reuters),runs%20low%20on%20air%20defences.
78    Global Taiwan Institute, “Taiwan’s Electrical Grid and the Need for Greater System Resilience,” June 14, 2023, https://globaltaiwan.org/2023/06/taiwans-electrical-grid-and-the-need-for-greater-system-resilience/.
79    “3-04 Electricity Consumption (3-04 電力消費),” Taiwan Energy Statistics Monthly Report (能源統計月報), accessed May 6, 2024, https://www.esist.org.tw/newest/monthly?tab=%E9%9B%BB%E5%8A%9B.
80    Dmitri Alperovitch, “A Chinese Economic Blockade of Taiwan Would Fail or Launch a War,” War on the Rocks, June 6, 2024, https://warontherocks.com/2024/06/a-chinese-economic-blockade-of-taiwan-would-fail-or-launch-a-war/.
81    “The Ministry of National Defense Issues a Press Release Explaining Reports That ‘Airborne Balloons by the CCP Had Continuously Flown over Taiwan,’ ” Taiwan Ministry of National Defense, January 6, 2024,  https://www.mnd.gov.tw/english/Publish.aspx?title=News%20Channel&SelectStyle=Defense%20News%20&p=82479.
83    “Taiwan Announces an Increased Defense Budget for 2024,” Global Taiwan Institute, September 20, 2023, https://globaltaiwan.org/2023/09/taiwan-announces-an-increased-defense-budget-for-2024/.
84    Yu Nakamura, “Taiwan Allots Record Defense Budget for 2024 to Meet China Threat,” Nikkei Asia, August 24, 2023, https://asia.nikkei.com/Politics/Defense/Taiwan-allots-record-defense-budget-for-2024-to-meet-China-threat.
85    “NASAMS: National Advanced Surface-to-Air Missile System,” Raytheon, accessed May 12, 2024, https://www.rtx.com/raytheon/what-we-do/integrated-air-and-missile-defense/nasams.
86    Lopatin, “Bind Ukraine’s Military-Technology Revolution.”
87    Grace Jones, Janet Egan, and Eric Rosenbach, “Advancing in Adversity: Ukraine’s Battlefield Technologies and Lessons for the U.S.,” Policy Brief, Belfer Center for Science and International Affairs, Harvard Kennedy School, July 31, 2023, https://www.belfercenter.org/publication/advancing-adversity-ukraines-battlefield-technologies-and-lessons-us.
88    For more information, see, e.g., Peter Suciu, “Future of Taiwan’s Navy: Inside the Tuo Chiang-Class Missile Corvettes,” National Interest, March 27, 2024,  https://nationalinterest.org/blog/buzz/future-taiwans-navy-inside-tuo-chiang-class-missile-corvettes-210269; and Xavier Vavasseur, “Taiwan Launches 1st Mine Laying Ship for ROC Navy,” Naval News, August 5, 2020, https://www.navalnews.com/naval-news/2020/08/taiwan-launches-1st-mine-laying-ship-for-roc-navy/.
89    Mykola Bielieskov, “Outgunned Ukraine Bets on Drones as Russian Invasion Enters Third Year,” Ukraine Alert, Atlantic Council blog, February 20, 2024, https://www.atlanticcouncil.org/blogs/ukrainealert/outgunned-ukraine-bets-on-drones-as-russian-invasion-enters-third-year/.
90    Yimou Lee, James Pomfret, and David Lague, “Inspired by Ukraine War, Taiwan Launches Drone Blitz to Counter China,” Reuters, July 21, 2023, https://www.reuters.com/investigates/special-report/us-china-tech-taiwan/.
91    Franklin D. Kramer and Lt. Col. Matthew R. Crouch, Transformative Priorities for National Defense, Scowcroft Center for Strategy and Security, Atlantic Council, 2021, https://www.atlanticcouncil.org/wp-content/uploads/2021/06/Transformative-Priorities-Report-2021.pdf.
92    Peter Dickinson, “Ukraine’s Black Sea Success Exposes Folly of West’s ‘Don’t Escalate’ Mantra,” Ukraine Alert, Atlantic Council, January 22, 2024, https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-black-sea-success-provides-a-blueprint-for-victory-over-putin/.
93    Ministry of National Defense, ROC National Security Defense Report 2023, https://www.mnd.gov.tw/newupload/ndr/112/112ndreng.pdf.
94    Ministry of National Defense, ROC National Security Defense Report 2023.
95    “President Tsai Announces Military Force Realignment Plan,” Office of the President, December 27, 2022,  https://english.president.gov.tw/NEWS/6417.
96    “President Tsai Announces Military Force Realignment Plan.”
97    International Military Education and Training Cooperation with Taiwan, 22 U.S.C. § 3353 (2022), https://www.law.cornell.edu/uscode/text/22/3353.
98    Guy D. McCardle, “US Army Special Forces to Be Deployed on Taiwanese Island Six Miles from Mainland China,” SOFREP, March 8, 2024, https://sofrep.com/news/us-army-special-forces-to-be-deployed-on-taiwanese-island-six-miles-from-mainland-china/.
99    “Taiwan Defense Issues for Congress,” Congressional Research Service, CRS Report R48044, updated May 10, 2024, https://crsreports.congress.gov/product/pdf/R/R48044.
100    Jordyn Haime, “NGOs Try to Bridge Taiwan’s Civil Defense Gap,” China Project, August 4, 2023, https://thechinaproject.com/2023/08/04/ngos-try-to-bridge-taiwans-civil-defense-gap/.
101    Spirit of America, Taiwan Civic Engagement Survey, January 2024.
102    Amy Hawkins and Chi Hui Lin, “‘As Well Prepared as They Could Be’: How Taiwan Kept Death Toll Low in Massive Earthquake,” Observer, April 7, 2024, https://www.theguardian.com/world/2024/apr/07/as-well-prepared-as-they-could-be-how-taiwan-kept-death-toll-low-in-massive-earthquake.

Transatlantic Economic Statecraft Report cited in the International Cybersecurity Law Review on semiconductor supply chains https://www.atlanticcouncil.org/insight-impact/in-the-news/transatlantic-economic-statecraft-report-cited-in-the-international-cybersecurity-law-review-on-semiconductor-supply-chains/ Tue, 25 Jun 2024 13:57:00 +0000 https://www.atlanticcouncil.org/?p=779317 Read the journal article here.

Designing a blueprint for open, free and trustworthy digital economies https://www.atlanticcouncil.org/blogs/econographics/designing-a-blueprint-for-open-free-and-trustworthy-digital-economies/ Fri, 14 Jun 2024 21:21:25 +0000 https://www.atlanticcouncil.org/?p=773476 US digital policy must be aimed at improving national security, defending human freedom, dignity, and economic growth while ensuring necessary accountability for the integrity of the technological bedrock.

More than half a century into the information age, it is clear how policy has shaped the digital world. The internet has enabled world-changing innovation, commercial developments, and economic growth through a global and interoperable infrastructure. However, the internet is also home to rampant fraud, misinformation, and criminal exploitation. To shape policy and technology to address these challenges in the next generation of digital infrastructure, policymakers must confront two complex issues: the difficulty of massively scaling technologies and the growing fragmentation across technological and economic systems.

How today’s policymakers decide to balance freedom and security in the digital landscape will have massive consequences for the future. US digital policy must be aimed at improving national security, defending human freedom, dignity, and economic growth while ensuring necessary accountability for the integrity of the technological bedrock.

Digital economy building blocks and the need for strategic alignment

Digital policymakers face a host of complex issues, such as regulating and securing artificial intelligence, banning or transitioning ownership of TikTok, combating pervasive fraud, addressing malign influence and interference in democratic processes, considering updates to Section 230 and impacts on tech platforms, and implementing zero-trust security architectures. When addressing these issues, policymakers must keep these core building blocks of the digital economy front and center:

  • Infrastructure: How to provide the structure, rails, processes, standards, and technologies for critical societal functions;
  • Data: How to protect, manage, own, use, share, and destroy open and sensitive data; and
  • Identity: How to represent and facilitate trust and interactions across people, entities, data, and devices.

How to approach accountability—who is responsible for what—in each of these pillars sets the stage for how future digital systems will or will not be secure, competitive, and equitable.

Achieving the right balance between openness and security is not easy, and the stakes for both personal liberty and national security amid geostrategic competition are high. The open accessibility of information, infrastructure, and markets enabled by the internet brings knowledge diffusion, data flows, and higher-order economic developments, which are critical for international trade and investment.

However, vulnerabilities in existing digital ecosystems contribute significantly to economic losses, such as the estimated $600 billion per year lost to intellectual property theft and the $8 trillion in global costs last year from cybercrime. Apart from direct economic costs, growing digital authoritarianism threatens undesirable censorship, surveillance, and manipulation of foreign and domestic societies that could not only undermine democracy but also reverse the economic benefits wrought from democratization.

As the United States pursues its commitment with partner nations toward an open, free, and secure internet, Washington must operationalize that commitment into specific policy and technological implementations coordinated across the digital economy building blocks. It is critical to shape these building blocks to strengthen their integrity while preventing undesired fragmentation, which could hinder objectives for openness and innovation.

Infrastructure

The underlying infrastructure and technologies that define how consumers and businesses access and use information feature prominently in ongoing debates and policymaking, which have led to heightened bipartisan calls for accountability across platform operators. Further complicating the landscape of accountability in infrastructure are the growing decentralization and aggregation of historically siloed functions and systems. As demonstrated by calls for decentralizing the banking system and by blockchain-based decentralized networks underlying cryptocurrencies, there is increasing interest among policymakers and industry leaders in moving away from the concentration risks and inequities that can arise in overly centralized systems.

However, increasing decentralization can lead to a lack of clear lines of responsibility and accountability in the system. Accountability and neutrality policy are also affected by increasing digital interconnectedness and the commingling of functions. The Bank for International Settlements recently coined the term “finternet” to describe the vision of an exciting but complexly interconnected digital financial system that must navigate international authorities, sovereignty, and regulatory applicability as it operates around the world.

With this tech and policy landscape in mind, infrastructure policy should focus on two aspects:

  • Ensuring infrastructure security, integrity, and openness. Policymakers and civil society need to articulate and test a clear vision for stakeholders to coordinate on what openness and security across digital infrastructure for cross-economic purposes should look like, based on impacts to national security, economic security, and democratic objectives. This vision would outline elements such as infrastructure ecosystem participants, the degree of openness, and where responsibility for controls should sit, whether through voluntary or enforceable means. It would build on ongoing Biden administration efforts and provide a north star for strategic coordination with legislators, regulators, industry, civil society, and international partners to move in a common direction.
  • Addressing decentralization and the commingling of infrastructure. Technologists must come together with policymakers to ensure that features for governance and security are fit for purpose and integrated early in decentralized systems, and that such systems can support oversight and compliance for any regulated, high-risk activity.

Data

Data has been called the new oil, the new gold, and the new oxygen. Perhaps overstated, each description nonetheless captures what is already the case: Data is incredibly valuable in digital economies. US policymakers should focus on how to address the privacy, control, and integrity of data, the fundamental assets of value in information economies.

Privacy is a critical area to get right in the collection and management of information. The US privacy framework is fragmented and generally use-specific, framed for high-risk sectors like finance and healthcare. In the absence of a federal-government-wide consumer data privacy law, some states are implementing their own approaches. In light of existing international data privacy laws, US policy also has to account for issues surrounding harmonization and the potential economic hindrances brought by data localization.

Beyond just control of privacy and disclosure, many tech entrepreneurs, legislators, and federal agencies are aimed at placing greater ownership of data and subsequent use in the hands of consumers. Other efforts supporting privacy and other national and economic security concerns are geared toward protecting against the control and ownership of sensitive data by adversarial nations or anti-competitive actors, including regulations on data brokers and the recent divest-or-ban legislation targeted at TikTok.

There is also significant policy interest surrounding the integrity of information and the systems reliant on it, such as combating the manipulation of data underlying AI systems and protecting electoral processes that could be vulnerable to disinformation. Standards and research efforts focused on data provenance and integrity techniques are emerging. But barriers remain to getting data integrity right in the digital age.

While there is some momentum for combating compromises of data integrity, doing so raises implementation and freedom-of-expression challenges that must be addressed to strike the needed balance between security and freedom:

  • Balancing data security, discoverability, and privacy. Stakeholders across various key functions of law enforcement, regulation, civil society, and industry must together define what type of information should be discoverable by whom and under what conditions, guided by democratic principles, privacy frameworks, the rule of law, and consumer and national security interests. This would shape the technical standards and requirements for privacy tech and governance models that government and industry can put into effect.
  • Preserving consumer and democratic control and ownership of data. Placing greater control and localization protections around consumer data could bring great benefits to user privacy but must also be done in consideration of the economic impacts and the higher-order innovations enabled by the free flow and aggregation of data. Policy efforts could pursue research and experimentation for assessing the value of data.
  • Combating manipulation and protecting information integrity. Governments must work hand in hand with civil society and, where appropriate, media organizations to pursue policies and technical developments that promote trust in democratic public institutions and help identify misinformation across platforms, especially in areas of high risk to societies and democracies such as election messaging, financial services and markets, and healthcare.

Identity

Talk about “identity” can trigger concerns of social credit scores and Black Mirror episodes. It may, for example, evoke a sense of state surveillance, criminal anonymity, fraud, voter and political dissident suppression, disenfranchisement of marginalized populations, or even the mundane experience of waiting in line at a department of motor vehicles. As a force for good, identity enables critical access to goods and services for consumers, helps provide recourse for victims of fraud and those seeking public benefits, and protects sensitive information while providing necessary insights to authorities and regulated institutions to hold bad actors accountable. With increasing reliance on digital infrastructure, government and industry will have to partner to create the technical and policy fabric for secure, trustworthy, and interoperable digital identity.

Digital identity is a critical element of digital public infrastructure (DPI). The United States joined the Group of Twenty (G20) leaders in committing to pursue work on secure, interoperable digital identity tools and emphasized its importance in international fora to combat illicit finance. However, while many international efforts have taken root to establish digital identity systems abroad, progress by the United States on holistic domestic or cross-border digital identity frameworks has been limited. Identity security is crucial to establish trust in US systems, including the US financial sector and US public institutions. While the Biden administration has been driving some efforts to strengthen identity, democratized access to sophisticated AI tools has increased the threat environment significantly by making it easy to create fraudulent credentials and deepfakes that circumvent many current counter-fraud measures.

The government is well-positioned to be the key driver of investments in identity that would create the underlying fabric for trust in digital communications and commerce:

  • Investing in identity as digital public infrastructure. Digital identity development and expansion can unlock massive societal and economic benefits, including driving value of up to 13 percent of a nation’s gross domestic product and providing access to critical goods and services, as well as the ability to vote, engage in the financial sector, and own land. Identity itself can serve as infrastructure for higher-order e-commerce applications that rely on trust. The United States should invest in secure, interoperable digital identity infrastructure domestically and overseas, including the provision of secure verifiable credentials and privacy-preserving attribute validation services (a minimal sketch of attribute validation follows this list).
  • Managing security, privacy, and equity in identity. Policymakers must work with industry to ensure that identity systems, processes, and regulatory requirements implement appropriate controls in full view of all desired outcomes across security, privacy, and equity, consistent with National Institute of Standards and Technology (NIST) standards. Policies should ensure that the resources saved by implementing digital identity systems also help improve services for those unable to use them.
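To ground the idea of privacy-preserving attribute validation, the sketch below shows an issuer attesting to a single derived attribute ("over 18") that a relying party can verify without ever seeing the underlying birthdate. This is a minimal illustration under stated assumptions, not any specific standard's API: the HMAC-based signing stands in for the public-key signatures and selective-disclosure schemes used by real verifiable-credential frameworks such as the W3C Verifiable Credentials model, and the key handling and field names are invented for the example.

```python
import hmac, hashlib, json

ISSUER_KEY = b"hypothetical-demo-key"  # real systems use public-key signatures, not a shared secret


def issue_credential(subject_id: str, age_over_18: bool) -> dict:
    """Issuer attests only to a derived attribute; the birthdate itself is never disclosed."""
    claim = {"subject": subject_id, "age_over_18": age_over_18}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_credential(credential: dict) -> bool:
    """Relying party checks integrity and the attribute without learning anything else about the subject."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"]) and credential["claim"]["age_over_18"]


cred = issue_credential("resident-12345", age_over_18=True)
print(verify_credential(cred))  # True: access granted without revealing the birthdate
```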

Technology by itself is not inherently good or evil—its benefits and risks are specific to the technological, operational, and governance implementations driven by people and businesses. This outline of emerging policy efforts affecting digital economy building blocks may help policymakers and industry leaders consider efforts needed to drive alignment to preserve the benefits of a global, interoperable, secure and free internet while addressing the key shortfalls present in the current digital landscape.


Carole House is a nonresident senior fellow at the Atlantic Council GeoEconomics Center and the Executive in Residence at Terranet Ventures, Inc. She formerly served as the director for cybersecurity and secure digital innovation for the White House National Security Council, where she will soon return as the Special Advisor for Cybersecurity and Critical Infrastructure Policy. This article reflects views expressed by the author in her personal capacity.

The post Designing a blueprint for open, free and trustworthy digital economies appeared first on Atlantic Council.

Who’s a national security risk? The changing transatlantic geopolitics of data transfers https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/whos-a-national-security-risk-geopolitics-of-data-transfers/ Wed, 29 May 2024 19:34:02 +0000 https://www.atlanticcouncil.org/?p=767982 The geopolitics of data transfers is changing. How will Washington's new focus on data transfers affect Europe and the transatlantic relationship?



Table of contents

Introduction
Data transfer politics come to America
Data transfer politics in Europe
Conclusions

Introduction

The geopolitics of transatlantic data transfers have been unvarying for the past decade. European governments criticize the US National Security Agency (NSA) for exploiting personal data that moves from Europe to the United States for commercial purposes. The US government responds, through a series of arrangements with the European Union, by providing assurances that NSA collection is not disproportionate and that Europeans have legal avenues if they believe their data has been illegally used. Although the arrangements have not proven legally stable, on the whole they have sufficed to keep data flowing via subsea cables under the Atlantic Ocean.

Now the locus of national security concerns about international data transfers has shifted from Brussels to Washington. The Biden administration and the US Congress, in a series of bold measures, are moving aggressively to interrupt certain cross-border data flows, notably to China and Russia.

The geopolitics of international data flows remain largely unchanged in Europe, however. European data protection authorities have been mostly noncommittal about the prospect of Russian state surveillance collecting Europeans’ personal data. Decisions on whether to transfer European data to Russia and China remain in the hands of individual companies.

Will Washington’s new focus on data transfers to authoritarian states have an impact in Europe? Will Europe continue to pay more attention to the surveillance activities of its liberal democratic allies, especially the United States? Is there a prospect of Europe and the United States aligning on the national security risks of transfers to authoritarian countries?

Data transfer politics come to America

The US government long considered the movement of personal data across borders as primarily a matter of facilitating international trade.1 US national security authorities’ surveillance of foreigners’ personal data in the course of commercial transfers was regarded as an entirely separate matter.

For example, the 2001 EU-US Safe Harbor Framework,2 the first transatlantic data transfer agreement, simply allowed the United States to assert the primacy of national security over data protection requirements, without further discussion. Similarly, the 2020 US-Mexico-Canada Free Trade Agreement3 and the US-Japan Digital Trade Agreement4 contain both free flow of data guarantees and traditional national security carve-outs from those obligations.

Edward Snowden’s 2013 revelations of expansive US NSA surveillance in Europe put the Safe Harbor Framework’s national security derogation into the political spotlight. Privacy activist Max Schrems then challenged its legality under EU fundamental rights law, and the Court of Justice of the European Union (CJEU) ruled it unacceptable.5

The 2023 EU-US Data Privacy Framework6 (DPF) is the latest response to this jurisprudence. In it, the United States commits to hold national security electronic surveillance of EU-origin personal data to a more constrained standard, as the European Commission has noted.7 The United States’ defensive goal has been to reassure Europe that it conducts foreign surveillance in a fashion that can be reconciled with EU fundamental rights law.

Now, however, the US government has begun expressly integrating its own national security considerations into decisions on the foreign destinations to which US-origin personal data may flow. It is a major shift from the prior free-data-flows philosophy, in which national security limits played only a theoretical and marginal role.

One notable development is a February 28, 2024, executive order, Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.8 The EO empowers the Department of Justice (DOJ), in consultation with other relevant departments, to identify countries “of concern” and to prohibit or otherwise regulate bulk data transfers to them, based on a belief that these countries could be collecting such data for purposes of spying on or extorting Americans. A week later DOJ issued a proposed rule describing the envisaged regulatory regime, and proposing China, Cuba, Iran, North Korea, Russia, and Venezuela as the countries “of concern.”9

The White House, in issuing the bulk data EO, was at pains to insist that it was limited in scope and not inconsistent with the historic US commitment to the free flow of data, because it applies only to certain categories of data and certain countries.10 Nonetheless, as has been observed by scholars Peter Swire and Samm Sacks, the EO and proposed rule are, for the United States, part of “a new chapter in how it regulates data flows” in that they would create an elaborate new national security regulatory regime applying to legal commercial data activity.11

Hard on the heels of the bulk data EO came congressional passage in April of the Protecting Americans’ Data from Foreign Adversaries Act, which the president signed into law.12 It prohibits data brokers from selling or otherwise making available Americans’ sensitive information to four specified countries: China, Iran, North Korea, and Russia. The new law has a significantly broader scope than the EO. It cuts off certain data transfers to any entity controlled by one of these adversary countries, apparently including corporate affiliates and subsidiaries. It extends to any sensitive data, not just data in bulk. It remains to be seen how the administration will address the overlaps between the new law and the EO.

Another part of the same omnibus legislation ordered the ban or forced sale of TikTok, the Chinese social media platform widely used in this country.13 Advocates of the law point to the government of China’s ability under its own national security law to demand that companies operating there turn over personal data, including, potentially, TikTok users’ data transferred from the United States. Critics have cast the measure as a targeted punishment of a particular company, done without public evidence being offered of national security damage. TikTok has challenged the law as a violation of the First Amendment.14

Finally, the data transfer restrictions in these measures are thematically similar to a January 29 proposed rule from the Commerce Department obliging cloud service providers to verify the identity of their customers, on whose behalf they transfer data.15 The rule would impose know your customer (KYC) requirements—similar to those that apply in the international banking context—for cloud sales to non-US customers, wherever located.

This extraordinary burst of legislative and executive action focused on the national security risks of certain types of data transfers from the United States to certain authoritarian states is indicative of how far and fast political attitudes have shifted in this country. But what of Europe, which faces similar national security data challenges from authoritarian states? Is it moving in a similar direction as the United States?

Data transfer politics in Europe

The EU, unlike the United States, has long had a systematic set of controls on personal data flows from EU territory abroad, articulated in the General Data Protection Regulation (GDPR).16 The GDPR conditions transfers to a foreign jurisdiction on the “adequacy” of its data protection safeguards—or, as the CJEU has refined the concept, their “essential equivalence” to the GDPR regime.

The task of assessing foreign legal systems falls to the European Commission, the EU’s quasi-executive arm. Article 45 of the GDPR instructs it to consider, among other things, “the rule of law, respect for human rights and fundamental freedoms, relevant legislation . . . including concerning . . . the access of public authorities to personal data.”

For much of the past decade, the central drama in the European Commission’s adequacy process has been whether the United States meets this standard. As previously noted, the CJEU invalidated first the Safe Harbor Framework,17 in 2015, and then the Privacy Shield Framework,18 in 2020. The DPF is the third try by the US government and the European Commission to address the CJEU’s fundamental rights concerns. Last year, the European Commission issued yet another adequacy decision that found the DPF adequate.19 The EU understandably has focused its energies on the United States, since vast amounts of Europeans’ personal data travels to cloud service providers’ data centers in the United States and, as Snowden revealed, offered an inviting target for the NSA.

Separately, the European Commission has gradually expanded the range of other countries benefiting from adequacy findings, conferring this status on Japan,20 Korea,21 and the United Kingdom.22 However, the 2021 adequacy decision for the UK continues to be criticized in Brussels. On April 22, the Committee on Civil Liberties, Justice, and Home Affairs (LIBE) of the European Parliament wrote to the UK House of Lords complaining about UK national security bulk data collection practices and the prospect of onward transfer of data from UK territory to jurisdictions not deemed adequate by the EU.23 Next year, the European Commission will formally review the UK’s adequacy status.

List of countries with European Commission Adequacy Decisions

This past January, the European Commission renewed the adequacy decisions for eleven jurisdictions which had long enjoyed them, including, notably, Israel.24 On April 22, a coalition of civil society groups published an open letter to the European Commission questioning the renewal of Israel’s adequacy decision.25 The letter expressed doubts about the rule of law in Israel itself, the specific activities of Israeli intelligence agencies in Gaza during the current hostilities there, and the surveillance powers exercised by those agencies more generally.

Also delicate is the continuing flow of personal data from the European Union to Russia and China. Although neither country has been—or is likely to be—accorded adequacy status, data nonetheless can continue to flow to their territories, as to other third countries, if accompanied by contractual data protection safeguards. The CJEU established in its Schrems jurisprudence that such standard contractual clauses (SCCs) must uphold the same fundamental rights standards as an adequacy decision. The European Data Protection Board (EDPB) subsequently issued detailed guidance on the essential guarantees against national security surveillance that must be in place in order for personal data to be sent to a nonadequate jurisdiction.26

In 2021, the EDPB received an outside expert report27 on several foreign governments’ data access regimes. Its findings were clear. “Chinese law legitimises broad and unrestricted access to personal data by the government,” it concluded. Similarly, with respect to Russia, “The right to privacy is strongly limited when interests of national security are at stake.” The board did not take any further steps to follow up on the report, however.

Shortly after Russia invaded Ukraine, Russia was excluded from the Council of Europe and ceased to be a party to that body’s European Convention on Human Rights.28 The European Data Protection Board issued a statement confirming that data transfers to Russia pursuant to standard contract clauses remained possible, but stressed that safeguards to guard against Russian law enforcement or national security access to data were vital.29

Over two thousand multinational companies continue to do business in Russia, despite the Ukraine war, although a smaller number have shut down, according to a Kyiv academic research institute.30 Data flows between Europe and Russia thus remain substantial, if less than previously. Companies engaged in commerce in Russia also are subject to requirements that data on Russian persons be localized in that country.31 Nonetheless, data flows from Europe to Russia are not subject to categorical exclusions, unlike the new US approach.

The sole reported case of a European data protection authority questioning data flows to Russia involves Yango, a taxi-booking mobile app developed by Yandex, a Russian internet search and information technology company. Yango’s European services are based in the Netherlands and are available in other countries including Finland and Norway. In August 2023, Finland’s data protection authority (DPA) issued an interim decision to suspend use of Yango in its territory because Russia had just adopted a decree giving its state security service (FSB) unrestricted access to commercial taxi databases.32

The interim suspension decision was short-lived. A month later, the Finnish authority, acting in concert with Norwegian and Dutch counterparts, lifted it, on the basis of a clarification that the Russian decree in fact did not apply to use of the Yango app in Finland.33 The Finnish authority further announced that the Dutch authority, in coordination with it and Norway, would issue a final decision in the matter. The Dutch investigation reportedly remains open, but it does not appear to be a high priority matter.

The day after lifting the Yango suspension, the Finnish data protection authority rushed out yet another press release advising that its decision “does not address the legality of data transfers to Russia,” or “mean that Yango data transfers to Russia would be in compliance with the GDPR or that Russia has an adequate level of data protection.”34

One can interpret this final Finnish statement as at least indirectly acknowledging that continued commercial data transfers from an EU jurisdiction to Russia may raise rule of law questions bigger than a single decree allowing its primary security agency, known as the FSB, to access certain taxi databases. Otherwise, the Finnish decision could be criticized for ignoring the forest for the birch trees.

Equally striking is the limited extent of DPA attention to data transfers between EU countries and China. China maintains an extensive national security surveillance regime, and lately has implemented a series of legal measures that can limit outbound data transfers for national security reasons.35 In 2023, the Irish Data Protection Commissioner36 imposed a substantial fine on TikTok for violating the GDPR with respect to children’s privacy, following a decision by the EDPB.37 This inquiry did not examine the question of whether Chinese government surveillance authorities had access to European users’ data, however.

Personal data actively flows between Europe and China in the commercial context, pursuant to SCCs. China reportedly may issue additional guidance to companies on how to respond to requests for data from foreign law enforcement authorities. To date there is no public evidence of European DPAs questioning companies about their safeguard measures for transfers to China.

Indeed, signs recently have emerged from China of greater openness to transfers abroad of data generated in the automotive sector, including from connected cars. Data from connected cars is a mix of nonpersonal and personal data. China recently approved Tesla’s data security safeguards, enabling the company’s previously localized data to leave the country.38 In addition, the government of Germany is trying to ease the passage of data to and from China on behalf of German carmakers. On April 16, several German government ministers, part of a delegation visiting China led by Chancellor Olaf Scholz, issued a joint political statement with Chinese counterparts promising “concrete progress on the topic of reciprocal data transfer—and this in respect of national and EU data law,” with data from connected cars and automated driving in mind.39

Conclusions

The United States and the European Union are, in some respects, converging in their international data transfer laws and policies. In Washington, free data transfers are no longer sacrosanct. In Europe, they never have been. Viewed from Brussels, it appears that the United States is, finally, joining the EU by creating a formal international data transfers regime—albeit constructed in a piecemeal manner and focused on particular countries, rather than through a comprehensive and general data privacy law.

Yet the rationales for limiting data transfers vary considerably from one side of the Atlantic to the other. Washington now focuses on the national security dangers to US citizens and to the US government from certain categories of personal data moving to the territories of “foreign adversaries.” Brussels instead applies more abstract criteria relating to foreign governments’ commitment to the rule of law, human rights, and especially their access to personal data.

A second important difference is that the United States has effectively created a blacklist of countries to which certain categories of data should not flow, whereas the EU’s adequacy process serves as a means of “white listing” countries with comparable data protection frameworks to its own. Concretely, this structural difference means that the United States concentrates on prohibiting certain data transfers to China and Russia, while the EU institutionally has withheld judgment about transfers to those authoritarian jurisdictions. Critics of the EU’s adequacy practice instead have tended to concentrate on the perceived risks of data transfers to liberal democracies with active foreign surveillance establishments: Israel, the United Kingdom, and the United States.

The transatlantic—as well as global—geopolitics of data transfers are in flux. The sudden US shift to viewing certain transfers through a national security lens is unlikely to be strictly mirrored in Europe. In light of the emerging differences in approach, the United States and European governments should consider incorporating the topic of international data transfers into existing political-level conversations. Although data transfer topics have thus far not figured into the formal work of the EU-US Trade and Technology Council (TTC),40 which has met six times since 2022 including most recently in April,41 there is no evident reason why that could not change. If the TTC resumes activity after the US elections, it could become a useful bilateral forum for candid discussion of perceived national security risks in data flows.

Utilizing a broader grouping, such as the data protection and privacy authorities of the Group of Seven (G7), which as a group has been increasingly active in the last few years,42 also could be considered. The deliberations of this G7 group already have touched generally on the matter of government access, and they could readily expand to how its democratic members assess risks from authoritarians in particular. Eventually, such discussions could be expanded beyond the G7 frame into broader multilateral fora. The Organisation for Economic Co-operation and Development (OECD) Declaration on Government Access43 is a good building block.

The days when international data transfers were a topic safely left to privacy lawyers are long gone. It’s time for Washington and Brussels to acknowledge that the geopolitics of data flows has moved from the esoteric to the mainstream, and to grapple with the consequences.


1    Kenneth Propp, “Transatlantic Digital Trade Protections: From TTIP to ‘Policy Suicide?,’” Lawfare, February 16, 2024, https://www.lawfaremedia.org/article/transatlantic-digital-trade-protections-from-ttip-to-policy-suicide.
2    U.S.-EU Safe Harbor Framework: Guide to Self-Certification, US Department of Commerce, March 2009, https://legacy.trade.gov/publications/pdfs/safeharbor-selfcert2009.pdf.
3    “Chapter 19: Digital Trade,” US-Mexico-Canada Free Trade Agreement, Office of the United States Trade Representative, https://ustr.gov/sites/default/files/files/agreements/FTA/USMCA/Text/19-Digital-Trade.pdf.
4    “Agreement between the United States of America and Japan Concerning Digital Trade,” Office of the United States Trade Representative, https://ustr.gov/sites/default/files/files/agreements/japan/Agreement_between_the_United_States_and_Japan_concerning_Digital_Trade.pdf.
5    Schrems v. Data Protection Commissioner, CASE C-362/14 (Court of Justice of the EU 2015), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62014CJ0362.
6    “President Biden Signs Executive Order to Implement the European Union-U.S. Data Privacy Framework,” Fact Sheet, White House Briefing Room, October 7, 2022, https://www.whitehouse.gov/briefing-room/statements-releases/2022/10/07/fact-sheet-president-biden-signs-executive-order-to-implement-the-european-union-u-s-data-privacy-framework/.
7    European Commission, “Commission Implementing Decision of 10.7.2023 Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Level of Protection of Personal Data under the EU-US Data Privacy Framework,” July 10, 2023, https://commission.europa.eu/system/files/2023-07/Adequacy%20decision%20EU-US%20Data%20Privacy%20Framework_en.pdf.
9    Department of Justice, “National Security Division; Provisions Regarding Access to Americans’ Bulk Sensitive Personal Data and Government-Related Data by Countries of Concern,” Proposed Rule, 28 C.F.R. 202 (2024), https://www.federalregister.gov/d/2024-04594.
10    “President Biden Issues Executive Order to Protect Americans’ Sensitive Personal Data,” Fact Sheet, White House Briefing Room, February 28, 2024, https://www.whitehouse.gov/briefing-room/statements-releases/2024/02/28/fact-sheet-president-biden-issues-sweeping-executive-order-to-protect-americans-sensitive-personal-data/.
11    Peter Swire and Samm Sacks, “Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging,” Lawfare, February 28, 2024, https://www.lawfaremedia.org/article/limiting-data-broker-sales-in-the-name-of-u.s.-national-security-questions-on-substance-and-messaging.
12    “Protecting Americans from Foreign Adversary Controlled Applications Act,” in emergency supplemental appropriations, Pub. L. No. 118–50, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/house-bill/7520/text.
13    Cristiano Lima-Strong, “Biden Signs Bill That Could Ban TikTok, a Strike Years in the Making,” Washington Post, April 24, 2024, https://www.washingtonpost.com/technology/2024/04/23/tiktok-ban-senate-vote-sale-biden/.
14    “Petition for Review of Constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act,” TikTok Inc. and ByteDance Ltd. v. Merrick B. Garland (US Court of Appeals for the District of Columbia Cir. 2024), https://sf16-va.tiktokcdn.com/obj/eden-va2/hkluhazhjeh7jr/AS%20FILED%20TikTok%20Inc.%20and%20ByteDance%20Ltd.%20Petition%20for%20Review%20of%20H.R.%20815%20(2024.05.07)%20(Petition).pdf?x-resource-account=public.
15    Department of Commerce, “Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities,” Proposed Rule, 15 C.F.R. Part 7 (2024), https://www.govinfo.gov/content/pkg/FR-2024-01-29/pdf/2024-01580.pdf.
16    “Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation),” 2016/679, Official Journal of the European Union (2016), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679.
17    Schrems v. Data Protection Commissioner.
18    Data Protection Commissioner v. Facebook Ireland & Schrems, CASE C-311/18 (Court of Justice of the EU 2020), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62018CJ0311.
19    The Commission’s decision has since been challenged before the CJEU. See Latombe v. Commission, No. Case T-553/23 (Court of Justice of the EU 2023), https://curia.europa.eu/juris/document/document.jsf?text=&docid=279601&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1498741.
20    European Commission, “European Commission Adopts Adequacy Decision on Japan, Creating the World’s Largest Area of Safe Data Flows,” Press Release, January 23, 2019, https://commission.europa.eu/document/download/c2689793-a827-4735-bc8d-15b9fd88e444_en?filename=adequacy-japan-factsheet_en_2019.pdf.
21    “Commission Implementing Decision (EU) 2022/254 of 17 December 2021 Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Protection of Personal Data by the Republic of Korea under the Personal Information Protection Act,” Official Journal of the European Union, December 17, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022D0254.
22    “Commission Implementing Decision (EU) 2021/1772 of 28 June 2021 Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Protection of Personal Data by the United Kingdom,” Official Journal of the European Union, June 28, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32021D1772.
23    European Parliament Justice Committee, Correspondence to Rt. Hon. Lord Peter Ricketts regarding Inquiry into Data Adequacy, April 22, 2024, https://content.mlex.com/Attachments/2024-04-25_L75PCWU60ZLVILJ5%2FLIBE%20letter%20-%20published%20EAC.pdf.
24    “Report from the Commission to the European Parliament and the Council on the First Review of the Functioning of the Adequacy Decisions Adopted Pursuant to Article 25(6) of Directive 95/46/EC,” European Commission, January 15, 2024, https://commission.europa.eu/document/download/f62d70a4-39e3-4372-9d49-e59dc0fda3df_en?filename=JUST_template_comingsoon_Report%20on%20the%20first%20review%20of%20the%20functioning.pdf.
25    European Digital Rights et al., Letter to Vice-President of the European Commission Věra Jourová Regarding Concerns following  Reconfirmation of Israel’s Adequacy Status, April 22, 2024, https://edri.org/wp-content/uploads/2024/04/Concerns-Regarding-European-Commissions-Reconfirmation-of-Israels-Adequacy-Status-in-the-Recent-Review-of-Adequacy-Decisions-updated-open-letter-April-2024.pdf.
26    Milieu Consulting and Centre for IT and IP Law of KU Leuven, “Recommendations 02/2020 on the European Essential Guarantees for Surveillance Measures,” Prepared for European Data Protection Board (EDPB), November 10, 2020, https://www.edpb.europa.eu/sites/default/files/files/file1/edpb_recommendations_202002_europeanessentialguaranteessurveillance_en.pdf.
27    Milieu Consulting and Centre for IT and IP Law of KU Leuven, “Government Access to Data in Third Countries,” EDPB, EDPS/2019/02-13, November 2021, https://www.edpb.europa.eu/system/files/2022-01/legalstudy_on_government_access_0.pdf.
28    European Convention on Human Rights, November 4, 1950, https://www.echr.coe.int/documents/d/echr/Convention_ENG.
29    Statement 02/2022 on Data Transfers to the Russian Federation, European Data Protection Board, July 12, 2022,
https://www.edpb.europa.eu/system/files/2022-07/edpb_statement_20220712_transferstorussia_en.pdf.
30    “Stop Doing Business with Russia,” KSE Institute, May 20, 2024, #LeaveRussia: The List of Companies that Stopped or Still Working in Russia (leave-russia.org).
31    “Russian Data Localization Law: Now with Monetary Penalties,” Norton Rose Fulbright Data Protection Report, December 20, 2019, https://www.dataprotectionreport.com/2019/12/russian-data-localization-law-now-with-monetary-penalties/.
32    “Finnish DPA Bans Yango Taxi Service Transfers of Personal Data from Finland to Russia Temporarily,” Office of the Data Protection Ombudsman, August 8, 2023, https://tietosuoja.fi/en/-/finnish-dpa-bans-yango-taxi-service-transfers-of-personal-data-from-finland-to-russia-temporarily.
33    “European Data Protection Authorities Continue to Cooperate on the Supervision of Yango Taxi Service’s Data Transfers–Yango Is Allowed to Continue Operating in Finland until Further Notice,” Office of the Data Protection Ombudsman, September 26, 2023, https://tietosuoja.fi/en/-/european-data-protection-authorities-continue-to-cooperate-on-the-supervision-of-yango-taxi-service-s-data-transfers-yango-is-allowed-to-continue-operating-in-finland-until-further-notice.
34    “The Data Protection Ombudsman’s Decision Does Not Address the Legality of Data Transfers to Russia–the Matter Remains under Investigation,” Office of the Data Protection Ombudsman, September 27, 2023, https://tietosuoja.fi/en/-/the-data-protection-ombudsman-s-decision-does-not-address-the-legality-of-data-transfers-to-russia-the-matter-remains-under-investigation#:~:text=The%20Office%20of%20the%20Data%20Protection%20Ombudsman%27s%20decision,Protection%20Ombudsman%20in%20October%2C%20was%20an%20interim%20decision.
35    Samm Sacks, Yan Lou, and Graham Webster, “Mapping U.S.-China Data De-Risking,” Freeman Spogli Institute for International Studies, Stanford University, February 29, 2024), https://digichina.stanford.edu/wp-content/uploads/2024/03/20240228-dataderisklayout.pdf.
36    “Irish Data Protection Commission Announces €345 Million Fine of TikTok,” Office of the Irish Data Protection Commissioner, September 15, 2023, https://www.dataprotection.ie/en/news-media/press-releases/DPC-announces-345-million-euro-fine-of-TikTok.
37    “Following EDPB Decision, TikTok Ordered to Eliminate Unfair Design Practices Concerning Children,” European Data Protection Board, September 15, 2023, https://www.edpb.europa.eu/news/news/2023/following-edpb-decision-tiktok-ordered-eliminate-unfair-design-practices-concerning_en.
38    “Tesla Reaches Deals in China on Self-Driving Cars,” New York Times, April 29, 2024, https://www.nytimes.com/2024/04/29/business/elon-musk-tesla-china-full-self-driving.html.
39    “Memorandum of Understanding with China,” German Federal Ministry of Digital and Transport, April 16, 2024,
https://bmdv.bund.de/SharedDocs/DE/Pressemitteilungen/2024/021-wissing-deutschland-china-absichtserklaerung-automatisiertes-und-vernetztes-fahren.html.
40    Frances Burwell and Andrea Rodríguez, “The US-EU Trade and Technology Council: Assessing the Record on Data and Technology Issues,” Atlantic Council, April 20, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/us-eu-ttc-record-on-data-technology-issues/.
41    “U.S.-EU Trade and Technology Council (TTC),” US State Department, https://www.state.gov/u-s-eu-trade-and-technology-council-ttc/.
42    “G7 DPAs’ Action Plan,” German Office of the Federal Commissioner for Data Protection and Freedom of Information (BfDI), June 22, 2023, https://www.bfdi.bund.de/SharedDocs/Downloads/EN/G7/2023-Action-Plan.pdf?__blob=publicationFile&v=1.
43    OECD, Declaration on Government Access to Personal Data Held by Private Sector Entities, December 14, 2022, OECD/LEGAL/0487, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0487.

The post Who’s a national security risk? The changing transatlantic geopolitics of data transfers appeared first on Atlantic Council.

What to do about ransomware payments https://www.atlanticcouncil.org/blogs/econographics/what-to-do-about-ransomware-payments/ Tue, 14 May 2024 16:57:36 +0000 https://www.atlanticcouncil.org/?p=764759 And why payment bans alone aren’t sufficient.


Ransomware is a destabilizing form of cybercrime with over a million attacks targeting businesses and critical infrastructure every day.  Its status as a national security threat, even above that of other pervasive cybercrime, is driven by a variety of factors like its scale, disruptive nature, and potential destabilizing impact on critical infrastructure and services—as well as the sophistication and innovation in ransomware ecosystems and cybercriminals, who are often Russian actors or proxies.   

The ransomware problem is multi-dimensional. Ransomware is both a cyber and a financial crime, exploiting vulnerabilities not only in the security of digital infrastructure but also in the financial system that have enabled the rise of sophisticated Ransomware-as-a-Service (RaaS) economies.  It is also inherently international, involving transnational crime groups operating in highly distributed networks that are targeting victims, leveraging infrastructure, and laundering proceeds without regard for borders.  As with other asymmetric threats, non-state actors can achieve state-level consequences in disruption of critical infrastructure.

With at least $1 billion reported in ransomware payments in 2021 and with incidents targeting critical infrastructure like hospitals, it is not surprising that the debate on ransomware payments is rising again. Ransomware payments themselves are problematic—they are the primary motive for these criminal acts, serving to fuel and incentivize this ecosystem. Many are also already effectively banned, in that payments to sanctioned actors are prohibited. However, taking a hardline position on ransomware payments is also challenging because of its potential impact on victims, on visibility and cooperation, and on limited resources.

Cryptocurrency’s role in enabling ransomware’s rise

While ransomware has existed in some form since 1989, the emergence of cryptocurrencies as an easy means for nearly instantaneous, peer-to-peer, cross-border value transfer contributed to the rise of sophisticated RaaS economies. Cryptocurrencies use largely public, traceable ledgers, which can certainly benefit investigations and disruption efforts. In practice, however, those disruption efforts are hindered by weaknesses in cryptocurrency ecosystems: lagging international and industry compliance with anti-money laundering and countering the financing of terrorism (AML/CFT) standards; the growth of increasingly sophisticated methods of obfuscation leveraging mixers, anonymity-enhanced cryptocurrencies, chain-hopping, and intermixing with off-chain and traditional finance methods; and insufficient steps taken to enable real-time, scaled detection and timely interdiction of illicit cryptocurrency proceeds.
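To make the traceability point concrete, the sketch below follows funds across a small, entirely hypothetical transaction graph from a ransom deposit address toward a regulated exchange where interdiction might occur. The addresses, graph structure, and `trace_funds` helper are illustrative assumptions rather than real chain data or any analytics vendor's API; as the mixer branch shows, obfuscation techniques like mixing and chain-hopping break exactly this kind of simple path-following.

```python
from collections import deque

# Hypothetical transaction graph: address -> addresses it sent funds to.
# Real investigations traverse billions of on-chain transactions; this toy graph
# only illustrates why a public ledger supports path-following at all.
TX_GRAPH = {
    "ransom_addr": ["hop_1", "hop_2"],
    "hop_1": ["mixer_in"],            # funds entering a mixer defeat this simple method
    "hop_2": ["hop_3"],
    "hop_3": ["exchange_deposit_A"],  # a regulated exchange is a potential interdiction point
}

KNOWN_EXCHANGE_DEPOSITS = {"exchange_deposit_A"}
KNOWN_MIXERS = {"mixer_in"}


def trace_funds(start: str) -> list[list[str]]:
    """Breadth-first search from a ransom address to known exchange deposit addresses."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        current = path[-1]
        if current in KNOWN_EXCHANGE_DEPOSITS:
            paths.append(path)   # candidate point for a freeze, subpoena, or SAR
            continue
        if current in KNOWN_MIXERS:
            continue             # obfuscation: the trace stops here
        for nxt in TX_GRAPH.get(current, []):
            queue.append(path + [nxt])
    return paths


if __name__ == "__main__":
    for p in trace_funds("ransom_addr"):
        print(" -> ".join(p))    # ransom_addr -> hop_2 -> hop_3 -> exchange_deposit_A
```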

Despite claims to the contrary from some industry advocates and policymakers, RaaS economies would not work at the same level of scale and success without cryptocurrency, at least in its current state of compliance and exploitable features. Massively scaled ransomware campaigns targeting thousands of devices could not work by asking victims to pay via wire transfers or gift cards pointing to common accounts at regulated banks, or by widely publishing a physical address. Reliance on traditional finance methods would require a major, and likely significantly less profitable, evolution in ransomware models.

The attraction of banning ransomware payments

Any strategy to deal with ransomware needs to have multiple elements, and one key aspect is the approach to ransomware payments. The Biden administration’s multi-pronged counter-ransomware efforts have driven unprecedented coordination, seen in actions like disrupting the infrastructure and actors behind major ransomware variants, OFAC and FinCEN designations of actors and financial institutions facilitating ransomware, pre-ransomware notifications to affected companies by CISA, and a fifty-member International Counter Ransomware Initiative.

However, ransomware remains a significant threat and is still affecting critical infrastructure. As policymakers in the administration and in Congress weigh every tool available, they will have to consider the effectiveness of the existing policy approach to ransomware payments. Some view payment bans as a necessary action to address the risks ransomware presents to Americans and to critical infrastructure. Set against the backdrop of the moral, national security, and economic imperatives to end this destabilizing activity, bans could be the quickest way to diminish the incentives for targeting Americans and to reduce the significant amounts of money making it into the hands of criminals.

Additionally, banning ransomware payments promotes other Administration policy objectives, like driving a greater focus on cybersecurity and resilience. Poor cyber hygiene, and especially poor identity and access management, is frequently exploited in ransomware attacks. Removing payments as a potential “escape hatch” is seen by some as a way to leverage market forces to incentivize better cyber hygiene, especially in a space where the government has limited and fragmented regulatory authority.

Those who promote bans typically do not come to that position lightly but instead see them as a last resort for deterring ransomware. The reality is that disruption has not yet been scaled to the extent needed to diminish this threat below the level of a national security concern—driven by insufficient resourcing, limits on information sharing and collaboration, timeliness issues for the use of certain authorities, and insufficient international capacity and coordination on combating cyber and crypto crime. When policymakers are in search of high-impact initiatives to reduce the high-impact threat of ransomware, many understandably view bans as attractive.

Challenges with banning ransomware payments

However, taking a hardline position on ransomware payments can also present practical and political challenges:

  • Messaging and optics of punishing victims: A ban inherently places the focus of the policy burden and messaging on the victims, potentially not stopping them from using this tool but instead raising the costs for them to do so. Blaming victims who decide to pay in order to keep their company intact presents moral and political challenges.
  • Limited resources that need to be prioritized against the Bad Guys: For a ban to be meaningful, it would have to be enforced. Spending enforcement resources against victims—resources that could otherwise go toward scaling disruption of the actual perpetrators—could divert critically limited capacity from efforts against the ransomware actors.
  • Likelihood that payments will still happen as companies weigh the costs against the benefits: Many feel that companies, if forced to choose between certain demise and the costs of likely discovery and legal or regulatory action by the government, will still end up making ransomware payments.
  • Disincentivizing reporting and visibility: A ban would also make companies less likely to report that they have been hit with ransomware, as they will aim to keep all options open as they decide how to proceed. This disincentivizes the transparency and cooperation from companies needed to drive effective implementation of the cyber incident and ransomware payment reporting requirements under the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) regulations administered by the Cybersecurity and Infrastructure Security Agency (CISA). Diminished cooperation and transparency could have a devastating effect on investigations and disruption efforts that rely on timely visibility.
  • Asking for permission means the government deciding which companies survive: Some advocates for bans propose exceptions, such as supplementing a presumptive ban with a licensing or waiver authority, with the government as the arbiter of which companies get to pay. This could enable certain entities, like hospitals, to use the payment “escape hatch.” However, placing the government in a position to decide which companies live and die is extremely complicated and raises uncomfortable questions. It is unclear what government body would be capable of, or should be entrusted with the authority for, making that call at all, especially in as timely a fashion as would be required. Granting approval could also place the government in the uncomfortable position of essentially approving payments to criminals.

Additional policy options that can strike a balance for practical implementation

In light of the large-scale, disruptive threat to critical infrastructure from ransomware, policymakers will have to consider other initiatives alongside any ransomware payment approach to strike a balance between enhancing disruption and incentivizing security measures:

  • Resource agencies and prioritize counter-ransomware efforts: Government leadership must properly resource, through appropriations, and prioritize disruption efforts domestically and internationally as part of a sustained pressure campaign against prioritized ransomware networks.
  • International cyber and cryptocurrency capacity building and pressure campaign: Agencies should prioritize targeted international engagement toward defined priority jurisdictions, such as capacity building where capability lags and diplomatic pressure where political will lags. Capacity building and pressure should build both cybersecurity and cryptocurrency capacity, including critical infrastructure controls and regulatory and law enforcement capabilities. Jurisdictional prioritization could account for elements like the top nations where RaaS actors and infrastructure operate and where funds are primarily laundered and cashed out.
  • Enhance targeting authorities for use against ransomware actors: Congress should address limitations in existing authorities to enable greater disruptive action against the cyber and financial elements of ransomware networks. For example, Congress could consider fixes to AML/CFT authorities (e.g., 311 and 9714 Bank Secrecy Act designations) for better use against ransomware financial enablers, as well as potential fixes that the defense, national security, and law enforcement communities may need.
  • Ensure government and industry visibility for timely interdiction and disruption of ransomware flows: Congress, law enforcement, and regulatory agencies should work with industry to ensure critical visibility across key ecosystem participants to enable disruption efforts, such as by: enforcing reporting requirements for ransomware payments under CIRCIA and US Treasury suspicious activity reporting (SAR) requirements; mandating through law that entities (such as digital forensic and incident response [DFIR] firms) that negotiate or make payments to ransomware criminals on behalf of victims, including in providing decryption services for victims, be regulated as financial institutions with SAR reporting requirements; and driving the evolution of standards, like those for cyber indicators, to enable real-time information sharing and ingestion of cryptocurrency illicit finance indicators so that responsible ecosystem participants can disrupt illicit finance flows (a minimal sketch of such an indicator record follows this list).
  • Prioritize and scale outcome-driven public-private partnerships (PPPs): Policymakers should prioritize, fund, and scale timely efforts for PPPs across key infrastructure and threat analysis actors (e.g., internet service providers [ISPs], managed service providers [MSPs], cyber threat firms, digital forensic and incident response [DFIR] and negotiation firms, cryptocurrency threat firms, cryptocurrency exchanges, and major crypto administrators and network-layer players [e.g., mining pools and validators]) focused on disruption of key ransomware activities and networks.
  • Incentivize and promote better security while making it less attractive to pay ransoms: Policymakers could leverage market and regulatory incentives to drive adoption of better security measures that deter ransomware and make it less attractive to pay. For example, legislation could prohibit cyber insurance reimbursement of ransomware payments. Regulatory action and legislative authority expansion could also drive implementation of high-impact defensive measures against ransomware across critical infrastructure and coordination of international standards on cyber defense.
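To illustrate what machine-readable sharing of illicit finance indicators might look like, the minimal record below sketches the kind of fields an exchange or analytics firm could ingest to screen incoming deposits in near real time. The schema, field names, and example values are assumptions for illustration only; real programs would follow agreed standards (for example, STIX-style cyber threat indicators extended with on-chain attributes) rather than this ad hoc structure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class IllicitFinanceIndicator:
    """Hypothetical record for sharing a ransomware-linked cryptocurrency address."""
    address: str       # on-chain address observed receiving ransom proceeds
    asset: str         # e.g., "BTC"
    threat_type: str   # e.g., "ransomware"
    campaign: str      # reporting party's label for the ransomware variant
    confidence: float  # 0.0-1.0, as asserted by the reporting party
    reported_by: str   # sharing organization
    first_seen: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def matches(self, deposit_address: str) -> bool:
        """Screen an incoming deposit address against this indicator."""
        return deposit_address == self.address


# Example ingestion and screening by a (hypothetical) exchange compliance system.
indicator = IllicitFinanceIndicator(
    address="bc1q-example-ransom-address",
    asset="BTC",
    threat_type="ransomware",
    campaign="ExampleLocker",
    confidence=0.9,
    reported_by="example-threat-sharing-group",
)
print(indicator.matches("bc1q-example-ransom-address"))  # True: hold the funds and file a SAR
```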

While attractive for many reasons, banning ransomware payments presents real challenges, and limiting attacks demands a broader strategy. Only this kind of multi-pronged, whole-of-nation approach will be sufficient to reduce the systemic threats presented by disruptive cybercrime that often targets our most vulnerable.


Carole House is a nonresident senior fellow at the Atlantic Council GeoEconomics Center and the Executive in Residence at Terranet Ventures, Inc. She formerly served as the director for cybersecurity and secure digital innovation for the White House National Security Council.

The post What to do about ransomware payments appeared first on Atlantic Council.

#BalkansDebrief – Why is France refocused on security in the Balkans? | A debrief with Alexandre Vulic https://www.atlanticcouncil.org/content-series/balkans-debrief/balkansdebrief-why-is-france-refocused-on-security-in-the-balkans-a-debrief-with-alexandre-vulic/ Mon, 15 Apr 2024 17:46:31 +0000 https://www.atlanticcouncil.org/?p=757169 In this episode of #BalkansDebrief, Europe Center Nonresident Senior Fellow Ilva Tare welcomes Alexandre Vulic. They discuss France's security concerns for the Western Balkans.



IN THIS EPISODE

The Western Balkans remain a security concern, particularly Bosnia and Herzegovina. Recently, France has deployed a battalion as part of the Strategic Reserve Force to assist the EUFOR mission and exercise a level of deterrence in Bosnia and Kosovo, two countries with security issues, where France wants to see progress.

Ilva Tare, a Nonresident Senior Fellow at the Europe Center, discusses regional security issues with Alexandre Vulic, Deputy Director General for Strategic Affairs, International Security, and Arms Control at the French Ministry of Europe and Foreign Affairs.

Why does France consider the situation in Bosnia as stable yet fragile? What are the main concerns that threaten security in the region? How do cybersecurity, disinformation, and false narratives affect the Western Balkans? And how can France counter Russia’s influence, which is exercised via proxies and nationalist forces?

MEET THE #BALKANSDEBRIEF HOST

The Europe Center promotes leadership, strategies, and analysis to ensure a strong, ambitious, and forward-looking transatlantic relationship.

The post #BalkansDebrief – Why is France refocused on security in the Balkans? | A debrief with Alexandre Vulic appeared first on Atlantic Council.

Atkins in E&E News by POLITICO https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-in-ee-news-by-politico/ Tue, 09 Apr 2024 17:54:43 +0000 https://www.atlanticcouncil.org/?p=756656 On April 8, IPSI Nonresident Senior Fellow Victor Atkins was quoted in an E&E News by POLITICO article, in which he discussed the vulnerabilities of the US power grid, which is suffering increased state-sponsored cyberattacks.



On April 8, IPSI Nonresident Senior Fellow Victor Atkins was quoted in an E&E News by POLITICO article, in which he discussed the vulnerabilities of the US power grid, which is suffering increased state-sponsored cyberattacks.

The post Atkins in E&E News by POLITICO appeared first on Atlantic Council.

Ralby quoted in the Washington Post on the Baltimore bridge collapse https://www.atlanticcouncil.org/insight-impact/in-the-news/ralby-quoted-in-the-washington-post-on-the-baltimore-bridge-collapse/ Wed, 27 Mar 2024 13:50:17 +0000 https://www.atlanticcouncil.org/?p=753214 The post Ralby quoted in the Washington Post on the Baltimore bridge collapse appeared first on Atlantic Council.


The post Ralby quoted in the Washington Post on the Baltimore bridge collapse appeared first on Atlantic Council.

Break up TikTok, arm Ukraine https://www.atlanticcouncil.org/content-series/inflection-points/break-up-tiktok-arm-ukraine/ Wed, 20 Mar 2024 11:30:00 +0000 https://www.atlanticcouncil.org/?p=749993 The United States and its allies need to address both Russia’s military threats and Chinese influence operations.


The US Congress should force the sale of TikTok or ban the app, and it should pass its long-delayed aid package for Ukraine. Just as important, it should signal to American voters that both represent the front lines in the strategic battle for the global future.

What’s surprising is that the same House Republican minority that has blocked Ukraine funding for more than five months hasn’t made this connection. What might help this group is a close reading of the recently released “Annual Threat Assessment of the US Intelligence Community”—and Peggy Noonan’s latest Wall Street Journal column.

News reports have focused public attention on the new intelligence report primarily because of its assessment of Israeli Prime Minister Benjamin Netanyahu’s “viability as a leader” as being “in jeopardy.” Even more important, however, are the links it draws between regional conflicts in Europe and the Middle East and our unfolding, generational contest with China to shape the future.

“During the next year,” the assessment explains, “the United States faces an increasingly fragile global order strained by accelerating strategic competition among major powers, more intense and unpredictable transnational challenges, and multiple regional conflicts with far-reaching implications.”

Regarding Beijing, the assessment underscores China’s growing efforts online, resembling the long-standing Moscow playbook, “to exploit perceived US societal divisions . . . for influence operations.” That includes experimentation with artificial intelligence. TikTok accounts run by a Chinese government propaganda arm “reportedly targeted candidates from both political parties during the US midterm election cycle in 2022,” it notes, something the Atlantic Council’s Digital Forensic Research Lab was the first to show through an open-source investigation.

In a valuable new report, the Atlantic Council’s own analysts stopped short of calling for a breakup or ban of TikTok as a means of addressing the platform’s threats to US national security. “TikTok: Hate the Game, Not the Player” argues that an exclusive focus on the Chinese app overlooks “broader security vulnerabilities in the US information ecosystem.”

Peggy Noonan makes a compelling case for why the United States should nevertheless target TikTok. “It uses algorithms to suck up information about America’s 170 million users, giving it the potential to create dossiers,” she writes. Federal Bureau of Investigation Director Christopher Wray, Noonan adds, has warned that China “has the ability to control software on millions of devices in the US.”

That brings me to Ukraine.

It’s difficult to gather hard evidence to illustrate how the Chinese government is deploying the TikTok weapon, yet the existing and potential dangers were sufficient to prompt a bipartisan House vote against it of 352-65, unifying members of Congress such as Democrat Nancy Pelosi and Republican Elise Stefanik, who are more often poles apart.

By comparison, the evidence of Russian President Vladimir Putin’s murderous intentions is incontestable. Russian forces are advancing, and US dithering is costing Ukrainian lives. It’s also encouraging an increasingly close autocratic partnership built on the shared belief that now is the moment to test US and Western staying power and resolve.

“Russia’s strengthening ties with China, Iran, and North Korea to bolster its defense production and economy are a major challenge for the West and its partners,” says the new report by the US intelligence community. On Tuesday, Reuters reported that Putin will visit Chinese leader Xi Jinping in May, building upon what he has called their “no limits” partnership.

Weeks ago, a large Senate majority voted in favor of an aid package that would bring $60 billion in aid to Ukraine alongside support for Israel and Taiwan. A similar House majority would support that, but thus far a small Republican minority in the lower chamber has blocked a vote. This needs to be fixed quickly either by Speaker Mike Johnson permitting a floor vote, or through a discharge petition signed by a bipartisan majority.

With the stakes of such a historic nature, the United States and its allies should address both Russia’s military threats, with Chinese support, and Chinese influence operations, with Russian inspiration.

It’s not one or the other—but both. And now.


Frederick Kempe is president and chief executive officer of the Atlantic Council. You can follow him on Twitter: @FredKempe.

This edition is part of Frederick Kempe’s Inflection Points Today newsletter, a column of quick-hit insights on a world in transition. To receive this newsletter throughout the week, sign up here.

Atkins in CyberScoop https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-in-cyberscoop/ Sat, 16 Mar 2024 19:27:06 +0000 https://www.atlanticcouncil.org/?p=752696 On March 15, IPSI Nonresident Senior Fellow Victor Atkins was quoted in a Cyberscoop article discussing industry complacency as Chinese hacking operations become increasingly threatening.

The post Atkins in CyberScoop appeared first on Atlantic Council.

Will the US crack down on TikTok? Six questions (and expert answers) about the bill in Congress. https://www.atlanticcouncil.org/blogs/new-atlanticist/will-the-us-crack-down-on-tiktok-six-questions-and-expert-answers-about-the-bill-in-congress/ Wed, 13 Mar 2024 23:42:14 +0000 https://www.atlanticcouncil.org/?p=747735 The US House has just passed a bill to force the Chinese company ByteDance to either divest from TikTok or face a ban in the United States.

The post Will the US crack down on TikTok? Six questions (and expert answers) about the bill in Congress. appeared first on Atlantic Council.

The clock is ticking. On Wednesday, the US House overwhelmingly passed a bill to force the Chinese company ByteDance to divest from TikTok, or else the wildly popular social media app would be banned in the United States. Many lawmakers say the app is a national security threat, but the bill faces an uncertain path in the Senate. Below, our experts address six burning questions about this bill and TikTok at large.

1. What kind of risks does TikTok pose to US national security?

Chinese company ByteDance’s ownership of TikTok poses two specific risks to US national security. One has to do with concerns that the Chinese Communist Party (CCP) could use its influence over the Chinese owners to use TikTok’s algorithm for propaganda purposes. Addressing this security concern is tricky due to legal protections for freedom of expression. The other risk, and the one addressed through the current House legislation, has to do with the ability of the CCP to use Chinese ownership of TikTok to access the massive amount of data that the app collects on its users. This could include data on everything from viewing tastes, to real-time location, to information stored on users’ phones outside of the app, including contact lists and keystrokes that can reveal, for example, passwords and bank activity.

Sarah Bauerle Danzman is a resident senior fellow with the Economic Statecraft Initiative in the Atlantic Council’s GeoEconomics Center.

This debate is not over free speech or access to social media: The question is fundamentally one of whether the United States can or should force a divestment of a social media company from a parent company (in this case ByteDance) if the company can be compelled to act under the direction of the CCP. We have to ask: Does the CCP have the intent or ability to compel data to serve its interests? There is an obvious answer here. We know that China has already collected massive amounts of sensitive data from Americans through efforts such as the Office of Personnel Management hack in 2015. Recent unclassified reports, including from the Office of the Director of National Intelligence, show the skill and intent of China to use personal data for influence. And the CCP has the legal structure in place to compel companies such as ByteDance to comply and cooperate with CCP requests.

Meg Reiss is a nonresident senior fellow at the Scowcroft Strategy Initiative of the Atlantic Council’s Scowcroft Center for Strategy and Security.

2. Are those risks unique to TikTok?

TikTok is not an unproblematic platform, and there are real and significant user risks that could pose dangers to safety and security, especially for certain populations. However, focusing on TikTok ignores broader vulnerabilities in the US information ecosystem that put Americans at risk. An outright ban of TikTok as currently proposed—particularly absent clearer standards for all platforms—would not meaningfully address these broader risks and would in fact potentially undermine US interests in a much more profound way.

As our recent report outlines in detail, a ban is unlikely to achieve the intended effect of meaningfully curbing China’s ability to gather sensitive data on Americans or to conduct influence operations that harm US interests. It also may contribute to a global curbing of the free flow of data that is essential to US tech firms’ ability to innovate and maintain a competitive edge.

Kenton Thibaut is a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab.

Some have argued that TikTok, while on the aggressive end of the personal data collection spectrum, collects similar data to what other social media companies collect. However, the US government would counter with two points: First, TikTok has a history of skirting data privacy rules, such as those limiting data collection on children and those that prevent the collection of device identifiers called MAC addresses, and therefore the company cannot be trusted to handle sensitive personal data in accordance with the law. And second, unlike other popular apps, TikTok is ultimately beholden to Chinese regulations. This includes the 2017 Chinese National Intelligence Law that requires Chinese companies to hand over a broad range of information to the Chinese government if asked. Because China’s legal system is far more opaque than the United States’, it is unclear if the US government or its citizens would even know if the Chinese government ever asked for this data from TikTok. While TikTok’s management has denied supplying the Chinese government with such data, insider reports have uncovered Chinese employees gaining access to US user data. In other words, the US government has little reason to trust that ByteDance is keeping US user data safe from the CCP.

—Sarah Bauerle Danzman

3. What does the House bill actually do?

There are two important, related bills. The one that passed the House today is the Protecting Americans from Foreign Adversary Controlled Applications Act, which forces divestment. It is not an outright ban, and it is intended to address the real risk of ByteDance—thus TikTok—falling under the jurisdiction of China’s 2017 National Intelligence Law, which compels Chinese companies to cooperate with the CCP’s requests. However, divestment doesn’t completely solve for the additional potential risks of the CCP using TikTok in a unique or systemic way for data collection, algorithmic tampering (e.g. what topics surface or don’t surface to users), or information operations (e.g. an influence campaign unique to TikTok as opposed to on other platforms as well). Second, the Protecting Americans’ Data from Foreign Adversaries Act, which cleared a House committee last week, more directly addresses a broader risk by blocking the Chinese government’s access to the type of data that TikTok and many other social media platforms collect and that is sold on the open market. The former without the latter is an incomplete approach to protecting Americans’ data from the CCP—and even the two combined fall short of a federal data privacy standard.

Graham Brookie is vice president and senior director of the Digital Forensic Research Lab.

There is no question China seeks to influence the American public and harvests large amounts of data on American citizens. As our recent report illuminates however, the Chinese state’s path to these goals depends very little on TikTok.

Today’s actions in the House underscore the disjointed nature of the US approach to governing technology. Rather than focus on TikTok specifically, it would be both legally and geopolitically wiser to pass legislation that sets standards for everyone, and not just one company. That could mean setting standards for what actions or behavior by any social media company would be unacceptable (for example, on the use of algorithms or the collection and selling of data). Or Congress could focus on prohibiting companies that are owned by states proven to have conducted hostile actions toward US digital infrastructure from operating in the United States. That would certainly include TikTok (and many other companies). This bill takes a halfway approach, both tying itself explicitly to TikTok owner ByteDance and hinting that it could apply to “other social media companies.”

Rose Jackson is the director of the Democracy and Tech Initiative at the Digital Forensic Research Lab.

The recently passed House bill, if it were to become law, would create a pathway to force the divestment of Chinese ownership in TikTok or ban the app from app stores and web hosting sites. Unlike previous attempts by the Trump administration to ban the app outright or force a divestment through the Committee on Foreign Investment in the United States, the Protecting Americans from Foreign Adversary Controlled Applications Act would not just affect TikTok. Instead, the legislation would create a process through which the US government could designate social media apps that are considered to be under the control of foreign adversaries as national security threats. Once identified as threats, the companies would have 180 days to divest from the foreign ownership or be subject to a ban.

—Sarah Bauerle Danzman

4. What would be some of the global ripple effects of a TikTok ban?

The United States has always opposed efforts by authoritarian nations seeking to build “great firewalls” around themselves. This model of “cyber sovereignty” sees the open, interoperable, and free internet as a threat, which is why countries like China already have a well-funded strategy to leverage global governance platforms to drive the development of a less open and more authoritarian-friendly version. A TikTok ban would ironically benefit authoritarian governments as they seek to center state-level action (over multi-stakeholder processes) in internet governance. TikTok should not lead the United States to abandon its longstanding commitment to the values of a free, open, secure, and interoperable internet.

A ban could generate more problems than it would solve. What the United States should consider instead is passing federal privacy laws and transparency standards that apply to all companies. This would be the single most impactful way to address broader system vulnerabilities, protect US values and commitments, and address the unique risks related to TikTok’s Chinese ownership, while avoiding the potential significant downsides of a ban. 

Kenton Thibaut

5. What do you make of TikTok’s response, particularly in pushing its users to flood Capitol Hill with calls?

Members of Congress were rightfully alarmed by TikTok’s use of its platform to send push notifications encouraging users to call their representatives. However, Uber and Lyft used this exact same tactic in California when trying to defeat legislation that would have required them to provide benefits to their drivers. If we try to solve “TikTok” and not the broader issue TikTok is illuminating, we will keep coming back to these same issues over and over again.

—Rose Jackson

6. How is China viewing this debate?

The CCP has a tendency to throw a lot of spaghetti at the wall in an attempt to make its arguments, in this case that the divestment of TikTok from its Chinese parent company ByteDance is unnecessary. When the CCP has justified the internment of Uyghurs, it has thrown out everything from defending its repression by alleging terrorist beliefs across the population to claiming that it was just helping with social integration and developing work programs. The CCP has already made claims that the divestment would cause investors to lose faith in the US market and that it shows a fundamental weakness and abuse of national security. Expect many different versions of these arguments and more. But all the anticipated pushback will be focused on diverting the public argument away from the fundamental concern: The Chinese government can, under law, force a Chinese company to share information.

—Meg Reiss

Kramer authors op-ed on the role of Congress in deterring Chinese cyber attacks https://www.atlanticcouncil.org/insight-impact/in-the-news/kramer-on-role-of-congress-in-deterring-chinese-cyber-attacks/ Tue, 05 Mar 2024 21:44:00 +0000 https://www.atlanticcouncil.org/?p=751700 Kramer advocates for US action against Chinese cyber threats, emphasizing their risk to economic and infrastructure security.

The post Kramer authors op-ed on the role of Congress in deterring Chinese cyber attacks appeared first on Atlantic Council.

On March 4, Scowcroft Center for Strategy and Security Distinguished Fellow and Board Director Franklin D. Kramer published an op-ed in The National Interest on the role of Congress in deterring Chinese cyber attacks.

In the article, Kramer highlights the serious threats Chinese cyberattacks pose to US economic security and critical infrastructure. He suggests four measures: providing cybersecurity tax credits to support small businesses, academia, and infrastructure; leveraging AI to improve security software; creating a corps of private-sector cybersecurity providers for wartime; and addressing the cybersecurity workforce shortage to enhance national resilience.

China’s determined cyber attacks on the United States call for significant actions to enhance national resilience both now and in the event of conflict.

Franklin D. Kramer

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

Experts react: What Biden’s new executive order about Americans’ sensitive data really does https://www.atlanticcouncil.org/blogs/new-atlanticist/experts-react/experts-react-what-bidens-new-executive-order-about-americans-sensitive-data-really-does/ Thu, 29 Feb 2024 19:05:56 +0000 https://www.atlanticcouncil.org/?p=742382 US President Joe Biden just issued an executive order restricting the large-scale transfer of personal data to “countries of concern.” Atlantic Council experts share their insights.

The post Experts react: What Biden’s new executive order about Americans’ sensitive data really does appeared first on Atlantic Council.

It’s a personal matter. On Wednesday, US President Joe Biden issued an executive order restricting the large-scale transfer of personal data to “countries of concern.” The order is intended to prevent genomic, health, and geolocation data, among other types of sensitive information, from being sold in bulk to countries such as China, which could use it to track or blackmail individuals. Can Biden’s directive stop sensitive data from slipping into the wrong hands? And what are the implications for privacy and cybersecurity more broadly? Below, Atlantic Council experts share their personal insights.

Click to jump to an expert analysis:

Rose Jackson: The absence of a federal US data protection law threatens national security

Kenton Thibaut: The focus on data brokers targets a key vulnerability in the US information ecosystem

Graham Brookie: An essential, baseline step for shoring up US data security

Sarah Bauerle Danzman: It will be essential to sort out how new rules fit in with the current regulatory structure

Justin Sherman: Congress must get involved to tame data brokerage over the long term

Maia Hamin: A welcome step, but beware of data brokers exploiting backdoors and work-arounds


The absence of a federal US data protection law threatens national security

The United States desperately needs a federal privacy or data protection law; the absence of one threatens our national interest and national security. While we wait for Congress to take the issue seriously, the Biden administration seems to be looking to leverage its executive authorities to take action where it can. Wednesday’s executive order should be understood in that context. The order takes particular aim at what are called data brokers—a lucrative market most Americans have likely never heard of. These companies quietly buy up troves of information collected through social media and credit card companies, consumer loyalty programs, mobile phone providers, health tech services, and more, then sell the combined files to whoever wants them. That means that currently, Chinese intelligence services don’t need an app like TikTok to collect data on US citizens; they can just buy it from a US company. So while this executive order won’t address all of the issues related to this unregulated and highly extractive market, it will close an obvious and glaring national security gap by barring the sale of such data to foreign adversaries.

Another significant piece of the executive order is its focus on genomic data as a particularly risky category. Genomic data are all but banned from provision to adversarial nations in any form. While this is a good step, the administration does not have the authority to ban the sale of genomic data to non-adversarial nations or domestically. This means there is a high likelihood that absent congressional or other action, the market for US genomic data will only grow. This underscores an uncomfortable reality when it comes to tech policy; there is no separating the foreign and domestic. Markets grow where there is incentive, and our continued failure in the United States to meaningfully grapple with how we want tech to be governed means we are choosing not to have input on the direction our own world-changing innovations will take.

Rose Jackson is the director of the Democracy + Tech Initiative at the Atlantic Council’s Digital Forensic Research Lab. She previously served as the chief of staff to the Bureau of Democracy, Human Rights, and Labor at the US State Department.


The focus on data brokers targets a key vulnerability in the US information ecosystem

While further details are still being developed (including rightsizing thresholds for what constitutes “bulk data”), the executive order is a welcome development for those concerned about data security. The focus on data brokers—as opposed to a single app, like TikTok—targets a key vulnerability in the US information ecosystem. Data brokers compile detailed profiles of individuals—including real-time location data—from various sources, including social media, credit card companies, and public records. This creates vulnerabilities for espionage and exploitation by foreign adversaries. So while the national security community has raised concerns over the Chinese government’s ability to use TikTok to access data on Americans, that access pales in comparison to what China already obtains through hacking and legal purchases from US data brokers.

Data security threats extend beyond individual apps to include data brokers and the broader lack of regulation in the tech industry. To protect privacy and national security, stronger regulations and transparency measures are needed, and the United States should pass comprehensive federal privacy legislation. However, in the interim, the administration has done what it can with this executive order to help stem the tide of Americans’ sensitive personal data flowing abroad. 

Kenton Thibaut is a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab).


An essential, baseline step for shoring up US data security

The executive order preventing the sale of bulk data to adversarial countries may sound technical, bureaucratic, and even opaque. However, it is one of the most essential baseline steps the United States needs to take in shoring up security in an era in which technology is at the forefront of geopolitical competition. Enormous amounts of information about Americans are bought and sold on the open market every single day. This measure is intended to make it harder for specific adversarial countries to legally buy billions of data points about citizens.

As many other more challenging technical issues arise—such as how to govern the rapid development of artificial intelligence—a standard for data privacy for every single person in the United States is sorely needed. Data privacy is the foundation for establishing a rights-respecting and rights-protecting approach in an era of both rapid technological change and geopolitical competition. The executive order is an important step that can be built on. The policy is a threat-based approach to securing citizens’ data and information from the worst foreign actors. Congress can strengthen this approach and address the limitations of an executive order by passing legislation for a strong federal data privacy standard that not only protects Americans’ data from foreign adversaries, but also provides Americans protection in general.

Graham Brookie is the vice president for technology programs and strategy, as well as senior director, of the Atlantic Council’s Digital Forensic Research Lab. He previously served in various roles over four years at the White House National Security Council.


It will be essential to sort out how new rules fit in with the current regulatory structure

With its latest executive order and related advance notice of proposed rulemaking, the Biden administration is trying to find transparent, clearly defined legal channels to address a specific set of national security challenges. These are the challenges that arise from the unmitigated and largely untracked commercial world of bulk data transfer to entities owned by, controlled by, or subject to the jurisdiction or direction of potential adversaries. The administration’s proposed rules demonstrate its seriousness of purpose in attempting to craft rules that are narrow in scope and application, while also anticipating and countering potential circumvention techniques of untrusted actors. They are also complicated. For example, they seek to stand up a new licensing line of effort with financial sanctions and export licenses based on a model from the Department of Justice and on the experiences of the Office of Foreign Assets Control and the Bureau of Industry and Security. This complexity raises questions about the feasibility and costs of compliance and enforcement.

Some parts of the proposed rules overlap significantly with existing regulatory structure, and especially with the Committee on Foreign Investment in the United States (CFIUS). In particular, the regulation will cover investments by covered persons and entities in US businesses that collect covered data, a class of transactions typically handled by the CFIUS. It will be important for the government to clearly articulate how the new rules and the different government entities involved will relate to each other, with a goal toward reducing rather than exacerbating regulatory complexity that leads to higher compliance costs and confusion. The proposed rules suggest that the CFIUS might take precedence, but the CFIUS is a costly and time-intensive case-by-case review that is supposed to be a tool of last resort. It would be more efficient and probably more effective to first apply investment restrictions based on these new rules and preserve case-by-case CFIUS review only in situations in which the new data security prohibitions and restrictions do not adequately address national security risks associated with a particular transaction. Doing so would reduce pressure on the CFIUS’s ever-growing caseload and would provide businesses with bright lines rather than black boxes.

Sarah Bauerle Danzman is a resident senior fellow with the GeoEconomics Center’s Economic Statecraft Initiative. She is also an associate professor of international studies at Indiana University Bloomington where she specializes in the political economy of international investment and finance.


Congress must get involved to tame data brokerage over the long term

Data brokerage is a multi-billion-dollar industry comprising thousands of companies. Foreign governments such as China and Russia obviously have many ways to get sensitive data on Americans, from hacking to tapping into advertising networks—and one of those vulnerabilities lies in the data brokerage industry.

Data brokers collect and sell data on virtually every single person in the United States, and that includes data related to government employees, security clearance-holding contractors, and active-duty military personnel. My team at Duke’s Sanford School of Public Policy published a detailed study in November 2023, where we purchased sensitive, individually identified, and nonpublic information such as health conditions, financial information, and data on religion and children about active-duty US military servicemembers from US data brokers—with little to no vetting, and for as cheap as twelve cents per servicemember. It would be easy for the Chinese or Russian governments to set up a website and purchase data on select Americans to blackmail individuals or run intelligence operations. With some datasets available for cents on the dollar per person, or incredibly granular datasets available for much more, it may be considerably cheaper than the cost of espionage for foreign governments to simply tap into the unregulated data brokerage ecosystem and buy data.

Of course, an executive order isn’t going to fix everything. At the end of the day, the fact that data brokers gather and sell Americans’ data at scale, without their knowledge, often without controls, is a congressional problem—and has signified a major congressional failure to act. Federal and state legislation is what will ultimately best tackle the privacy, safety, civil rights, and national security risks from the data brokerage industry. But that doesn’t mean the executive branch shouldn’t act in the meantime. If the executive branch can introduce even a few additional regulations for data brokers to better vet their customers or to stop selling certain kinds of data to certain foreign actors, that’s an important improvement from the status quo.

Over the coming months, important challenges for the executive branch will be defining terms such as “data broker,” ensuring that covered data brokers are required to properly implement “know your customer” requirements, and figuring out ways to manage regulatory compliance in light of the size and operating speed of the data brokerage industry.

Justin Sherman is a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative and founder and CEO of Global Cyber Strategies.


A welcome step, but beware of data brokers exploiting backdoors and work-arounds

The commercial data broker ecosystem monetizes and sells Americans’ most sensitive data, often piggybacking off of invasive ad-tracking infrastructure to vacuum up and auction off specific information about Americans, such as their location history or mental health conditions. This executive order is a useful step toward making it more difficult for specific adversary countries to purchase that data, and it makes clear sense from a national security perspective.

However, while this market remains (otherwise) largely unregulated and flourishing in the United States, in the absence of a comprehensive privacy law or other restrictions on data brokering, Americans’ privacy will continue to suffer. Leaving this market intact domestically runs the risk of opening up potential backdoors and work-arounds to the limitations in the executive order. It also—perhaps not coincidentally—leaves the door open for the US government itself to continue purchasing and using commercial data in its own intelligence programs. 

That’s all to say, cracking down on data brokers is always welcome, so it’s great to see this order (and recent action from the Federal Trade Commission as well). Next, let’s challenge Congress and the executive to push it further.

Maia Hamin is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab.

Braw featured in Politico on espionage in Europe https://www.atlanticcouncil.org/insight-impact/in-the-news/braw-featured-in-politico-on-espionage-in-europe/ Tue, 27 Feb 2024 22:15:01 +0000 https://www.atlanticcouncil.org/?p=741978 On February 27, Transatlantic Security Initiative senior fellow Elisabeth Braw wrote an opinion piece in Politico discussing the changes in espionage tactics by authoritarian regimes.   

The post Braw featured in Politico on espionage in Europe appeared first on Atlantic Council.


The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

To combat Chinese cyber threats, the US must spearhead a new Indo-Pacific intelligence coalition https://www.atlanticcouncil.org/blogs/new-atlanticist/to-combat-chinese-cyber-threats-the-us-must-spearhead-a-new-indo-pacific-intelligence-coalition/ Tue, 27 Feb 2024 17:56:19 +0000 https://www.atlanticcouncil.org/?p=741039 Such a coalition would help disrupt cyber threats, signal US resolve, and ideally help deter future cyberattacks from China.

The post To combat Chinese cyber threats, the US must spearhead a new Indo-Pacific intelligence coalition appeared first on Atlantic Council.

When the highest-ranking US law enforcement official describes a concern as “the defining threat of our generation,” it should be taken seriously. On January 31, FBI Director Christopher Wray testified before Congress about China’s capability to threaten US national and economic security. In particular, he identified the imminent cyber threat that Chinese hackers pose to critical infrastructure. A China-sponsored cyber group called “Volt Typhoon,” Wray explained, has prepositioned cyberattack capabilities in the US communications, energy, transportation, and water sectors intended to “destroy or degrade the civilian critical infrastructure that keeps us safe and prosperous.” Alarming in its own right, Volt Typhoon is just the latest example of Beijing’s ongoing “cyber onslaught,” Wray added.

This story is not new. Since at least 2019, the US government has publicly sounded the alarm about the threat that China’s cyberattack and espionage enterprise poses to US national security and to regional stability in East Asia. The 2023 annual threat assessment by the US Office of the Director of National Intelligence (ODNI) states that China “uses coordinated, whole-of-government tools to demonstrate strength and compel neighbors to acquiesce to its preferences.” The assessment adds that China’s cyber capabilities are essential for orchestrating espionage, malign influence, and attack operations in support of Chinese interests.

To confront the threat to critical infrastructure posed by Volt Typhoon and other state-sponsored Chinese cyber actors, the United States should launch an expansive new multilateral cyber threat intelligence sharing coalition in the Indo-Pacific. This coalition should utilize some of the lessons learned from the Five Eyes intelligence alliance, and it would incorporate members of the Five Eyes alliance, US Indo-Pacific partners, and even some European states. The expanded reach and resources of such a coalition would help disrupt cyber threats, signal to the world that the United States and its partners are committed to protecting both cyber and physical infrastructure from malicious actors, and ideally help deter future cyber threats from China. 

Meeting the threat

The Biden administration has already taken some steps to improve cybersecurity cooperation in the Indo-Pacific region, such as recent commitments with Japan and South Korea. In each case, the partners recognize the importance of sharing cyber threat intelligence information related to critical infrastructure threats. A goal of this cooperation is to enhance cybersecurity in the region, especially through capacity building and sharing best practices with network defenders and incident responders. In practice, this often amounts to arming individual critical infrastructure asset owners with better tools and procedures that will improve their cybersecurity posture over time.

Increased cybersecurity at the point of a potential attack is necessary, but it is not sufficient given the urgency and scope of the threat. Dedicated, well-resourced state-sponsored adversaries, as demonstrated by Volt Typhoon, have already proven they can establish a cyberattack foothold in the control systems that operate critical infrastructure.

In fact, this strategy of merely sharing cybersecurity information with network defenders may play into Beijing’s hands, since malicious actors already present with deep access privileges in these networks could be prepositioned to observe how new cybersecurity programs are implemented, potentially giving them valuable information to evade detection in the future.

The additional key to interrupting China’s cyberattack enterprise as it exists today is for the United States and its allies and partners to detect and dismantle global command-and-control (C2) infrastructures that Chinese-supported threat groups use to perform “living off the land” techniques. These techniques are very difficult for network defenders to identify because they use a network’s built-in administration tools to closely mimic normal network business traffic and operational protocols. For any threat actor to execute disruptive actions within a victim network, they must first establish remote C2 connections through external communication access points, such as the open internet or web-based channels. Network defenders might miss these remote C2 connections, lost in a cacophony of legitimate network traffic. However, US and allied intelligence services are often better equipped to monitor, track, and disrupt covert C2 activities wherever they occur around the world.
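To make the detection problem concrete, here is a minimal, illustrative Python sketch of one common heuristic used against this kind of activity: flagging outbound connections that “beacon” to a given destination at suspiciously regular intervals, a pattern typical of automated C2 check-ins rather than human-driven traffic. The field names, thresholds, and sample records are assumptions for illustration only; this is not a description of any specific agency tool or of the intelligence-sharing methods discussed in this article.

```python
# Illustrative sketch only: a toy heuristic for spotting beacon-like outbound
# connections (regular check-in intervals to a single destination) in exported
# flow records. Field names and thresholds are assumptions, not references to
# any real tool or dataset.
from collections import defaultdict
from statistics import mean, pstdev

def flag_beacon_candidates(flows, min_events=6, max_jitter=0.1):
    """flows: iterable of (timestamp_seconds, src_host, dst_host) tuples."""
    by_pair = defaultdict(list)
    for ts, src, dst in flows:
        by_pair[(src, dst)].append(ts)

    candidates = []
    for (src, dst), times in by_pair.items():
        if len(times) < min_events:
            continue
        times.sort()
        gaps = [later - earlier for earlier, later in zip(times, times[1:])]
        avg = mean(gaps)
        if avg == 0:
            continue
        # Low jitter relative to the average gap suggests automated check-ins
        # rather than human-driven browsing.
        if pstdev(gaps) / avg <= max_jitter:
            candidates.append({"src": src, "dst": dst,
                               "events": len(times), "avg_gap_s": round(avg, 1)})
    return candidates

if __name__ == "__main__":
    # Hypothetical records: one internal host phoning home roughly every 300 seconds.
    sample = [(i * 300 + (i % 2), "10.0.0.5", "203.0.113.7") for i in range(8)]
    sample += [(i * 37.5, "10.0.0.9", "198.51.100.2") for i in range(4)]
    print(flag_beacon_candidates(sample))
```

In practice the signal-to-noise problem is far harder than this sketch suggests: legitimate software updates and telemetry also beacon on regular schedules, which is exactly why the argument here is that broader intelligence on adversary C2 infrastructure, shared across partners, is needed rather than endpoint heuristics alone.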

Building out a new coalition from the Five Eyes alliance

Thankfully, the United States does not need to imagine a radical solution for this challenge. The US intelligence community already has decades of experience managing a complex foreign intelligence-sharing alliance with multiple countries that routinely collaborate to monitor adversaries of mutual concern.

The “Five Eyes” intelligence sharing partnership among the United States, Australia, Canada, New Zealand, and the United Kingdom was established in the 1940s to surveil the Soviet Union and Eastern Bloc nations. It then expanded to monitor terrorism-related activities after the 9/11 attacks. Just as the original Five Eyes members were driven to confront the autocratic Soviet threat to capitalist democracy, it is easy to imagine how a new cyber-focused alliance of US and Indo-Pacific partners could coalesce to counter Beijing’s manipulation of cyberspace. It is just as easy, in the absence of such a coalition, to imagine China continuing its quest to dominate East Asia and undermine US military efforts to support US regional allies and partners.

Five Eyes is especially adept at sharing intelligence derived from electronic signals and systems used by foreign targets, called signals intelligence. While there are important differences between signals intelligence and cyber threat intelligence, an established intelligence sharing system in the former gives Five Eyes countries a model to work from, since the latter is largely derived from intercepts of digital signals in network traffic that reveal indicators of malicious activities. In addition, it is more effective to build governance measures, such as security protocols, that protect sensitive sources and that uphold shared democratic values, within the structure of a coalition than, say, trying to manage these issues in a series of cumbersome bilateral security arrangements.  

A consequential first step would be for the United States to engage current Five Eyes partners on a strategy to bring more Indo-Pacific intelligence liaison partners into the fold. Highlighting the recent danger posed by Volt Typhoon, the United States and Five Eyes partners could underscore for this expanded group the urgency of working together to find and disrupt similar threats.

Given that Australia is an existing Five Eyes member with clear regional security interests, it would be an ideal partner with the United States to lead engagements with capable and like-minded partners to lay the groundwork for a more expansive cyber intelligence coalition.

Obvious starting points are Japan and South Korea, which already have bilateral agreements with the United States to enhance cyber intelligence sharing. The United States also has long-standing military alliances with the Philippines and Thailand, which could be further developed to include intelligence analysis and collection components focused on Chinese cyber activities. India and the United States have recently committed to partner on sharing information about cyber threats and vulnerabilities as part of their Comprehensive Global and Strategic Partnership. And building upon President Joe Biden’s steps to upgrade US relations with Vietnam and Indonesia to Comprehensive Strategic Partnerships—both of which include elements to improve digital cooperation—the groundwork exists for expansion into more sophisticated cyber intelligence sharing arrangements with partners in Southeast Asia.

Leadership for this new coalition should come from the ODNI, with support from the National Security Agency (NSA), which is the primary US intelligence community element responsible for sharing signals intelligence within the existing Five Eyes alliance. The NSA has all the required authorities, experience, and expertise to operationalize intelligence-informed insights on Chinese cyber threats to assist Indo-Pacific intelligence liaison partners in strengthening their own intelligence sharing mechanisms to contribute to the alliance’s mission. Moreover, these efforts should be carried out in ways that complement and boost, but do not detract from, the ongoing work of the Five Eyes alliance.

Deterring Beijing in cyberspace

The United States must act soon. The revelations about Volt Typhoon are a wake-up call not only about the operations China currently has underway, but also about the far-reaching threat it will continue to pose. China has proven it is willing and able to exploit cyberspace to achieve its objectives, and until the United States and partner nations confront it in places where it operates, it will only become more dangerous.  

In addition to the immediate benefits of disrupting ongoing operations like Volt Typhoon, an expanded multilateral Indo-Pacific cyber threat intelligence alliance might contribute to long-term deterrence strategies. More eyes on this adversary could increase opportunities to disrupt China’s future cyber activities, making them less likely to succeed over time. Increased attribution could also cause the Chinese government reputational harm internationally, in addition to the direct financial costs Beijing would suffer each time it needed to reconstitute C2 upon discovery.

If the United States wants to achieve its strategic vision of an “open, free, global, interoperable, reliable, and secure” internet “that uplifts and empowers people everywhere,” then Washington must commit to pushing back on any efforts to weaponize cyberspace to achieve autocratic or coercive geopolitical objectives. None of these efforts is likely to deter China completely from mounting cyberattacks, of course. But more eyes on malicious Chinese cyber activities targeting critical infrastructure through a comprehensive, coordinated cyber intelligence alliance would make it more difficult and costly for Beijing to continue its current course. Equally valuable, this would send a clear signal to the world that the United States and its regional allies and partners are willing to contest Beijing in cyberspace to secure the enduring freedom of the global digital ecosystem.


Victor Atkins is a nonresident fellow with the Atlantic Council’s Indo-Pacific Security Initiative, where he specializes in cyber intelligence, national security, and industrial cybersecurity issues. He was previously a leader within the Department of Energy’s Cyber Intelligence Directorate, where his teams provided all-source foreign intelligence analytical support to the US energy sector.

The views expressed in this article are the author’s and do not reflect those of the Department of Energy or the US intelligence community.

Hinata-Yamaguchi in SCMP https://www.atlanticcouncil.org/insight-impact/in-the-news/hinata-yamaguchi-in-scmp/ Tue, 13 Feb 2024 16:31:00 +0000 https://www.atlanticcouncil.org/?p=747415 On February 12, IPSI Nonresident Senior Fellow Ryo Hinata-Yamaguchi was quoted in a South China Morning Post article, where he warned that Japan’s allies will be hesitant to share sensitive information if Japan cannot strengthen its cybersecurity measures. 

The post Hinata-Yamaguchi in SCMP appeared first on Atlantic Council.

Atkins in Industrial Cyber https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-in-industrial-cyber/ Mon, 12 Feb 2024 22:08:00 +0000 https://www.atlanticcouncil.org/?p=747194 On February 11, IPSI Nonresident Senior Fellow Victor Atkins was quoted in an Industrial Cyber article, where he discussed key takeaways related to protection of critical infrastructure and operational technology (OT) from recent Congressional hearings on cybersecurity.  

The post Atkins in Industrial Cyber appeared first on Atlantic Council.

The competition for influence in the Americas is now online https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/the-competition-for-influence-in-the-americas-is-now-online/ Mon, 12 Feb 2024 15:00:00 +0000 https://www.atlanticcouncil.org/?p=726580 China is expanding its footprint in Latin America and the Caribbeans’s emerging technology and critical infrastructure arenas, while Russia is engaging in foreign influence operations via the cyber domain. These challenges require a proactive stance by the United States.

The post The competition for influence in the Americas is now online appeared first on Atlantic Council.

The Biden administration identified China and Russia as strategic competitors in its 2022 National Security Strategy, and this rivalry with malign state actors is on full display in the western hemisphere. For decades, the People’s Republic of China (PRC) and Russia have been expanding their influence across the Americas via the diplomatic, informational, military, and economic domains. Now they are engaging in new areas, including emerging technologies, cyberspace, and outer space. These strategic competitors have been supporting autocratic regimes and threatening democracy, prosperity, and security in the region. The Chinese have aggressive investment and commercial projects underway to secure new markets and strategic resources to expand their global Belt and Road Initiative (BRI). Since the Cold War, Russia has challenged US influence in the Americas by sponsoring like-minded regimes in Cuba, Venezuela, and Nicaragua and by fomenting unrest in democratic states. This article will examine PRC efforts to expand its economic footprint in the region in the emerging technologies and critical infrastructure arenas. It will also analyze Russian foreign influence operations in the cyber domain, including disinformation campaigns intended to destabilize democratic governments allied with the United States. To counter growing Chinese and Russian influence in the cyber and emerging technologies domains in the region, the United States must adopt a more proactive stance by doubling down on constructive investment and commercial activities with partner nations, educating the region on US engagements that contribute to economic growth and democracy, and discrediting disinformation campaigns in the Americas.

China’s dominance in emerging technologies in the Americas

China has expanded its economic influence, becoming a key trading partner across Latin America over the past two decades. Since Beijing joined the World Trade Organization, trade in goods between China and the region increased significantly, from $14.6 billion in 2001 to $315 billion in 2020. In the same period, the trade in goods between the United States and Latin America nearly doubled, reaching $758.2 billion from $364.3 billion.1 China has secured natural resources, investment opportunities, and markets for its exports across the region, and now twenty-one of the thirty-one Latin American and Caribbean (LAC) countries are participating in the Belt and Road Initiative. A major focus of Chinese development financing has been infrastructure, with the BRI providing funding for ports, transportation networks, power plants, and telecommunications facilities. China is now aggressively expanding its activities in emerging technologies and critical infrastructure across the region.

While many insist Chinese interests in the Americas are purely economic, the commander of US Southern Command, Gen. Laura J. Richardson, testified before Congress in March 2023 that the PRC now possesses the ability to extract resources, establish ports, and potentially build dual-use space facilities, which if realized would give US Southern Command’s area of responsibility more space facilities than that of any other combatant command. In addition, China is able to manipulate local governments through predatory investment practices.2 The US Southern Command believes that PRC activities have included investments across realms such as infrastructure and technology, as well as malicious activities such as intellectual property theft, all aimed at ensuring long-term CCP access and influence in the political, economic, and security sectors of the western hemisphere.3 More recently, China has expanded its ventures in the telecommunications, cloud computing, and surveillance sectors. In her public remarks, Gen. Richardson repeatedly underscores the security threat posed by the expanding activities of malign state actors like China and Russia in the region.

Huawei, the Chinese technology firm, perhaps best exemplifies how dominant China is becoming in the emerging technology and communications space. Huawei controls a majority of the region’s telecommunications infrastructure and is poised to play a significant role in future technological developments, including 5G and the Internet of Things.4 Unfortunately, there are few competitive options to Huawei for 5G in terms of service and pricing available in Latin America. Huawei is lobbying hard to secure 5G contracts in countries, such as Colombia, and has established cloud computing with data centers in Mexico, Chile, and Brazil.5 Gen. Richardson has expressed concern that 5G deals between the region and China could undermine the information-sharing partnerships that the region holds with the United States.6

Across the region, Huawei consistently offers incentives for companies to utilize Huawei clouds for their core processes and to store their intellectual property. In Panama, Huawei designed a digital free trade zone, consisting of a $38 million project with involvement from nearly one hundred companies in product distribution, as well as cloud computing services.7 According to Strand Consult, a research firm focused on the telecommunications industry, “data centers built and run by Chinese firms, including Huawei, routinely process US internet traffic. Alongside governments of all levels, private companies, including healthcare providers, use Chinese data centers.”8

China has been increasingly active in the surveillance and security sector. Chinese state-linked companies such as Huawei and Hikvision have combined cameras, biometrics, data processing, and other tools to offer “safe city” and “smart city” solutions throughout the region, including in Ecuador and Bolivia.9 Such services have become increasingly attractive as violence and insecurity have been amplified by the economic impact of COVID-19. With few alternative service providers available, Huawei is emerging as the dominant force in emerging technologies and surveillance services across Latin America and the Caribbean. The Heritage Foundation has observed that “Huawei often functions as an extension of the Chinese Communist Party’s security enterprise. If Huawei develops 5G networks in Latin America, China will essentially control the communications, infrastructure, and sensitive technology of the entire region.”10 The United States must recognize this Chinese expansion into critical infrastructure sectors in LAC as a formidable threat to US influence in the region.

Russian disinformation campaigns in the Western Hemisphere

Russia has tried to counter US influence in the region by supporting communist and left-leaning regimes and movements since the Cold War. Moscow has conducted foreign influence operations in the region that have spread disinformation and sown discord, undermining democratic institutions and values. At the 2022 Summit of the Americas, US Secretary of State Antony Blinken warned of rising disinformation across Latin America, especially from China and Russia, and stated that the United States was committed to countering it.11 In recent elections in Brazil, Chile, and Colombia, disinformation propagated by online trolls and fake social media accounts sowed the seeds of doubt in electoral processes. Latin America has one of the highest rates of perceived misinformation risk, at 74.2 percent of internet users.12

Russia has a clear track record of manipulating the information environment, often using influence operations and information warfare tactics that are now further magnified in cyberspace. Russia’s presence in Latin America has only become more evident since the invasion of Ukraine in February 2022. Russia has capitalized on its expertise in the cyber realm by manipulating social media to spur on massive protests in several countries such as Chile and Colombia.

Russia has established a significant media and information footprint throughout the region with Russia Today and Sputnik News. Russia Today’s Spanish-language affiliate, Actualidad RT, has over 3.5 million followers on X (formerly Twitter), and its YouTube channel, now blocked, had over six million subscribers. On Facebook, RT’s Spanish-language page is now more popular than its English-language counterpart, “pushing Russia’s preferred narratives in Latin America, stoking anti-Americanism and praising authoritarian regimes, all under the veil of a supposedly objective platform,” wrote León Krauze in a Washington Post opinion column.13 According to a DisinfoLab analysis, “The majority of RT en Español’s website traffic comes from Venezuela (21.29 percent), Argentina (16.93 percent), Mexico (13.33 percent), and Colombia (5.52 percent).”14 Russia’s media presence in Latin America demonstrates its use of the information instrument of national power to challenge US influence.

Since 2018, Russia has used Moscow-linked social media accounts in an attempt to stir up civil unrest in South American countries that have called for the resignation of Venezuela’s Nicolás Maduro (namely Ecuador, Peru, Bolivia, Colombia, and Chile). Russian bots and trolls were found to have exacerbated the massive protests that broke out in these countries.15 Russian activities sought to increase polarization and decrease confidence in democratic institutions across the region, especially in countries with a pro-US stance in foreign policy, such as Colombia and possibly Chile and Mexico.

As the closest, long-standing ally of the United States in the region, Colombia has been a top target of Russian espionage and disinformation campaigns. Iván Duque’s government, which was in power from 2018 to 2022, confronted Russia over its malign influence in promoting social protests from 2020 to 2022. In 2020, Colombian Vice President Marta Lucía Ramírez blamed Russia and Venezuela for fomenting protests and discord using social media platforms.16 In the past two years, Colombia has experienced sophisticated cyberattacks, of a kind only a few nations could mount, targeting its energy, military, and political sectors, with some attacks traced back to Russian and Venezuelan proxy servers.17

The case of Russian national Sergei Vagin sheds light on how Russia has tried to use asymmetrical warfare to destabilize Colombia. On March 30, 2022, the Colombian National Police and the Attorney General’s Office arrested Vagin on a variety of charges, including aggravated conspiracy to commit a crime and abusive access to computer systems.18 According to the presiding judge, Vagin is accused of financing illicit activities through fraudulent online betting platforms, receiving money through third parties from countries such as Russia and Ukraine. He also has alleged ties with the ELN terrorist group, which is engaged in arms and drug trafficking.19 On April 1, 2022, President Duque voiced support for the Prosecutor’s Office investigation of Vagin for illicit financing and the alleged interference of Russian mafias in Colombian territory, stating that there were indications the money was used to finance protest activities related to the national strike of 2021. According to several intelligence reports, the Prosecutor’s Office was able to establish that Vagin had already participated in earlier marches on November 21, 2020, and March 8, 2022.20 Moreover, “a CIA dossier published by the newspaper El Tiempo states that Sergei Vagin, also known as alias ‘Servac,’ mobilized important sums of money from Russia in order to finance violent actions in the main cities in Colombia; and he had ties with members of the so-called First Line that organized the social protests.”21 The case of Colombia demonstrates how Russia has been exploiting foreign influence operations and disinformation campaigns as a form of asymmetrical warfare against the United States and its democratic allies in LAC. Russia relies on these asymmetrical operations because it does not possess the economic might that China wields to expand its influence across the region.

Measures to counter China and Russia’s expanding influence in emerging technologies and cyberspace in the Americas

In light of the growing influence of China and Russia in the emerging technologies and cyber arenas, the United States must improve its ability to detect, understand, and counter its strategic competitors’ activities in Latin America and the Caribbean. In a 2023 Commanders Series discussion at the Atlantic Council, Gen. Richardson acknowledged the threats to prosperity, security, and democracy posed by the PRC and Russia in the western hemisphere, saying: “The US needs to step up its game in our neighborhood to rival malign state and non-state actors.”22

The United States should increase engagement with partner countries in the region on the political, economic, information, and technology fronts to safeguard democratic institutions, competitive economies, the free flow of accurate information, and the rules-based order that both Russia and China are challenging.

Countering China’s influence

To counter China’s growing dominance in the emerging technology and critical infrastructure sectors in the western hemisphere, the United States should:

  • Deepen economic engagements with partner nations by expanding existing free trade agreements and brokering new ones that can include issues like near-shoring, manufacturing, and the digital economy across the region.
  • Implement global infrastructure initiatives that were included in the Biden administration’s Build Back Better World (B3W), which was brokered with the Group of Seven as a counterweight to China’s BRI, in four areas of focus: climate, health, digital technology, and equality with an emphasis on gender.
  • Identify public- and private-sector opportunities to collaborate with LAC countries in emerging technologies like the cloud, artificial intelligence, and quantum computing to challenge China’s growing monopoly on the critical infrastructure and communications sectors.
  • Implement the 2022 CHIPS and Science Act (Pub. L. No. 117-167), which seeks to strengthen the US semiconductor supply chain by promoting the domestic research and development of advanced technologies, and identify LAC partners who can directly contribute to these efforts.
  • Adopt legislative initiatives, such as the proposed Americas Trade and Investment Act, that seek to “prioritize partnerships in the western hemisphere to improve trade, bring manufacturing back to our shores, and compete with China,” as well as capitalize on the “full economic potential of the United States and Latin America.”23

Countering Russia’s influence

To counter Russia’s expanded influence operations and disinformation campaigns in cyberspace to undermine democracy in the hemisphere, the United States should:

  • Engage in more proactive strategic communications in the region to inform and educate on important US government and private-sector contributions aimed at protecting and enhancing prosperity, security, and democracy in the Americas, and to correct the record on disinformation circulated by Russia and its proxies.
  • Improve US understanding of the content, tactics, techniques, and procedures of Russian disinformation campaigns through our intelligence and law enforcement agencies, and work with partner nations to tailor more timely and effective countermeasures (a minimal, illustrative detection heuristic is sketched after this list).
  • Leverage the State Department Global Engagement Center and its programs to assist LAC countries in countering disinformation.24
  • Share US efforts to counter disinformation with US partner nations through, for example, the Federal Bureau of Investigation’s Foreign Influence Task Force, which could shed light on investigations, operations, and best practices in partnering with private-sector technology companies.
  • Promote media literacy and education to raise awareness of disinformation across Latin America.
  • Encourage social media companies like Meta to identify and remove certain Russian state-affiliated accounts, such as Sputnik and RT en Español, from their platforms to stop the flow of fake news.
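
As a concrete illustration of the tactics, techniques, and procedures analysis recommended above, the minimal sketch below shows one heuristic widely used in open-source disinformation research: flagging clusters of near-identical posts published by many distinct accounts within a short time window. It is a hypothetical, simplified example; the data fields, thresholds, and function names are assumptions for illustration only, not a description of any actual US government or platform system.

```python
# Hypothetical sketch of a coordination heuristic: flag clusters of
# near-identical posts made by many distinct accounts in a short window.
# All field names and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta


def normalize(text: str) -> str:
    """Crude normalization so trivially edited copies still match."""
    return " ".join(text.lower().split())


def flag_coordinated_clusters(posts, min_accounts=5, window_minutes=30):
    """Group posts by normalized text; flag groups where many distinct
    accounts posted the same message within the time window."""
    by_text = defaultdict(list)
    for post in posts:
        by_text[normalize(post["text"])].append(post)

    flagged = []
    for text, group in by_text.items():
        accounts = {p["account"] for p in group}
        if len(accounts) < min_accounts:
            continue
        times = sorted(p["time"] for p in group)
        if times[-1] - times[0] <= timedelta(minutes=window_minutes):
            flagged.append({"text": text, "accounts": sorted(accounts)})
    return flagged


if __name__ == "__main__":
    t0 = datetime(2022, 5, 1, 12, 0)
    sample = [
        {"account": f"user{i}", "time": t0 + timedelta(minutes=i),
         "text": "The strike is the only answer!"}
        for i in range(6)
    ] + [{"account": "user99", "time": t0, "text": "Lovely weather today."}]
    for cluster in flag_coordinated_clusters(sample):
        print(len(cluster["accounts"]), "accounts posted:", cluster["text"])
```

Real coordination analysis layers many more signals, such as account creation dates, shared infrastructure, and network graphs, but even a crude heuristic like this one conveys why timely information sharing with platforms and partner nations matters.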

The threat of strategic competition from China and Russia in the Americas is real and is manifesting itself in new domains, such as emerging technologies and cyberspace. As several recent elections have brought left-leaning governments sympathetic to the PRC and Russia to power in the western hemisphere, the United States must actively invest political, economic, and technological capital in our neighbors to the south to remain the partner of choice for Latin American and Caribbean countries. The stakes are high. China and Russia seek to undermine the rules-based order, democracy, and free market principles in the Americas and challenge US dominance in the region. However, by harnessing American ingenuity and innovation, capital, technology, and democratic values, the United States has significant opportunities to curb and counter the influence of malign state actors like China and Russia in the Americas, and it must seize those opportunities without delay.


About the author

Celina Realuyo is Professor of Practice at the William J. Perry Center for Hemispheric Defense Studies at the National Defense University, where she focuses on US national security, illicit networks, transnational organized crime, counterterrorism, and threat finance issues in the Americas.

The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

The Adrienne Arsht Latin America Center broadens understanding of regional transformations and delivers constructive, results-oriented solutions to inform how the public and private sectors can advance hemispheric prosperity.

1    Sophie Wintgens, “China’s Growing Footprint in Latin America,” fDi Intelligence (a Financial Times unit), March 10, 2023, https://www.fdiintelligence.com/content/feature/chinas-growing-footprint-in-latin-america-82014
2    “2023 Posture Statement to Congress,” Excerpts from Commander’s House Armed Services Committee Testimony, US Southern Command (website), March 8, 2023, https://www.southcom.mil/Media/Special-Coverage/SOUTHCOMs-2023-Posture-Statement-to-Congress/
3    Center for a Secure Free Society, “China Expands Strategic Ports in Latin America,” VRIC Monitor No. 28 (2022), https://www.securefreesociety.org/research/monitor28/; VRIC stands for Venezuela, Russia, Iran, and China.
4    R. Evan Ellis, New Developments in China-Latin America Engagement, Analysis, Peruvian Army Center for Strategic Studies, December 20, 2022, https://ceeep.mil.pe/2022/12/20/nuevos-desarrollos-en-las-relaciones-entre-china-y-america-latina/?lang=en.
5    Dan Swinhoe, “Huawei Planning Second Mexico Data Center, More across Latin America,” Data Center Dynamics, August 26, 2021, https://www.datacenterdynamics.com/en/news/huawei-planning-second-mexico-data-center-more-across-latin-america/.
6    Naveed Jamali and Tom O’Connor, “China Influence Reaches U.S. ‘Red Zone,’” Newsweek, July 25, 2023, https://www.newsweek.com/exclusive-china-influence-has-reached-red-zone-our-homeland-us-general-warns-1814448
7    Hope Wilkinson, “Explainer: B3W vs BRI in Latin America,” Council of the Americas, December 14, 2021, https://www.as-coa.org/articles/explainer-b3w-vs-bri-latin-america.
8    Silvia Elaluf-Calderwood, “Huawei Data Centres and Clouds Already Cover Latin America—Chinese Tech Influence Is a Gift to Countries and Politicians That Don’t Respect Human Rights,” Strand Consult, February 7, 2022,  https://strandconsult.dk/blog/huawei-data-centres-and-clouds-already-cover-latin-america-chinese-tech-influence-is-a-gift-to-countries-and-politicians-that-dont-respect-human-rights/
9    R. Evan Ellis, “Chinese Surveillance Complex Advancing in Latin America,” Newsmax, April 12, 2019, https://www.newsmax.com/evanellis/china-surveillance-latin-america-cameras/2019/04/12/id/911484/.
10    Ana Rosa Quintana, “Latin American Countries Must Not Allow Huawei to Develop Their 5G Networks,” Issue Brief, Heritage Foundation, January 25, 2021, https://www.heritage.org/americas/report/latin-american-countries-must-not-allow-huawei-develop-their-5g-networks
11    Claudia Flores-Saviaga and Deyra Guerrero, “In Latin America, Fact-Checking Organizations Attempt to Counter Russia’s Disinformation,” Power 3.0 (blog), International Forum for Democratic Studies, July 6, 2022,  https://www.power3point0.org/2022/07/06/in-latin-america-fact-checking-organizations-and-cross-regional-collaborations-attempt-to-counter-russias-disinformation/
12    Aleksi Knuutila, Lisa-Maria Neudert, and Philip N. Howard, “Who Is Afraid of Fake News? Modeling Risk Perceptions of Misinformation in 142 Countries,” Harvard Kennedy School, Misinformation Review, April 12, 2022, https://misinforeview.hks.harvard.edu/article/who-is-afraid-of-fake-news-modeling-risk-perceptions-of-misinformation-in-142-countries/
13    León Krauze, “Russia’s Top Propagandist in Latin America Has a Change of Heart,” Washington Post, May 8, 2022, https://www.washingtonpost.com/opinions/2022/05/08/russia-today-propagandist-latin-america-change-of-heart/
14    India Turner, “Why Latin America is Susceptible to Russian War Disinformation,” DisinfoLab, Global Research Institute, September 13, 2022, https://www.disinfolab.net/post/why-latin-america-is-susceptible-to-russian-war-disinformation
15    Center for Strategic & International Studies, “An Enduring Relationship—from Russia, with Love,” Blog, September 24, 2020, https://www.csis.org/blogs/post-soviet-post/enduring-relationship-russia-love
16    Lara Jakes, “As Protests in South America Surged, So Did Russian Trolls on Twitter, U.S. Finds,” New York Times, January 19, 2020, https://www.nytimes.com/2020/01/19/us/politics/south-america-russian-twitter.html
17    Guido L. Torres, “Nonlinear Warfare: Is Russia Waging a Silent War in Latin America?,” Small Wars Journal, January 24, 2022, https://smallwarsjournal.com/jrnl/art/nonlinear-warfare-russia-waging-silent-war-latin-america.
18    Loren Moss, “Alleged Russian Spy Charged . . . with Running a Gambling Mafia,” Finance Colombia, April 12, 2022, https://www.financecolombia.com/alleged-russian-spy-chargedwith-running-a-gambling-mafia/
19    “Las pruebas que comprobarían la participación de ciudadano ruso en actividades ilegales (Evidence Proving the Involvement of Russian Citizen in Illegal Activities),” Noticias RCN, April 1, 2022, https://www.noticiasrcn.com/bogota/pruebas-comprobarian-actuar-de-ciudadano-ruso-en-actividades-ilegales-414593
20    “‘Hay indicios de que financiaban las protestas’: Duque sobre ruso capturado (‘There are Indications that They were Financing the Protests’: Duque on Captured Russian),” Noticias RCN, April 1, 2022, https://www.noticiasrcn.com/colombia/presidente-duque-habla-sobre-injerencia-de-rusos-en-el-paro-nacional-414601.
21    “Sergei Vagin, el ruso capturado por la Fiscalía, aseguró que no tiene nada que ver con el Paro Nacional (Sergei Vagin, the Russian Captured by the Prosecutor’s Office, Assured that He has Nothing to do with the National Strike),” Infobae, March 30, 2022, https://www.infobae.com/america/colombia/2022/03/30/sergei-vagin-el-ruso-capturado-por-la-fiscalia-aseguro-que-no-tiene-nada-que-ver-con-el-paro-nacional/.
22    “A Conversation with Laura J. Richardson on Security across the Americas,” Commander Series, Atlantic Council, January 19, 2023, https://www.atlanticcouncil.org/event/a-conversation-with-general-laura-j-richardson-on-security-across-the-americas/.
23    In January, US Senator Bill Cassidy, MD, and US Representative Maria Elvira Salazar released a discussion draft of the Americas Trade and Investment Act (Americas Act), https://www.cassidy.senate.gov/imo/media/doc/Americas%20Act%20Senator%20Bill%20Cassidy.pdf.
24    The State Department Global Engagement Center’s mission is to direct, lead, synchronize, integrate, and coordinate US government efforts to recognize, understand, expose, and counter foreign state and nonstate propaganda and disinformation efforts aimed at undermining or influencing the policies, security, or stability of the United States, its allies, and partner nations.

The post The competition for influence in the Americas is now online appeared first on Atlantic Council.

]]>
Atkins published in Cyber Defense Magazine https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-published-in-cyber-defense-magazine/ Tue, 06 Feb 2024 19:28:19 +0000 https://www.atlanticcouncil.org/?p=735085 On February 5, IPSI nonresident senior fellow Victor Atkins published a piece in Cyber Defense Magazine titled “Closing the Gap: Safeguarding Critical Infrastructure’s IT and OT Environments.” In this article, Atkins discusses the importance of shoring up informational and operational technology systems’ protections against cyberattacks. 

The post Atkins published in Cyber Defense Magazine appeared first on Atlantic Council.

]]>

If the US and EU don’t set AI standards, China will first, say Gina Raimondo and Margrethe Vestager https://www.atlanticcouncil.org/blogs/new-atlanticist/if-the-us-and-eu-dont-set-ai-standards-china-will-first-say-gina-raimondo-and-margrethe-vestager/ Wed, 31 Jan 2024 16:31:02 +0000 https://www.atlanticcouncil.org/?p=730814 The standardization of technologies is already being dominated by nonmarket and Chinese players, the two officials warned at an AC Front Page event.

The post If the US and EU don’t set AI standards, China will first, say Gina Raimondo and Margrethe Vestager appeared first on Atlantic Council.

]]>

According to US Commerce Secretary Gina Raimondo, the United States and European Union (EU) don’t have a moment to wait in setting standards for the development and use of artificial intelligence (AI). “If the US and EU don’t show up,” she warned, “China will, [and] autocracies will.” 

Raimondo spoke at an Atlantic Council Front Page event on Tuesday alongside European Commission Executive Vice President Margrethe Vestager, who cautioned that the field of standardization in technologies is already being “dominated by nonmarket players or Chinese players.” But “we need to be much more present in standardization for us,” she said. “We need to have a presence.” 

The leaders spoke shortly after the fifth meeting of the EU-US Trade and Technology Council (TTC) in Washington, where officials touched on everything from AI to climate policy to semiconductors. 

In the US-EU relationship, “there are irritants for sure,” Raimondo admitted, “but fundamentally what binds us is massively more consequential than the irritants.” 

Below are more highlights from the conversation, which was moderated by Atlantic Council President and Chief Executive Officer Frederick Kempe. 

EU+US on AI

  • Raimondo argued that the TTC will “prove to be exceedingly valuable” as AI tools continue to evolve. She said that the “muscle” the TTC has “built up”—in generating trust between private-sector stakeholders and the governments leading the United States and EU—will help in “bringing us together to write the rules of the road of AI.” 
  • But while Raimondo said that a “transatlantic approach” to AI could possibly come out of the TTC, she said she doubted whether “joint regulation” is feasible. “It will be some time before the US Congress passes a law that relates to the governing of AI,” the commerce secretary pointed out. “In the absence of that, there’s an awful lot of work to be done.” 
  • The commerce secretary explained that normally when governments develop regulations, each country first goes about writing rules separately before gathering with others to harmonize. “With AI, we can harmonize from the get-go because we haven’t yet written these regulations or rules or standards,” she said. 
  • In response to concerns that governments will always be regulating too slowly to keep up with tech, Vestager said that she thinks “that is just plain wrong” and that it is the responsibility of governments to ensure that technology respects society’s values. 

Keep “eyes wide open” on China 

  • Vestager outlined the EU’s “very complex relationship” with China, given that it is an important partner in fighting climate change—but it is also a systemic rival and an economic competitor. She explained that the EU is working to “derisk [its] dependencies” on countries such as China by getting more countries to screen foreign direct investments and working to prevent countries from skirting export controls. 
  • Regarding China, it is in the “self-interest” of the United States and EU “to work together,” Raimondo argued. “There are real national security concerns for both of us,” she added, “and we have to be eyes wide open about that and work together to protect… our countries.” 
  • Among those national security concerns: “We have to keep our eye on the number of Chinese-made electric vehicles being sold in Europe,” Raimondo said, explaining that sophisticated electric vehicles collect a “huge amount of information” about the driver and their surroundings. “Do we want all that data going to Beijing?” she asked. 

A TTC progress report 

  • Raimondo argued that the TTC has offered the transatlantic partners another opportunity to build trust, collaborate, and share information. The TTC has the added benefit of offering the partners a forum “where we can complain about each other in a constructive manner,” Vestager chuckled. 
  • While steel tariffs and the US Inflation Reduction Act have been examples of what Raimondo called “irritants” in the relationship, she said it is important that the transatlantic partners agree on “the principles and goals” behind regulating trade and tech. She attributed differing US and EU regulation methods to “differences in our systems of government… [and] political realities.” 
  • Beyond the ministerial meetings, Vestager said that the TTC has resulted in tech and trade teams having “gotten to know each other really well,” which she says made it easier to, for example, work together on sanctioning Russia after its full-scale invasion of Ukraine in 2022. “It went so fast with very little sort of bumps [in] the road because people knew each other,” she explained. “I think it’s really important not to underestimate what it means that you know who to call.” 
  • With elections looming on both sides of the Atlantic—and thus the possibility of new leaders who feel differently about US-EU collaboration—Raimondo said that the forum is taking measures to solidify its plans, for example renewing its memoranda of understanding. Raimondo added that with the TTC engaging stakeholders from the private sector, she hopes that there’s demand from both industry and civil society to keep the collaboration going. “There’s much more work to be done,” she said. 

Katherine Walla is an associate director of editorial at the Atlantic Council. 


The post If the US and EU don’t set AI standards, China will first, say Gina Raimondo and Margrethe Vestager appeared first on Atlantic Council.

]]>
International law doesn’t adequately protect undersea cables. That must change. https://www.atlanticcouncil.org/content-series/hybrid-warfare-project/international-law-doesnt-adequately-protect-undersea-cables-that-must-change/ Thu, 25 Jan 2024 15:00:00 +0000 https://www.atlanticcouncil.org/?p=727834 What's missing: A global effort to protect undersea cables in international waters.

The post International law doesn’t adequately protect undersea cables. That must change. appeared first on Atlantic Council.

]]>
Undersea cables are important tools for transmitting sensitive data and supporting international telecommunications, but they are relatively vulnerable. Sensitive data remains safe as long as undersea cables are in good physical condition, but severe sabotage, such as the cutting of cables, could leak data and interrupt vital international communications. Today, when events that damage or cut a cable (including acts of sabotage) happen in international waters, there is no effective regime to hold the perpetrator of a physical attack accountable.

The United States and its allies and partners have come to understand how important it is to secure the world’s undersea cables. But there haven’t yet been enough efforts that incorporate all countries in a protection pact. The reality is that cable cutting could severely impact the lives of citizens in countries across the globe, from Tonga to Norway and far beyond. Thus, intergovernmental organizations such as the United Nations (UN) must take undersea cable security seriously, including by forming internationally recognized and formalized protections.

Risks are growing under the sea

Threats to undersea cables are increasing. For example, Russia is well positioned to conduct malicious attacks on undersea cables with the help of its intelligence ship, Yantar, which was spotted loitering near cable locations in 2019 and 2021. NATO Assistant Secretary General for Intelligence and Security David Cattler expressed particular concern about Russian activity in European waters, following the 2022 invasion of Ukraine. Cattler told reporters in May 2023 that Russia could attack infrastructure such as undersea cables in an attempt to “disrupt Western life and gain leverage over those nations that are providing support to Ukraine.”

For a sense of how interruptive cable cutting could be, look to the African continent and the Matsu Islands. In April 2018, damage to the Africa Coast to Europe cable—which at the time connected twenty-two countries along the western coast of Africa and Europe—caused significant connectivity issues (and in some cases days-long blackouts) for ten countries. Reporters suggested that the damage could have been caused by Sierra Leone, as the country’s government seemed to have imposed other internet blackouts on its citizens around the same time, impacting communications for not just social but also economic and governance matters.

In February 2023, two Chinese vessels severed cables in the East China Sea in two separate incidents, one on February 2 and another on February 8. Although there is no direct evidence that the vessels did so intentionally, local Taiwanese officials said the cuts were part of a pattern of repeated cable breaks that amounts to harassment by China. For nearly two months, the more than thirteen thousand residents of the Taipei-governed Matsu Islands endured an internet outage, encountering great difficulty when conducting business and communicating. For China, understanding how undersea cable cuts can impact Taiwan provides useful insights that can be leveraged in both traditional and hybrid warfare.

These interruptions hit particularly hard when countries don’t have many connection points. For example, while Saudi Arabia has sixteen cable connections, the Matsu Islands only have two connections. Norway’s Svalbard archipelago similarly only has two connections, while Tonga only has one. The impact of a severe cable cut also depends on a country’s ability to fix damaged or degraded cables. It took Taiwan over a month to repair cables stretching to the Matsu Islands. For Tonga, whose cable was damaged by a volcanic eruption in 2022, it took ten days for a cable repair ship stationed in Papua New Guinea to even reach the island before beginning repairs, which then took several weeks.

Clusters of countries have begun to acknowledge the increasing threats to undersea cables. For example, in 2019, Japan outlined the Data Free Flow with Trust (DFFT) concept that promotes the free flow of data and the protection of individual privacy, national security, and intellectual property by connecting undersea cables only with allies and partner nations. At a May 2023 summit in Hiroshima, the Group of Seven (G7) endorsed the creation of the Institutional Arrangement for Partnership, which puts DFFT into action. The G7 also issued a communiqué (albeit more of a political consensus than any sort of treaty) with a section committing to collaborate more on undersea cable security.

Should the G7 countries follow through on their commitment—for example, by investing in an undersea cable project together—they could affect geopolitics in the undersea cable world and highlight to political and business leaders how necessary it is to keep countries connected through cables.

The G7’s progress and NATO’s recent establishment of a London-based center on protecting undersea cables are examples of how the United States prefers to share cables with likeminded countries. These efforts also demonstrate how democratic states are joining together in smaller consortia to invest in establishing and securing undersea communication cables.

Democratic states are also investing in undersea cables as a way to spread the free flow of data. In June 2023, the East Micronesia Cable project to connect several islands in Oceania began, funded by Australia, Japan, and the United States—with the understanding that connectivity is vital to economic development and, in this case, a means to counter Chinese influence in the region. The project was slow to start, as it faced a stalemate after China’s HMN Technologies submitted a tempting bid to build the cable, and the United States warned the Pacific islands about the risks associated with the participation of a Chinese company. Soon after, all bids were deemed noncompliant and removed from consideration, a challenge to China’s increasing control of digital traffic in Oceania.

China’s influence in the undersea cable world has grown immensely in recent years. In 2019, China owned, supplied, or was a landing point for over 11 percent of the world’s undersea cables, and it is aiming to grow this proportion to 20 percent by 2030. US warnings about Chinese cable companies demonstrate how Washington, with its allies and partners, is working to counter Chinese influence in supplying undersea cables in the Pacific.

A global deterrence plan

The world’s information is in serious danger, as perpetrators could resort to malicious attacks not only to interrupt connectivity but also to tap into the cables and eavesdrop. When undersea cables are cut or damaged, the laws that determine who is responsible for sabotage vary depending on where the cables are laid. For example, a coastal state has sovereign rights in its territorial sea, according to Article 21 of the UN Convention on the Law of the Sea (UNCLOS). In addition, a coastal state may exercise its rights to repair and maintain undersea cables in its exclusive economic zone, according to UNCLOS Article 58.

However, in regard to cables that are sabotaged in international waters, there is currently no effective regime to hold the perpetrator of damage responsible. If cables are willfully or accidentally damaged by a ship or person, the jurisdiction to determine an appropriate punishment for the perpetrator lies with the state under whose flag the ship operates or that of the person’s citizenship. Because this places the onus on the perpetrator’s state, not the state that owns the cable, there is no effective regime to ensure that the responsible party is held accountable directly.

It is time for an intergovernmental organization such as the UN or its International Telecommunication Union (ITU) to take undersea cable security seriously and establish internationally recognized protocols under a formalized protection plan that deters actions against undersea cables and prioritizes the security of digital communications.

Such a protection plan should give jurisdiction to the cable owner’s state. Under such a plan, the fact that the cable owner’s state could take the perpetrator’s state to court might make intentional saboteurs think twice, creating a deterrent effect, especially if fines or remediation costs are significant. It should also take into account nonstate actors, such as armed groups or large multinational companies, that could interfere with the cables. UNCLOS, as a traditional treaty between states, does not hold nonstate actors responsible, even in a scenario in which a terrorist group inflicted damage.

The first-rate technology required to cut undersea cables at depth is immensely expensive and not typically affordable for nonstate actors or militia groups, or even for many states. Only a handful of countries operate deep-diving vessels capable of working on seabed cables: China, for example, owns the submersible Jiaolong, and Russia owns the special-purpose submarine Losharik. However, countries often rely on companies to manufacture and lay cables, and there are concerns that untrustworthy companies maintaining undersea cables could become involved in disrupting the data inside the cables, for example by spying or stealing information.

However, if the ITU is to be the origin of such a regime, it must look inward and address what some democratic countries would call a major controversy: China’s increasing influence in the UN body. From 2015 to 2022, Chinese engineer Houlin Zhao served as the ITU’s secretary-general, and during that time he championed China’s Digital Silk Road vision and notably increased Chinese employment at the ITU. He seemed to forget his position as a neutral international civil servant, acting more like a Chinese diplomat.

During Zhao’s term, Huawei and the Chinese government introduced their “New IP” proposal to the ITU, which quickly became controversial for sacrificing the privacy of individuals and making state control and monitoring of digital communications easier. Though the proposal has not yet been formally debated, it was backed by two authoritarian governments (China and Russia) and opposed by the United States, Sweden, the United Kingdom, and several other democratic nations.

While Zhao was replaced by an American, Doreen Bogdan-Martin, China has been sending more individuals than other states to various study groups at the ITU. It is also one of the top contributors to the ITU’s annual budget, providing about $7.5 million in 2023. It is clear that China recognizes the importance of, and influence to be had in, the digital space through undersea cables, and its attempts to influence the management of this global infrastructure should not go uncountered.

In the UN, increasing factionalization could make finding common ground for a new regime difficult but not impossible. Countries would need to agree that managing undersea cables together is important. Similar agreement has been reached on the need for nuclear protocols and for deconfliction in space operations—areas where states are generally more willing to share information, despite counterintelligence concerns.

From a hybrid warfare perspective, sabotaging or destroying undersea cables can be a powerful tool for adversaries. As countries come to rely more on digital communications and infrastructure, a sudden or unexpected blackout can increase social angst and foster political instability. World leaders must focus on establishing a working international regime that governs how the world responds to undersea cable sabotage, to deter those who may see an opportunity in attacking the system. That regime should protect the data in undersea cables in a way that enhances individual privacy, not one that expands government control of the internet.

The world’s interconnectivity provides for the movement of tremendous wealth, improved access to information, and international relationships that would have been impossible only fifty years ago. With huge benefits come huge risks, and for undersea cables, those risks include significant vulnerabilities that global leaders must take seriously. They must build better protections now, before nefarious actors come to view undersea cables as a viable target.


Amy Paik is an associate research fellow at the Korea Institute for Defense Analyses (KIDA). She has been with the Center for Security and Strategy at KIDA since 2013. She is also a visiting scholar at the Reischauer Center for East Asian Studies at Johns Hopkins University School of Advanced International Studies.

Jennifer Counter is a nonresident senior fellow in the Scowcroft Center for Strategy and Security’s Forward Defense Program. She is a member of the Gray Zone Task Force focusing on influence, intelligence, and covert action.

This piece is based on a doctoral dissertation, written by Paik, entitled “Building an International Regulatory Regime in Submarine Cables and Global Marine Communications.”

The post International law doesn’t adequately protect undersea cables. That must change. appeared first on Atlantic Council.

]]>
Makanju quoted in TIME Magazine on cybersecurity capabilities https://www.atlanticcouncil.org/insight-impact/in-the-news/makanju-quoted-in-time-magazine-on-cybersecurity-capabilites/ Wed, 17 Jan 2024 20:04:00 +0000 https://www.atlanticcouncil.org/?p=740767 On January 17, Transatlantic Security Initiative nonresident senior fellow Anna Makanju was mentioned in an article in TIME Magazine discussing OpenAI’s work providing cybersecurity capabilities to the Pentagon.

The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

The post Makanju quoted in TIME Magazine on cybersecurity capabilities appeared first on Atlantic Council.

]]>
The sentencing of a US Navy sailor is a window into Chinese espionage. Here’s how the US should respond. https://www.atlanticcouncil.org/blogs/new-atlanticist/the-sentencing-of-a-us-navy-sailor-is-a-window-into-chinese-espionage-heres-how-the-us-should-respond/ Sat, 13 Jan 2024 17:09:30 +0000 https://www.atlanticcouncil.org/?p=724859 China’s intelligence services recognize that national security information does not have to be classified to provide them with value.

The post The sentencing of a US Navy sailor is a window into Chinese espionage. Here’s how the US should respond. appeared first on Atlantic Council.

]]>
The United States and its allies and partners are under constant threat from pervasive efforts by China to collect intelligence, though this rarely makes it into the public eye. This week provided a clear reminder of this threat. On January 8, US Navy sailor Wenheng Zhao, who pled guilty in October 2023 in the Central District of California to one count of conspiring with a foreign intelligence officer and one count of receiving a bribe, was sentenced to twenty-seven months in prison and ordered to pay a $5,500 fine.

Zhao was one of two active duty US servicemembers indicted in August 2023 for providing sensitive US military information to China. The second, Jinchao Wei, was indicted for violating an espionage statute and multiple export violations in the Southern District of California. According to the indictment, he was granted US citizenship while the alleged illegal activities were taking place. (Wei is, of course, presumed innocent until proven guilty in a court of law.)

These two cases are playing out as tensions remain high between the United States and China, even after the November 2023 meeting between US President Joe Biden and Chinese leader Xi Jinping. In response to these court cases, there will be an understandable temptation for the United States to react by doing something to address Chinese espionage, and perhaps even pressure for systemic changes to the US counterintelligence approach. But big, sudden changes often create new and potentially greater vulnerabilities. Instead, policymakers should respond carefully and deliberately by seizing this moment to manage counterintelligence and security risks more effectively over the long term.

This can be done by decreasing the probability of future similar events from occurring, while avoiding creating new risks. Specifically, the response should consider focusing on prevention via training, enhanced information-sharing with allies and partners, and a shift to a more holistic risk-based personnel security approach for all US military members.

Intelligence collection doesn’t always mean stealing classified secrets

These two cases suggest that China’s intelligence services recognize that national security information does not have to be classified to provide them with value.  

Although both Zhao and Wei reportedly had secret-level security clearances, they were not assigned to particularly sensitive military occupational specialties, and there are no indications within the indictments that they passed classified information to Beijing’s intelligence services.

Wei was assigned to the USS Essex amphibious assault ship, which operates as a “Lightning carrier,” a platform for fifth-generation F-35B Lightning II strike aircraft. He allegedly used his phone to take photos that he provided to China’s intelligence services, while also providing information regarding potential vulnerabilities of the Wasp class of US Navy amphibious assault ships.

Zhao reportedly provided Chinese intelligence with information regarding the electrical system for a storage facility at a base in Okinawa housing a Ground/Air Task-Oriented Radar system. This radar system is used for expeditionary warfare that supports Marines in a contested or potentially contested maritime area—the kind of warfare that would matter in a conflict in the Western Pacific.

Given China’s resources, these were low-cost operations relative to the information allegedly received, and they offered a high return on investment for enhancing Beijing’s hard power. As compensation for their alleged activities, Wei reportedly received between $10,000 and $15,000, while Zhao received the equivalent of almost $15,000.

Three new steps to bolster counterintelligence and security

While these cases shed light on national security risks for the United States and its allies and partners, they also present the opportunity to justify new options for Washington to respond. That response should not, for example, be to limit the opportunities for foreign nationals to serve honorably in the US military or take measures that could damage recruitment and retention. Rather, it should take careful, measured steps to reinforce the foundations of counterintelligence and security. There are three steps policymakers should take next:

1. Focus more on prevention relative to treatment

In the medical community, doctors think of solutions in terms of prevention and treatment. For national security, the United States must do both, but in this instance, prevention—via training—should be the focus.

Specifically, the Department of Defense should enhance its counterintelligence threat awareness and reporting training program. This can be done by increasing the frequency of the training, presenting the information in different ways, and requiring a signed acknowledgement of responsibility from the training recipient. Such prevention measures would require additional resources for the Department of Defense counterintelligence and security system, but the investment would be worth it, since enhanced training requirements would decrease risk and potential costs overall.

2. Mobilize allies and partners to work together on counterintelligence

While protecting the integrity of the criminal justice process, the United States should consider sharing as much information as possible with its allies and partners, particularly those in the Indo-Pacific, about the methods that China’s intelligence services use to conduct their operations, since those countries are likely being targeted using similar methods.

Specifically, the US counterintelligence community should host periodic events with its allies and partners to exchange information regarding how Beijing’s intelligence services target military members. This will help educate their military personnel regarding the evolving threat, including the types of cover used to approach potential targets. In the case of Zhao, the Chinese intelligence officer reportedly portrayed himself to Zhao as a maritime economic researcher, who needed information in order to “inform investment decisions.”

3. Establish a more holistic approach to personnel security that better integrates counterintelligence

Finally, the Department of Defense should consider enhancing the current security clearance-based system with a more holistic, risk-based personnel security approach. This would include those US military members who do not require access to classified information.

How might this work? There are various policies and systems already in place for personnel security and information security, especially for individuals who hold top secret security clearances and those who work in sensitive compartmented information facilities (SCIFs). Those important safeguards for security clearance holders should remain, but there are currently disconnects between security considerations (Do the duties of a position require working with sensitive information?) and counterintelligence findings (What information might China or other countries want?). The goal, then, should be to more closely integrate security and counterintelligence. Such an approach would fuse counterintelligence information regarding the evolving capabilities and intentions of foreign intelligence services with information about the duties of the position.
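
To make the fusion of security and counterintelligence considerations more concrete, the minimal sketch below shows one way a holistic risk score for a position could be composed. It is purely notional: the information categories, weights, and the idea of a “vetting discount” for cleared positions are assumptions introduced for illustration, not an existing Department of Defense model.

```python
# Purely notional sketch of the fusion described above: scoring a position's
# counterintelligence risk by combining what its duties expose (even if
# unclassified) with what foreign intelligence services are assessed to want.
# Every category, weight, and threshold is a hypothetical assumption.
from dataclasses import dataclass

# Hypothetical weights: assessed foreign-intelligence interest per category.
ADVERSARY_INTEREST = {
    "ship_systems": 0.9,
    "radar_and_sensors": 0.8,
    "base_infrastructure": 0.6,
    "routine_admin": 0.1,
}


@dataclass
class Position:
    title: str
    clearance_required: bool
    information_exposure: dict  # category -> degree of access, 0.0 to 1.0


def risk_score(position: Position) -> float:
    """Blend what the position can see with what adversaries want."""
    exposure_risk = sum(
        access * ADVERSARY_INTEREST.get(category, 0.0)
        for category, access in position.information_exposure.items()
    )
    # Clearance holders already receive extra vetting, so apply a modest
    # discount; exposure to revealing but unclassified information still
    # drives the score even where no clearance is involved.
    vetting_discount = 0.2 if position.clearance_required else 0.0
    return max(exposure_risk - vetting_discount, 0.0)


if __name__ == "__main__":
    technician = Position(
        title="Facility electrical technician",
        clearance_required=False,
        information_exposure={"base_infrastructure": 0.8, "radar_and_sensors": 0.4},
    )
    print(f"{technician.title}: risk {risk_score(technician):.2f}")
```

Even a simple model like this makes the underlying point visible: positions whose duties expose operationally revealing but unclassified information can carry counterintelligence risk that a clearance-centric system underweights.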

The risks of national security information being provided to foreign intelligence services have always existed and can never be eliminated, so the objective should be to optimally manage those risks. This could best be accomplished by investing in training, increasing sharing with allies and partners, and shifting to a more holistic risk-based personnel security approach for all US military members. 

Given the long-term and dynamic challenges of US-China strategic competition, now is the time to adapt US counterintelligence and security policy to effectively meet those challenges posed by China’s intelligence collection efforts.


Andrew Brown is a nonresident fellow with the Atlantic Council’s Indo-Pacific Security Initiative, where he specializes in defense and intelligence issues. He was previously a criminal investigator with the Department of Defense and was assigned to the Office of the Director of National Intelligence (ODNI).

The views expressed in this article are the author’s and do not reflect those of the Department of Defense or ODNI.

The post The sentencing of a US Navy sailor is a window into Chinese espionage. Here’s how the US should respond. appeared first on Atlantic Council.

]]>
Global China Hub Nonresident Fellow Dakota Cary spoke to CNN https://www.atlanticcouncil.org/insight-impact/in-the-news/global-china-hub-nonresident-fellow-dakota-cary-spoke-to-cnn/ Fri, 12 Jan 2024 19:51:00 +0000 https://www.atlanticcouncil.org/?p=725971 On January 12, GCH Nonresident Fellow Dakota Cary spoke to CNN on how the Chinese government relies on the private sector to help its cybersecurity capacities.

The post Global China Hub Nonresident Fellow Dakota Cary spoke to CNN appeared first on Atlantic Council.

]]>

Global China Hub Nonresident Fellow Dakota Cary Featured on Click Here https://www.atlanticcouncil.org/insight-impact/in-the-news/global-china-hub-nonresident-fellow-dakota-cary-on-click-here/ Thu, 11 Jan 2024 15:47:34 +0000 https://www.atlanticcouncil.org/?p=723872 On January 10, GCH Nonresident Fellow Dakota Cary was brought on to Click Here to discuss his report, “Sleight of hand: How China weaponizes software vulnerabilities,” which explains how Chinese software vulnerability laws require Chinese businesses to report coding flaws to a government agency, which in turn shares this information with state-sponsored hacking groups.

The post Global China Hub Nonresident Fellow Dakota Cary Featured on Click Here appeared first on Atlantic Council.

]]>

Global China Hub Nonresident Fellow Dakota Cary quoted in CNN https://www.atlanticcouncil.org/insight-impact/in-the-news/global-china-hub-nonresident-fellow-dakota-cary-quoted-in-cnn/ Wed, 10 Jan 2024 19:52:00 +0000 https://www.atlanticcouncil.org/?p=725969 On January 10, GCH Nonresident Fellow Dakota Cary was quoted in CNN on China’s surveillance capabilities.

The post Global China Hub Nonresident Fellow Dakota Cary quoted in CNN appeared first on Atlantic Council.

]]>

Ukraine is on the front lines of global cyber security https://www.atlanticcouncil.org/blogs/ukrainealert/ukraine-is-on-the-front-lines-of-global-cyber-security/ Tue, 09 Jan 2024 21:37:52 +0000 https://www.atlanticcouncil.org/?p=722954 Ukraine is currently on the front lines of global cyber security and the primary target for groundbreaking new Russian cyber attacks, writes Joshua Stein.

The post Ukraine is on the front lines of global cyber security appeared first on Atlantic Council.

]]>
There is no clear dividing line between “cyber warfare” and “cyber crime.” This is particularly true with regard to alleged acts of cyber aggression originating from Russia. The recent suspected Russian cyber attack on Ukrainian mobile operator Kyivstar is a reminder of the potential dangers posed by cyber operations to infrastructure, governments, and private companies around the world.

Russian cyber activities are widely viewed as something akin to a public-private partnership. These activities are thought to include official government actors who commit cyber attacks and unofficial private hacker networks that are almost certainly (though unofficially) sanctioned, directed, and protected by the Russian authorities.

The most significant government actor in Russia’s cyber operations is reportedly Military Unit 74455, more commonly called Sandworm. This unit has been accused of engaging in cyber attacks since at least 2014. The recent attack on Ukraine’s telecommunications infrastructure was probably carried out by actors affiliated with Sandworm, though such relationships are intentionally hard to pin down.

Attributing cyber attacks is notoriously difficult; they are designed that way. In some cases, like the attacks on Ukraine’s electrical and cellular infrastructure, attribution is a matter of common sense. In other cases, if there is enough information, security firms and governments can trace attacks to specific sources.

Much of Russian cyber crime occurs through private hacker groups. Russia is accused of protecting criminals who act in the interests of the state. One notable case is that of alleged hacker Maksim Yakubets, who has been accused of targeting bank accounts around the world but remains at large in Russia despite facing charges from the US and UK.

The Kremlin’s preferred public-private partnership model has helped make Russia a major hub for aggressive cyber attacks and cyber crime. Private hacker networks receive protection, while military hacking projects are often able to disguise their activities by operating alongside private attacks, which provide the Kremlin with a degree of plausible deniability.

More than ten years ago, Thomas Rid predicted that “cyber war will not take place.” Cyber attacks are not a battlefield; they are a race for digital resources, including access to and control of sensitive devices and accounts. This race has been ongoing for well over a decade.

Part of the reason the US and other NATO allies should be concerned about and invested in the war in Ukraine is that today’s cyber attacks are having an impact on cyber security that is being felt far beyond Ukraine. As Russia mounts further attacks against Ukrainian targets, it is also expanding its resources in the wider global cyber race.

Andy Greenberg’s book Sandworm documents a range of alleged Russian attacks stretching back a number of years and shows that Sandworm’s alleged operations have not been limited to cyber attacks against Ukraine. The United States indicted six GRU operatives associated with Sandworm for their roles in a series of attacks, including attempts to compromise the website of the Georgian Parliament. Cyber security experts are also reasonably sure that the global NotPetya attack of 2017 was perpetrated by Sandworm.

The NotPetya attack initially targeted Ukraine and looked superficially like a ransomware operation. In such instances, the victim is normally prompted to send cryptocurrency to an account in order to unlock the targeted device and files. This is a common form of cyber crime. The NotPetya attack also occurred after a major spree of ransomware attacks, so many companies were prepared to make payouts. But it soon became apparent that NotPetya was not ransomware. It was not meant to be profit-generating; it was destructive.

The NotPetya malware rapidly spread throughout the US and Europe. It disrupted global commerce when it hit shipping giant Maersk and India’s Jawaharlal Nehru Port. It hit major American companies including Merck and Mondelez. The commonly cited estimate for total economic damage caused by NotPetya is $10 billion, but even this figure does not capture the far greater potential it exposed for global chaos.

Ukraine is currently on the front lines of global cyber security and the primary target for groundbreaking new cyber attacks. While identifying the exact sources of these attacks is necessarily difficult, few doubt that what we are witnessing is the cyber dimension of Russia’s ongoing invasion of Ukraine.

Looking ahead, these attacks are unlikely to stay in Ukraine. On the contrary, the same cyber weapons being honed in Russia’s war against Ukraine may be deployed against other countries throughout the West. This makes it all the more important for Western cyber security experts to expand cooperation with Ukraine.

Joshua Stein is a researcher with a PhD from the University of Calgary.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


The post Ukraine is on the front lines of global cyber security appeared first on Atlantic Council.

]]>
Ukrainian telecoms hack highlights cyber dangers of Russia’s invasion https://www.atlanticcouncil.org/blogs/ukrainealert/ukrainian-telecoms-hack-highlights-cyber-dangers-of-russias-invasion/ Thu, 21 Dec 2023 00:09:09 +0000 https://www.atlanticcouncil.org/?p=718878 An unprecedented December 12 cyber attack on Ukraine's largest telecoms operator Kyivstar left tens of millions of Ukrainians without mobile services and underlined the cyber warfare potential of Russia's ongoing invasion, writes Mercedes Sapuppo.

The post Ukrainian telecoms hack highlights cyber dangers of Russia’s invasion appeared first on Atlantic Council.

]]>
A recent cyber attack on Ukraine’s largest telecommunications provider, Kyivstar, caused temporary chaos among subscribers and thrust the cyber front of Russia’s ongoing invasion back into the spotlight. Kyivstar CEO Oleksandr Komarov described the December 12 hack as “the biggest cyber attack on telco infrastructure in the world,” underlining the scale of the incident.

This was not the first cyber attack targeting Kyivstar since Russia launched its full-scale invasion in February 2022. The telecommunications company claims to have repelled around 500 attacks over the past twenty-one months. However, this latest incident was by far the most significant.

Kyivstar currently serves roughly 24 million Ukrainian mobile subscribers and another million home internet customers. This huge client base was temporarily cut off by the attack, which also had a knock-on impact on a range of businesses, including banks. For example, around 30 percent of PrivatBank’s cashless terminals ceased functioning during the attack. Ukraine’s air raid warning system was similarly disrupted, with alarms failing in several cities.

Kyivstar CEO Komarov told Bloomberg that the probability Russian entities were behind the attack was “close to 100%.” While definitive evidence has not yet emerged, a group called Solntsepyok claimed responsibility for the attack, posting screenshots that purportedly showed the hackers breaching Kyivstar’s digital infrastructure. Ukraine’s state cyber security agency, known by the acronym SSSCIP, has identified Solntsepyok as a front for Russia’s GRU military intelligence agency.

The details of the attack are still being investigated, but initial findings indicate that hackers were able to breach Kyivstar’s security via an employee account at the telecommunications company. This highlights the human factor in cyber security, which on this occasion appears to have enabled what Britain’s Ministry of Defence termed “one of the highest-impact disruptive cyber attacks on Ukrainian networks since the start of Russia’s full-scale invasion.”

This latest cyber attack is a reminder of the threat posed by Russia in cyberspace. Ever since a landmark 2007 cyber attack on Estonia, Russia has been recognized as one of the world’s leading pioneers in the field of cyber warfare. The Kremlin has been accused of using both state security agencies and non-state actors in its cyber operations in order to create ambiguity and a degree of plausible deniability.

While cyber attacks have been a feature of Russian aggression against Ukraine since hostilities first began in 2014, the cyber front of the confrontation has been comparatively quiet following the launch of the full-scale invasion almost two years ago. Some experts are now warning that the recent attack on the Kyivstar network may signal an intensification of Russian cyber activities, and are predicting increased cyber attacks on key infrastructure targets in the coming months as the Kremlin seeks to make the winter season as uncomfortable as possible for Ukraine’s civilian population.

Ukraine’s cyber defense capabilities were already rated as robust before Russia’s full-scale invasion. These capabilities have improved considerably since February 2022, not least thanks to a rapid expansion in international cooperation between Ukraine and leading global tech companies. “Ukraine’s cyber defense offers an innovative template for other countries’ security efforts against a dangerous enemy,” the Financial Times reported in July 2023. “Constant vigilance has been paired with unprecedented partnerships with US and European private sector groups, from Microsoft and Cisco’s Talos to smaller firms like Dragos, which take on contracts to protect Ukraine in order to gain a close-up view of Russian cyber tradecraft. Amazon Web Services has sent in suitcase-sized back-up drives. Cloudflare has provided its protective service, Project Galileo. Google Project Shield has helped fend off cyber intrusions.”

As Ukraine’s cyber defenses grow more sophisticated, Russia is also constantly innovating. Ukrainian cyber security officials recently reported the use of new and more complex malware to target state, private sector, and financial institutions. Accelerating digitalization trends evident throughout Ukrainian society in recent years leave the country highly vulnerable to further cyber attacks.

There are also some indications that Ukrainian cyber security bodies may require reform. In November 2023, two senior officials were dismissed from leadership positions at the SSSCIP amid a probe into alleged embezzlement at the agency. Suggestions of corruption within Ukraine’s cyber security infrastructure are particularly damaging at a time when Kyiv needs to convince the international community that it remains a reliable partner in the fight against Russian cyber warfare.

The Kyivstar attack is a reminder that the Russian invasion of Ukraine is not only a matter of tanks, missiles, and occupying armies. In the immediate aftermath of the recent attack on the country’s telecommunications network, Ukrainian Nobel Peace Prize winner and human rights activist Oleksandra Matviichuk posted that the incident was “a good illustration of how much we all depend on the internet, and how easy it is to destroy this whole system.” Few would bet against further such attacks in the coming months.

Mercedes Sapuppo is a program assistant at the Atlantic Council’s Eurasia Center.


Kroenig on Fox News podcast discussing cyber intrusions by China https://www.atlanticcouncil.org/insight-impact/in-the-news/kroenig-on-fox-news-podcast-discussing-cyber-intrusions-by-china/ Wed, 13 Dec 2023 18:32:18 +0000 https://www.atlanticcouncil.org/?p=715903 On December 13, Matthew Kroenig, Atlantic Council vice president and Scowcroft Center senior director, was interviewed by Fox News Rundown on how China could use its cyber intrusions into private sector entities to interfere with US efforts to protect Taiwan.



I think [these cyber intrusions] really [are] about the strategic competition and really about China preparing for war.

Matthew Kroenig


The 5×5—2023: The cybersecurity year in review https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-2023-the-cybersecurity-year-in-review/ Wed, 13 Dec 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=714286 A group of Atlantic Council fellows review the past year in cybersecurity, which organizations and initiatives made positive steps, and areas for improvement going forward. 

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

It has been a busy year in cybersecurity and in the land of policy. On March 2, 2023, the Biden administration released its long-awaited National Cybersecurity Strategy, laying out an ambitious plan to maintain the United States’ advantage in cyberspace and boost the security and resilience of critical technical systems across the economy and society. The document was followed by its Implementation Plan and the National Cyber Workforce and Education Strategy later that summer.

This year saw other noteworthy developments, including cybersecurity failures that resulted in major hacks of organizations ranging from T-Mobile and 23andMe to critical infrastructure in Guam and the Ukrainian military amidst its war with Russia. There has been no shortage of topics to discuss in 2023, so we brought together a group of Atlantic Council fellows to review the past year in cybersecurity, which organizations and initiatives made positive steps, and areas for improvement going forward.

A note from the editors: The 5×5’s founder and inaugural editor, Simon Handler, is moving on to new adventures, but it bears a note of thanks to Simon for his wit and work ethic in taking this series from an idea through to forty-two issues over the last four years. The series continues, but meanwhile thank you, Simon, and good luck.

#1 What organization, public or private, had the greatest impact on cybersecurity in 2023? 

Amélie Koran, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“Progress Software, the makers of the MOVEit file transfer service, which has been the gift that has kept on giving when it comes to notable breaches this year. It has impacted private and public sector organizations and over sixty million individuals around the world, with more than 80 percent of the impacted organizations based in the United States. There was rarely a cybersecurity-adjacent news story in 2023 that did not have a component tied to this software.”

John Speed Meyers, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; principal research scientist, Chainguard

“Since there is not, to a first approximation, a scale on which cybersecurity has been or is measured, it is hard for me to say anything objective. That said, assuming the scale extends below zero, I would like to vote for C and C++ software developers.” 

Justin Sherman, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; founder and chief executive officer, Global Cyber Strategies

“There are, in some ways, too many to pick from—both good and bad. On the positive side in 2023, the United Kingdom’s National Cyber Security Centre continues to roll out voluntary, systemic internet security protections for British networks and organizations, most recently offering its free Domain Name System (DNS) security service to schools. Such decisions exemplify the concept of security at scale, identifying the points with the greatest ‘leverage’ to improve security, something with which US policy still struggles. On the side of undermining US cybersecurity, the Chinese government’s expanded efforts to require companies to disclose software vulnerabilities to the state increase a number of hacking risks to the United States and plenty of other countries.”
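
For readers unfamiliar with protective DNS services of the kind mentioned above, the rough sketch below shows the core idea at the resolver level: check every requested domain against a threat-intelligence deny list before resolving it. The deny-list entries and domain names are placeholders, and real services apply this filtering inside the recursive resolver rather than on the client.

```python
import socket

# Hypothetical deny list; a real protective DNS service maintains
# threat-intelligence feeds of known-malicious domains on the resolver side.
BLOCKED_DOMAINS = {"malicious-example.test", "phishing-example.test"}

def protected_resolve(hostname: str):
    """Resolve a hostname only if it is not on the deny list.

    Returns an IP address string, or None if the lookup was blocked or failed.
    """
    if hostname.lower().rstrip(".") in BLOCKED_DOMAINS:
        print(f"BLOCKED: {hostname} matches the deny list")
        return None
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        print(f"Could not resolve {hostname}")
        return None

if __name__ == "__main__":
    for name in ["example.com", "malicious-example.test"]:
        print(name, "->", protected_resolve(name))
```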

Maggie Smith, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; director, Cyber Project, Irregular Warfare Initiative

“I think everyone’s mind immediately goes to Microsoft and its ongoing efforts to assist Ukraine. But I think the company’s impact on cybersecurity goes beyond the all-consuming narrative around the role of the private sector before, during, and in the aftermath of conflict. In September, I read a great post by Cynthia Brumfield on the <Meta>curity Substack (I highly recommend subscribing to its ‘Best Infosec-Related Long Reads for the Week’) about the technical blunders made by Microsoft that gave Chinese actors access to US government emails. For me, it tied a bow around how I feel about how to approach cybersecurity: there is no silver bullet, and no one is ever truly secure. China’s hack highlighted how a company that is literally helping prevent catastrophic cyberattacks can simultaneously be the victim of one. This is a dichotomy inherent to the domain of cyberspace and the impact of seeing it so publicly with Microsoft was my 2023 cybersecurity ‘woah’ moment.” 

Bobbie Stempfley, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; vice president and business unit security officer, Dell Technologies

“It is hard not to say that the Securities and Exchange Commission (SEC) has had the greatest impact on cybersecurity, given how active it has been in this space. That being said, the National Institute of Standards and Technology’s publication of post-quantum encryption standards for three of its four selected algorithms, and its intention to evaluate the next wave of algorithms, also has great impact on national security.”

#2 What was the most impactful cyber policy or initiative of 2023? 

Koran: “I would say that the US National Cybersecurity Strategy would count in this category because it was released, debated, and followed with an implementation plan. Getting any policy or directive out of the government and through the gauntlet of reviews, markup, critique, and public consumption is to be lauded. Is it perfect? No. Is it a good start? Yes. For it to succeed and for the United States to continue to lead in these policy areas, policymakers need to maintain and revise it, and consider it a living document. For the implementation plan, leaders need to realize that these were lofty goals with aggressive timelines—many of which may be missed—but to keep trying.”

Meyers: “Overlooking the aforementioned lack of a cybersecurity impact scale, I would nominate the Internet Security Research Group’s Prossimo project or, more parochially, the creation of Wolfi, a new security-first Linux distribution.” 

Sherman: “The 2023 US National Cybersecurity Strategy is particularly significant because of its strong, explicit bent toward regulation. It is the product of an important, positive, and long overdue decision to focus US cyber policy on where and why companies are not investing in cybersecurity, rather than continue to speak purely about public-private partnerships and ignore the failures of the market to address the risks to citizens, businesses, and the country. As a point of comparison for this shift, the 2023 cyber strategy mentions ‘regulation’ or some variant of it forty times—while the previous National Cyber Strategy, released in 2018, did not say ‘regulation’ once.” 
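
The comparison above can be reproduced with a simple word count. The sketch below tallies word forms beginning with “regulat” (regulation, regulatory, regulate, and so on) in plain-text copies of the two strategies; the file names are placeholders for wherever those documents are saved locally.

```python
import re
from pathlib import Path

def count_regulation_mentions(path: Path) -> int:
    """Count word forms beginning with 'regulat' (regulation, regulatory, regulate, ...)."""
    text = path.read_text(encoding="utf-8", errors="replace").lower()
    return len(re.findall(r"\bregulat\w*", text))

if __name__ == "__main__":
    # Placeholder file names; substitute plain-text exports of the two strategies.
    for name in ["national-cybersecurity-strategy-2023.txt", "national-cyber-strategy-2018.txt"]:
        path = Path(name)
        if path.exists():
            print(f"{name}: {count_regulation_mentions(path)} mentions")
        else:
            print(f"{name}: file not found (convert the published PDF to text first)")
```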

Smith: “For impact in 2023, the Department of Defense (DoD) Cyber Strategy is at the top of my list because it places a hard stop on DoD by clearly defining its jurisdictional limits. With the rise of ransomware and other forms of pervasive cybercrime, US Cyber Command has often worked to support other US entities to combat attacks. Many viewed DoD’s activity as blurring the line and stepping dangerously close to getting involved in domestic cybersecurity. The 2023 DoD Cyber Strategy clearly draws the line: The Department, in particular, lacks the authority to employ military forces to defend private companies against cyber-attacks. It may do so only if directed by the President, or (1) if the Secretary of Defense or other appropriate DoD official approves a request for defense support of civil authorities from the Department of Homeland Security, Federal Bureau of Investigation, or another appropriate lead Federal agency; (2) at the invitation of such a company; and (3) in coordination with the relevant local or Federal authority. Given this—and the limited circumstances in which military cyber forces would be asked to defend civilian critical infrastructure—the Department will not posture itself to defend every private sector network.” 

Stempfley: “The Delaware Court of Chancery ruling that expands the duty of care from ‘directors’ to ‘officers’ and takes an expansive view of what an officer is at a company.  The ruling in the McDonald’s Corporation Stockholder Derivative Litigation, while not getting the same attention as the SEC rule or the National Cyber Strategy, is creating impact by lining up top-to-bottom conversations about cyber risks in organizations. Additionally, it is likely to lead to more standardization and clarity around the role of the Chief Information Security Officer and other relevant officers.”

#3 What is the most important yet under-reported cyber incident of 2023?

Koran: “The T-Mobile data breaches. If we answer the question of ‘what day is it?’ and reply ‘another day for a T-Mobile breach,’ the company has not learned from its long history of breaches, nor has the regulatory framework aided in curbing the regularity and impact of these breaches. While other telecommunications companies have not had as many regular lapses as T-Mobile has had, one wonders what makes them different from the others and if the issue can be remedied. Additionally, the company has decided to cut more jobs, and the only thing keeping people away from sensitive areas of the company is a sign on the door of a data center with a strongly worded message of ‘please do not steal any more data.’”

Meyers: “Using a loose definition of ‘incident,’ I would like to nominate the Cyber Safety Review Board’s decision to investigate the extortion activities of Lapsus$ prior to investigating the Russian intelligence agencies’ epic SolarWinds hack.” 

Sherman: “Among others—recognizing that I am cheating on this response by picking a few—a Chinese state-sponsored group called Volt Typhoon hacked US critical infrastructure systems, including in Guam, which speaks to the cyber-focused risks associated with any potential kinetic conflict with Beijing in the future; hackers exploited the log4j vulnerability to hack into devices and then sell the information to ‘proxyware’ services, which speaks to the intersection of major vulnerabilities and the cryptojacking, adware, and other similar markets; and Russia’s military intelligence agency built malware specifically targeting Android devices to spy on Ukrainian devices and, for a period, gained access to the Ukrainian military’s combat data exchange.” 
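
As background on the log4j (Log4Shell) exploitation mentioned above, a common first-pass triage step for defenders was simply to scan web and application logs for JNDI lookup strings. The sketch below shows one such heuristic; the log path is a placeholder, and attackers used many obfuscated variants that a simple pattern like this would miss.

```python
import re
from pathlib import Path

# Heuristic pattern for Log4Shell-style JNDI lookups. Attackers used many
# obfuscations (nested ${lower:...}, ${::-j} tricks), so treat this as a
# triage aid, not a reliable detector.
JNDI_PATTERN = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def scan_log(path: Path):
    """Return (line_number, line) pairs that contain a suspicious JNDI lookup."""
    hits = []
    with path.open(errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if JNDI_PATTERN.search(line):
                hits.append((lineno, line.rstrip()))
    return hits

if __name__ == "__main__":
    # Hypothetical log location; point this at your own web or application logs.
    for lineno, line in scan_log(Path("access.log")):
        print(f"access.log:{lineno}: {line}")
```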

Smith: “Earlier this year genetic testing company 23andMe was hacked multiple times. For a long time, I have wondered about mail-order DNA kits and how they store, protect, and manage an individual’s data—consumer genetic testing data, for example, does not fall under the Health Insurance Portability and Accountability Act (HIPAA). As someone who has done genetic testing for a medical reason and felt the ripple effects of what it can reveal, the 23andMe hacks confirmed my fears that sensitive, personal genetic information gathered for commercial purposes may put marginalized groups at risk if stolen. Many genetic mutations, for example, fall in the ‘founder mutation’ category, meaning the mutation is observed with high frequency in a group that is or was geographically or culturally isolated, in which one or more of the ancestors was a carrier of the altered gene. Therefore, it is relatively easy to determine a person’s ethnicity if a founder mutation is present. 23andMe tests for many known founder mutations because they do tell people a lot about their personal history. With antisemitism at peak levels and the first 23andMe hack targeting those of Ashkenazi Jewish heritage, I think the hacking of commercial genetic data deserves a lot more attention.” 

Stempfley: “Ransomware has gotten a great deal of coverage, from the Ransomware Task Force to its highlights in the Verizon Data Breach Report (VDBR) and the financial impact—so what is under-reported in ransomware? The now documented impact to public safety. Early in the year, published research explicitly tied ransomware at hospitals and health care delivery points to impacts on patient care. This study showed that in 44 percent of the cases studied, patient care was impeded, leading to negative patient outcomes. This report was published in the Journal of the American Medical Association, a mainstream medical journal, not in a security publication.”


#4 What cybersecurity issue went unaddressed in 2023 but deserves greater attention in 2024? 

Koran: “Not to flog the buzzwords, but better forward-leaning policies and regulations toward security in artificial intelligence (AI) and large language model (LLM) services deserve more attention. Putting these tools and services on the market well before their safety has been successfully worked out, vetted, and peer reviewed greatly increases risk to critical and non-critical infrastructure. While these tools may not be directly flipping switches at power plants and hospitals, the impact of their generated content on mis- and disinformation, at a time when the public is not critically thinking about their output, is dangerous. Even non-LLM or AI-based tools that are labelled as being backed or run by these technologies not only engender a false sense of safety and completeness but also fuel the hype train.” 

Meyers: “The ungodly amount of time that software professionals spend identifying, triaging, and remediating known software vulnerabilities. I thought computers were supposed to make our lives better.” 

Sherman: “Some of the most important protocols for internet traffic transmission globally, such as the Border Gateway Protocol (BGP), remain fundamentally insecure, and many companies and organizations still have not implemented the available cybersecurity improvements. Policymakers should also remember, amid excitement, fear, and craze about generative AI, to think about the cybersecurity of physical internet infrastructure that underpins GenAI—such as the cloud computing systems used to train and deploy models.” 
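
One of the “available cybersecurity improvements” for BGP is Resource Public Key Infrastructure (RPKI) route origin validation, in which networks check announced routes against published Route Origin Authorizations (ROAs). The sketch below walks through the standard valid/invalid/not-found decision for a single announcement using made-up ROA data; it is a conceptual illustration of the logic, not a production validator.

```python
from ipaddress import ip_network

# Hypothetical ROAs: (authorized prefix, max prefix length, authorized origin ASN).
# Real ROAs are cryptographically signed objects fetched from RPKI repositories.
ROAS = [
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def validate_origin(prefix: str, origin_asn: int) -> str:
    """Classify an announcement as 'valid', 'invalid', or 'not-found' per RPKI origin validation."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in ROAS:
        if announced.subnet_of(roa_prefix):
            covered = True
            if announced.prefixlen <= max_len and origin_asn == roa_asn:
                return "valid"
    return "invalid" if covered else "not-found"

if __name__ == "__main__":
    print(validate_origin("192.0.2.0/24", 64500))    # valid: covered ROA, origin matches
    print(validate_origin("192.0.2.0/24", 64999))    # invalid: covered, but wrong origin AS
    print(validate_origin("203.0.113.0/24", 64500))  # not-found: no covering ROA
```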

Smith: “In March, the Environmental Protection Agency (EPA) released a memorandum stressing the need for states to assess cybersecurity risk to drinking water systems and issued a new rule that added cybersecurity assessments to annual state-led Sanitary Survey Programs for public water systems. However, the EPA rescinded the rule after legal challenges. Attorneys general in Iowa, Arkansas, and Missouri, joined by the American Water Works Association and the National Rural Water Association, claimed that making the cybersecurity improvements would be too costly for suppliers and that those costs would be passed on to consumers. Importantly, EPA Assistant Administrator Radhika Fox warned, ‘cyberattacks have the potential to contaminate drinking water, which threatens public health.’ I hope to see more action to protect our public water systems, as well as other systems critical to public health and welfare.”

Stempfley: “The impact of Generative AI on entry-level positions in the cyber workforce [deserves greater attention]. The cyber workforce shortage has been widely reported, as has the challenge that many new entrants to the field have experienced, but we have not begun to talk about how the impacts from this technology will be disproportionately aligned to those least experienced in the field, potentially doing away with most entry level roles. If this happens, it will require us to think about the workforce in different ways.” 

#5 At year’s end, how do you assess the efficacy of the Biden administration’s 2023 National Cybersecurity Strategy?

Koran: “In a short word, it has been ineffective—despite, as I note above, being the most impactful. Barring the momentum of the software bill of materials (SBOM) message train, the suggested movements by public and private sector organizations to align with the strategy have been resisted or questioned, even though many of the ideas and efforts proposed are laudable. There was not a lot of momentum for these groups to push some of these efforts, and it will take years, not weeks or months, to meet the strategy’s goals. The strategy is a way finder, but Congress—in disarray for quite some time—needs to act to power it. Until Congress passes legislation and appropriations that support government efforts, private sector organizations will have little reason to align unless the market demands change. Everything else has also been overshadowed by global events and politics, and momentum to achieve the goals set out by the strategy will be hard to come by.” 

Meyers: “To be determined. Perhaps it shifted the Overton window on software security and liability, though I suspect that general suspicion of large technology companies did that more than the issuing of any one strategy.” 

Sherman: “The Biden administration’s strategy, particularly with its emphasis on regulation, is an important and long-overdue shift in how the US government is messaging and advancing its cybersecurity policy. However, there is still much to be done, and it is not yet clear exactly how the administration intends to implement the emphasis on regulation in practice—the implementation guidance for the National Cybersecurity Strategy entirely omitted certain sections of the Strategy itself.” 

Smith: “I think it is too early to assess the efficacy of the strategy, but I do think that it is a step forward. As a wild example, the October 22 edition of 60 Minutes brought the Five Eyes (United States, Australia, New Zealand, United Kingdom, and Canada) intelligence chiefs together for an interview—something that has never happened before! Before the interview, they released a rare joint statement on the ‘unprecedented threat’ China poses to the innovation world, warning that, from quantum technology and robotics to biotechnology and artificial intelligence, China is stealing secrets across sectors. The best part about the interview, in my opinion, is that it is conducted in a sparse, dimly lit room with all the chiefs sitting around a non-descript round table, adding to the spook factor!”

Stempfley: “The National Cybersecurity Strategy, its associated implementation plan, and workforce strategy have been important documents and have certainly set the national direction—this direction has served the administration well in domestic and international discussions. The strategy’s influence in the federal budget process, and in those elements of industry that do not typically engage in public-private partnerships, has not been as substantive as hoped.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Ukraine’s AI road map seeks to balance innovation and security https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-ai-road-map-seeks-to-balance-innovation-and-security/ Tue, 12 Dec 2023 21:37:02 +0000 https://www.atlanticcouncil.org/?p=715576 As the world grapples with the implications of rapidly evolving Artificial Intelligence (AI) technologies, Ukraine has recently presented a national road map for AI regulation that seeks to balance the core values of innovation and security, writes Ukraine's Minister for Digital Transformation Mykhailo Fedorov.

As the world grapples with the implications of rapidly evolving Artificial Intelligence (AI) technologies, Ukraine has recently presented a national road map for AI regulation that seeks to balance the core values of innovation and security.

Businesses all over the world are currently racing to integrate AI into their products and services. This process will help define the future of the tech sector and will shape economic development across borders.

It is already clear that AI will allow us all to harness incredible technological advances for the benefit of humanity as a whole. But if left unregulated and uncontrolled, AI poses a range of serious risks in areas including identity theft and the dissemination of fake information on an unprecedented scale.

One of the key objectives facing all governments today is to maximize the positive impact of AI while minimizing any unethical use by both developers and users, amid mounting concerns over cyber security and other potential abuses. Clearly, this exciting new technological frontier must be regulated in ways that ensure the safety of individuals, businesses, and states.

Some governments are looking to adopt AI policies that minimize any potential intervention while supporting business; others are attempting to prioritize the protection of human rights. Ukraine is working to strike a balance between these strategic priorities.


Today, Ukraine is among the world’s leading AI innovators. There are more than 60 Ukrainian tech companies registered as active in the field of artificial intelligence, but this is by no means an exhaustive list. Throughout Ukraine’s vibrant tech sector, a large and growing number of companies are developing products and applications involving AI.

The present objective of the Ukrainian authorities is to support this growth and avoid over-regulation of AI. We recognize that the rapid adoption of regulations is always risky when applied to fast-moving innovative fields, and prefer instead to adopt a soft approach that takes the interests of businesses into account. Our strategy is to implement regulation through a bottom-up approach that will begin by preparing businesses for future regulation, before then moving to the implementation stage.

During the first phase, which is set to last two to three years, the Ukrainian authorities will assist companies in developing a culture of self-regulation that will enable them to control the ethics of their AI systems independently. Practical tools will be provided to help businesses adapt their AI-based products in line with future Ukrainian and European legislative requirements. These tools will make it possible to carry out voluntary risk assessment of AI products, which will help businesses identify any areas that need improvement or review.

Ukraine also plans to create a product development environment overseen by the government and involving expert assistance. The aim is to allow companies to develop and test AI products for compliance with future legislation. Additionally, a range of recommendations will be created to provide stakeholders with practical guidelines for how to design, develop, and use AI ethically and responsibly before any legally binding regulations come into force.

For those businesses willing to do more during the initial self-regulation phase, the Ukrainian authorities will prepare voluntary codes of conduct. Stakeholders will also be issued a policy overview providing them with a clear understanding of the government’s approach to AI regulation and clarifying what they can expect in the future.

During the initial phase, the Ukrainian government’s role is not to regulate AI usage, but to help Ukrainian businesses prepare for inevitable future AI regulation. At present, fostering a sense of business responsibility is the priority, with no mandatory requirements or penalties. Instead, the focus is on voluntary commitments, practical tools, and an open dialogue between government and businesses.

The next step will be the formation of national AI legislation in line with the European Union’s AI Act. The bottom-up process chosen by Ukraine is designed to create a smooth transition period and guarantee effective integration.

The resulting Ukrainian AI regulations should ensure the highest levels of human rights protection. While the development of new technologies is by nature an extremely unpredictable process for both businesses and governments, personal safety and security remain the top priority.

At the same time, the Ukrainian approach to AI regulation is also designed to be business-friendly and should help fuel further innovation in Ukraine. By aligning the Ukrainian regulatory framework with EU legislation, Ukrainian tech companies will be able to enter European markets with ease.

AI regulation is a global issue that impacts every country. It is not merely a matter of protections or restrictions, but of creating the right environment for safe innovation. Ukraine’s AI regulation strategy aims to minimize the risk of abuses while making sure the country’s tech sector can make the most of this game-changing technology.

Mykhailo Fedorov is Ukraine’s Vice Prime Minister for Innovations and Development of Education, Science, and Technologies, and Minister of Digital Transformation.


Kroenig on Fox & Friends on Chinese cyber intrusions https://www.atlanticcouncil.org/insight-impact/in-the-news/kroenig-on-fox-friends-on-chinese-cyber-intrusions/ Tue, 12 Dec 2023 18:54:39 +0000 https://www.atlanticcouncil.org/?p=715442 On December 12, Matthew Kroenig, Atlantic Council vice president and Scowcroft Center senior director, was interviewed on Fox & Friends on cyber intrusions into critical US entities by the People’s Republic of China. Dr. Kroenig argues that these intrusions demonstrate that China is preparing for war with the United States, and he contends that, to […]


On December 12, Matthew Kroenig, Atlantic Council vice president and Scowcroft Center senior director, was interviewed on Fox & Friends on cyber intrusions into critical US entities by the People’s Republic of China. Dr. Kroenig argues that these intrusions demonstrate that China is preparing for war with the United States, and he contends that, to defend against cyberattacks, the US government needs to “be clear with the American people that we are in a new Cold War with China.”

We are in a serious rivalry. This isn’t some kind of competition like a tennis match.

Matthew Kroenig


2024 DC Cyber 9/12 Strategy Challenge https://www.atlanticcouncil.org/content-series/cyber-9-12-project/2024-dc-cyber-9-12-strategy-challenge/ Tue, 05 Dec 2023 16:48:11 +0000 https://www.atlanticcouncil.org/?p=708927 The Atlantic Council’s Cyber Statecraft Initiative, in partnership with American University’s School of International Service and Washington College of Law, will hold the twelfth annual Cyber 9/12 Strategy Challenge both virtually and in-person in Washington, DC on March 15-16, 2024. This event will be held in a hybrid format, meaning teams are welcome to attend either […]


The Atlantic Council’s Cyber Statecraft Initiative, in partnership with American University’s School of International Service and Washington College of Law, will hold the twelfth annual Cyber 9/12 Strategy Challenge both virtually and in-person in Washington, DC on March 15-16, 2024. This event will be held in a hybrid format, meaning teams are welcome to attend either virtually via Zoom or in-person at American University’s Washington College of Law. The agenda and format will look very similar to past Cyber 9/12 Challenges. Plenary sessions will be livestreamed via Zoom.


Frequently Asked Questions: Virtual

How do I log in to the virtual sessions? 

Your team and coach will be sent an invitation to your round’s Zoom meeting in the week leading up to the event, using the emails provided during registration.

How will I know where to log in, and where is the schedule? 

For competition rounds you will receive an email invitation with your Zoom link. For all plenary sessions and for the team room assignments and agenda please check the Cyber 9/12 Linktree. 

How are the virtual sessions being run? 

Virtual sessions will closely follow the traditional competition structure and rules. Each Zoom meeting will be managed by a timekeeper, who will ensure that each team and judge logs on to the conference line and will manage the competition round.

At the beginning of the round, decision documents will be shared by the timekeeper via Zoom and judges will have 2 minutes 30 seconds to review the document prior to the competitors’ briefing.  

Teams will have 10 minutes to present their briefing and 10 minutes for Q&A. Judges will be asked to mute themselves for the 10-minute briefing session. 

Judges will then engage the team in a Q&A session, playing the role of members of the National Security Council (or other organization as listed on the Intelligence Report instructions).

Judges will then be invited to a digital breakout room and will have 5 minutes to discuss scores and fill out their scorecards via JotForm.  

After the scoring is over, judges will have 10 minutes to provide direct feedback to the team.  

A 10-minute break is scheduled before the start of the next round. Each round has been allotted several minutes of transition time for technical difficulties and troubleshooting. 

What do I need to log into a virtual session?  

Your team will need a computer (recommended), tablet, or smartphone with a webcam, microphone, and speaker or headphones. 

Your team will be provided with a link to the Zoom conference for each competition round your team is scheduled for. If you have any questions about the software, please see Zoom’s internal guide here. 

Will my team get scored the same way on Zoom as in-person? 

Yes, the rules of the competition remain the same, including the rubric for scoring. You can see the rules and the grading rubric here.

How does advancing through the competition work in a hybrid format? 

After the Qualifying Round on Day 1, the top 50% of in-person teams and the top 50% of virtual teams will advance to the Semi-Final Round on Day 2. After the Semi-Final Round, the top 3 teams, in-person or virtual, will advance to the Final Round.

How will my team receive Intelligence Report 2 and 3? 

We will send out the Intelligence Reports via email to all qualifying teams. 

How will the final round be run? 

The final round will be run identically to the traditional final round format, except that the judges will be in-person. The virtual team will follow the standard final round format as outlined in the rules. After finishing the competition round, the virtual finalist team(s) will then join the plenary session webinar for the final round and watch the remaining finalist teams present.

Frequently Asked Questions: In-person

Where will the event be held in-person? 

For participants attending in-person, the Cyber 9/12 Strategy Challenge will be held at American University’s Washington College of Law (WCL).

What time will the event start and finish? 

While the schedule has yet to be finalized, participants will be expected at American University WCL at 8:00am on Day 1, and the competition will run until approximately 5:00pm, with an evening reception at approximately 6:30pm. Day 2 will commence at approximately 9:00am and will finish at approximately 5:30pm. The organizing team reserves the right to modify the above timing. The official schedule of events will be distributed to teams in advance of the event and will be available on the Cyber 9/12 Linktree. All times are EST.

Will my team get scored the same way in-person as on Zoom? 

Yes, the rules of the competition remain the same, including the rubric for scoring. You can see the rules and the grading rubric here.

How does advancing through the competition work in a hybrid format? 

After the Qualifying Round on Day 1, the top 50% of in-person teams and the top 50% of virtual teams will advance to the Semi-Final Round on Day 2. After the Semi-Final Round, the top 3 teams, in-person or virtual, will advance to the Final Round.

Can teams who are eliminated on Day 1 still participate in Day 2 events? 

Yes! All teams are welcome at all of the side-programming events. We strongly encourage teams eliminated on Day 1 to attend the competition on Day 2. There will be side-programming events such as Careers Talks, Resume Workshops, and other fun, cyber-related activities. See the Cyber 9/12 Linktree in the lead-up to the event for the full schedule of events.

Will meals be included for in-person attendees?

Yes, breakfast and lunch will be provided for all participants on both days. Light refreshments & finger foods will be provided at the evening reception on Day 1.

What should I pack/bring to a Cyber 9/12 event?

At the event: Please bring at least 5 printed copies of your decision documents to give to the judges on Day 1. Teams who do not have their decision document to give to judges will be assessed a penalty. We will help print documents on Day 2. Name tags will be provided to all participants, judges, and staff at registration on March 15. We ask you to wear these name tags throughout the duration of the competition. Name tags will be printed using the exact first and last name provided upon registration.

Dress Code: We recommend that students dress in business casual attire as teams will be conducting briefings. You can learn more about business casual attire here.

Electronic Devices: Cell phones, laptops, and wearable tech may not be used during presentations, but we recommend teams bring their laptops, as they will need to draft their decision documents for Day 2 and conduct research. Please refer to the competition rules for additional information and for our policy on technology accommodations.

Presentation Aids: Teams may not use any visual aid other than their decision documents in their oral policy brief, including but not limited to slideshow presentations, additional handouts, binders, or folders.

How do we get to American University?

American University is on the DC Metro Red Line. Metro service from both Dulles International Airport (IAD) and Reagan National Airport (DCA) connects with the Metro Red Line at Metro Center.

Zoom

What is Zoom? 

Zoom is a free video conferencing application. We will be using it to host the competition remotely. 

Do I need a Zoom account? 

You do not have to have an account BUT we recommend that you do and download the desktop application to participate in the Cyber 9/12 Strategy Challenge. 

Please use your real name to register so we can track participation. A free Zoom account is all that is necessary to participate.  

What if I don’t have Zoom? 

Zoom is available for download online. You can also access Zoom conferences through a browser without downloading any software or registering.  

How do I use Zoom on my Mac? Windows? Linux Machine? 

Follow the instructions here and here to get started. Please use the same email you registered with for your Zoom to sign up.

Can I use Zoom on my mobile device? 

Yes, but we recommend that you use a computer or tablet.

Can each member of my team call into the Zoom conference line independently for our competition round? 

Yes. Please see the troubleshooting section below for tips if multiple team members will be joining the competition round on independent devices in the same room.  

Can other teams listen-in to my team’s session? 

Zoom links to competition sessions are team specific—only your team, your coach and your judges will have access to a session and sessions will be monitored once all participants have joined. If an observer has requested to watch your team‘s presentation, your timekeeper will notify you at the start of your round.

Staff will be monitoring all sessions and all meetings will have a waiting room enabled in order to monitor attendance. Any team member or coach in a session they are not assigned to will be removed and disqualified. 

Troubleshooting

What if my team loses internet connection or is disconnected during the competition? 

If your team experiences a loss of internet connection, we recommend following Zoom’s troubleshooting steps listed here. Please remain in contact with your timekeeper.

If your team is unable to rejoin the Zoom conference – please use one of the several dial-in lines included in the Zoom invitation.  

What if there is an audio echo or other audio feedback issue? 

There are three possible causes for audio malfunction during a meeting: 

  • A participant has both the computer and telephone audio active. 
  • A participant’s computer and telephone speakers are too close together.
  • Multiple participant computers with active audio are in the same room.  

If this is the case, please disconnect the computer’s audio from other devices, and leave the Zoom conference on one computer. To avoid audio feedback issues, we recommend each team use one computer to compete. 

What if I am unable to use a video conference, can my team still participate? 

Zoom has dial-in lines associated with each Zoom conference event and you are able to call directly using any landline or mobile phone. 

We do not recommend choosing voice only lines unless absolutely necessary.

Other

Will there be keynotes or any networking activity remotely? 

Keynotes will continue as reflected on our agenda and will be broadcast with links to be shared with competitors the day before the event. Some side-programming events may not be available virtually. We apologize for the inconvenience.

We also encourage competitors and judges to join the Cyber 9/12 Strategy Challenge Alumni Network on LinkedIn where we regularly share job and internship postings, as well as information about events and how to be a part of the cyber policy community worldwide.

How should I prepare for a Cyber 9/12?

Check out our preparation materials, which include past scenarios, playbooks with award-winning policy recommendations, and a starter pack for teams that includes templates for requesting coaching support or funding.


Community watch: China’s vision for the future of the internet https://www.atlanticcouncil.org/in-depth-research-reports/report/community-watch-chinas-vision-for-the-future-of-the-internet/ Mon, 04 Dec 2023 14:00:00 +0000 https://www.atlanticcouncil.org/?p=707988 In 2015, Beijing released Jointly Building a Community with a Shared Future in Cyberspace, a white paper outlining the CCP’s vision for the future of the internet. In the eight years since then, this vision has picked up steam outside of China, largely as the result of Beijing’s efforts to export these ideas to authoritarian countries.

Table of contents

Executive summary
Introduction
The core of China’s approach
Case studies in China’s “shared future”

Executive summary

China recognizes that many nondemocratic and illiberal developing nations need internet connectivity for economic development. These countries aim to digitize trade, government services, and social interactions, but interconnectivity also risks enabling better communication and coordination among political dissidents. China understands this problem and is trying to build global norms that facilitate the provision of its censorship and surveillance tools to other countries. This so-called Community with a Shared Future in Cyberspace is based on the idea of cyber sovereignty. China contends that it is a state’s right to protect its political system, determine what content is appropriate within its borders, create its own standards for cybersecurity, and govern access to the infrastructure of the internet.

Jointly Building a Community with a Shared Future in Cyberspace, a white paper from the government of the People’s Republic of China (most recently released in 2022 but reissued periodically since 2015), is a continuation of diplomatic efforts to rally the international community around China’s concept of cyber sovereignty.1 By extending the concept of sovereignty to cyberspace, China makes the argument that the state decides the content, operations, and norms of its internet; that each state is entitled to such determinations as a de facto right of its existence; that all states should have equal say in the administration of the global internet; and that it is the role of the state to balance claims of citizens and the international community (businesses, mostly, but also other states and governing bodies). 

But making the world safe for authoritarian governments is only part of China’s motivation. As the key provider of censorship-ready internet equipment and surveillance tools, China’s concept of cyber sovereignty offers political security to other illiberal governments. Case studies in this report demonstrate how such technologies may play a role in keeping China’s friends in power.

The PRC supports other authoritarian governments for good reason. Many countries in which Chinese state-owned enterprises and PRC-based companies own mineral drawing rights or have significant investments are governed by authoritarians. Political instability threatens these investments, and, in some cases, China’s access to critical mineral inputs to its high-tech manufacturing sector. Without a globally capable navy to compel governments to keep their word on contracts, China is at the mercy of democratic revolutions and elite power struggles in these countries. By providing political security to a state through censorship, surveillance, and hacking of dissidents, China improves its chances of maintaining access to strategic plots of land for military bases or critical manufacturing inputs. A government that perceives itself to be dependent on China for political security is in no position to oppose it.

Outside of China’s strategic objectives, the push for a Community with a Shared Future in Cyberspace may also have an operational impact on state-backed hacking teams.  

As China’s cybersecurity companies earn more customers, their defenders gain access to more endpoints, better telemetry, and a more complete view of global cyber events. Leveraged appropriately, a larger customer base improves defenses. The Ministry of Industry and Information Technology’s Cybersecurity Threat and Vulnerability Information Sharing Platform, which collects information about software vulnerabilities, also collects voluntary incident response reports from Chinese firms responding to breaches of their customers.2 Disclosure of incidents and the vulnerabilities of overseas clients of Chinese cybersecurity firms would significantly increase the PRC’s visibility into global cyber operations by other nations or transnational criminal groups. China’s own defensive posture should also improve as its companies attract more global clients. 

China’s offensive teams could benefit, too. Many cybersecurity firms often allow their own country’s security services to operate unimpeded in their customers’ networks.3 Therefore, it is likely that more companies protected by Chinese cybersecurity companies means fewer networks where China’s offensive hacking teams must worry about evading defenses. 

This report uses cases studies from the Solomon Islands, Russia, and beyond to show how China is operationalizing its view of cyber sovereignty. 

Introduction

A long black slate wall covered in dark hexagonal tiles runs along the side of Nuhong Street in Wuzhen, China, eighty miles southwest of Shanghai. A gap in the middle of the wall leads visitors to the entrance of the Waterside Resort that, for the last nine years, has hosted China’s World Internet Conference, a premier event for Chinese Communist Party (CCP) cyber policymakers.

The inaugural conference didn’t seem like a foreign policy forum. The thousand or so attendees from a handful of countries and dozens of companies listened to a speaker circuit asserting that 5G was the future, big data was changing the world, and the internet was great for economic development—hardly groundbreaking topics in 2014.4 But the internet conference was more than a platform for platitudes about the internet: it also served as China’s soft launch for its international strategy on internet governance.

By the last evening of the conference, some of the attendees had already left, choosing the red-eye flight home over another night by the glass-encased pool on the waterfront. Around 11 p.m., papers slid under doorways up and down the hotel halls. Conference organizers went room by room distributing a proclamation they hoped attendees would endorse just nine hours later.5 Attendees were stunned. The document said: “During the conference, many speakers and participants suggest [sic] that a Wuzhen declaration be released at the closing ceremony.” The papers, stapled and stuffed under doors, outlined Beijing’s views of the internet. The conference attendees—many of whom were members of the China-friendly Shanghai Cooperation Organization—balked at the last-minute, tone-deaf approach to getting an endorsement of Beijing’s thoughts on the internet. The document went unsigned, and the inaugural Wuzhen internet conference wrapped without a sweeping declaration. It was clear China needed the big guns, and perhaps less shady diplomatic tactics, to persuade foreigners of the merits of their views of the internet. 

President Xi Jinping headlined China’s second World Internet Conference in 2015.6 This time the organizers skipped the late-night antics. On stage and reportedly in front of representatives from more than 120 countries and many more technology firm CEOs, Xi outlined a vision that is now enshrined in text as “Jointly Building a Community with a Shared Future in Cyberspace.”7 The four principles and five proposals President Xi laid out in his speech, which generally increase the power of the state and aim to model the global internet in China’s image, remain a constant theme in China’s diplomatic strategy on internet governance.8 In doing so, Xi fired the starting gun on an era of global technology competition that may well lead to blocs of countries aligned by shared censorship and cybersecurity standards. China has reissued the document many times since Xi’s speech, with the latest coming in 2022. 

Xi’s 2015 speech came at a pivotal moment in history for China and many other authoritarian regimes. The Arab Spring shook authoritarian governments around the world just years earlier.9 Social media-fueled revolutions saw some autocrats overthrown or civil wars started in just a few months. China shared the autocrats’ paranoia. A think tank under the purview of the Cyberspace Administration of China acutely summarized the issue of internet governance, stating: “If our party cannot traverse the hurdle represented by the Internet, it cannot traverse the hurdle of remaining in power for the long term.”10 Another PRC government agency report went even further: blaming the US Central Intelligence Agency for no fewer than eleven “color revolutions” since 2003: the National Computer Virus Emergency Response Center claimed that the United States was providing critical technical support to pro-democracy protestors.11 Specifically, the center blamed the CIA for five technologies—ranging from encrypted communications to “anti-jamming” WiFi that helped connect protestors—that played into the success of color revolutions. Exuberance in Washington over the internet leveling the playing field between dictators and their oppressed citizens was matched in conviction, if not in tone, by leaders from Beijing to Islamabad.

But China and other repressive regimes could not eschew the internet. The internet was digitizing everything, from social relationships and political affiliations to commerce and trade. Authoritarians needed a way to reap the benefits of the digital economy without introducing unacceptable risks to their political systems. China’s approach, called a Community with a Shared Future in Cyberspace,12 responds to these threats as a call to action for authoritarian governments and a path toward more amenable global internet governance for authoritarian regimes. It is, as one expert put it, China switching from defense to offense.13

The core of China’s approach

The PRC considers four principles key to structuring the future of cyberspace. These principles lay the conceptual groundwork for the five proposals, which reflect the collective tasks to build this new system. Table 1 shows the principles, which were drawn from Xi’s 2015 speech.14


Table 1: China’s Four Principles, in Xi’s Words

  • Respect for cyber sovereignty: “The principle of sovereign equality enshrined in the Charter of the United Nations is one of the basic norms in contemporary international relations. It covers all aspects of state-to-state relations, which also includes cyberspace. We should respect the right of individual countries to independently choose their own path of cyber development, model of cyber regulation and Internet public policies, and participate in international cyberspace governance on an equal footing. No country should pursue cyber hegemony, interfere in other countries’ internal affairs or engage in, connive at or support cyber activities that undermine other countries’ national security.”
  • Maintenance of peace and security: “A secure, stable and prosperous cyberspace is of great significance to all countries and the world. In the real world, there are still lingering wars, shadows of terrorism and occurrences of crimes. Cyberspace should not become a battlefield for countries to wrestle with one another, still less should it become a hotbed for crimes. Countries should work together to prevent and oppose the use of cyberspace for criminal activities such as terrorism, pornography, drug trafficking, money laundering and gambling. All cyber crimes, be they commercial cyber thefts or hacker attacks against government networks, should be firmly combated in accordance with relevant laws and international conventions. No double standards should be allowed in upholding cyber security. We cannot just have the security of one or some countries while leaving the rest insecure, still less should one seek the so-called absolute security of itself at the expense of the security of others.”
  • Promotion of openness and cooperation: “As an old Chinese saying goes, ‘When there is mutual care, the world will be in peace; when there is mutual hatred, the world will be in chaos.’ To improve the global Internet governance system and maintain the order of cyberspace, we should firmly follow the concept of mutual support, mutual trust and mutual benefit and reject the old mentality of zero-sum game or ‘winner takes all.’ All countries should advance opening-up and cooperation in cyberspace and further substantiate and enhance the opening-up efforts. We should also build more platforms for communication and cooperation and create more converging points of interests, growth areas for cooperation and new highlights for win-win outcomes. Efforts should be made to advance complementarity of strengths and common development of all countries in cyberspace so that more countries and people will ride on the fast train of the information age and share the benefits of Internet development.”
  • Cultivation of good order: “Like in the real world, freedom and order are both necessary in cyberspace. Freedom is what order is meant for and order is the guarantee for freedom. We should respect Internet users’ rights to exchange their ideas and express their minds, and we should also build a good order in cyberspace in accordance with law as it will help protect the legitimate rights and interests of all Internet users. Cyberspace is not a place beyond the rule of law. Cyberspace is virtual, but players in cyberspace are real. Everyone should abide by the law, with the rights and obligations of parties concerned clearly defined. Cyberspace must be governed, operated and used in accordance with law, so that the Internet can enjoy sound development under the rule of law. In the meantime, greater efforts should be made to strengthen ethical standards and civilized behaviors in cyberspace. We should give full play to the role of moral teachings in guiding the use of the Internet to make sure that the fine accomplishments of human civilizations will nourish the growth of cyberspace and help rehabilitate cyber ecology.”

The four principles are not of equal importance. “Respecting cyber sovereignty” is the cornerstone of China’s vision for global cyber governance. China introduced and argued for the concept in its first internet white paper in 2010.15 But cyber sovereignty is not itself controversial. The idea that a government can regulate activity within its borders is nearly synonymous with what it means to be a state. The issues arise with the prescriptive and hypocritical nature of the three principles that follow.

Under the “maintenance of peace and security” principle, China—a country with a famously effective and persistent ability to steal and commercialize foreign intellectual property16—suggests that all countries should abhor cyberattacks that lead to IP theft or government spying. Xi’s statement establishes an equivalency between two things held separate in Western capitalist societies: intellectual property rights and trade secrets on the one hand, and espionage against other governments on the other. China sets what the United States prizes but cannot defend well (IP and trade secrets) alongside what China prizes but cannot guarantee for itself (the confidentiality of state secrets). The juxtaposition amounts to an implicit bargain, and one that neither side would accept. Under China’s proposition, the United States’ continuation of traditional intelligence-collection activities would contravene the “peace and security” principle, giving the Ministry of Foreign Affairs spokesperson a ready reason to blame the United States whenever China is caught conducting economic espionage.

“Promotion of openness and cooperation” is mundane enough to garner support until readers examine the fine print or ask China to act on the principle. In calling on other countries to throw off a zero-sum mentality and view the internet as a place for mutual benefit, Xi unironically asks states to pursue win-win outcomes. The argument blatantly ignores the stark asymmetry between foreign tech companies’ access to the PRC market and Chinese firms’ access to foreign markets. Of course, if a country allows a foreign firm into its market, by Xi’s logic the country must have decided it was a win-win decision. It is unclear whether refusing market access to a Chinese company would be acceptable, or whether that would count as zero-sum thinking and contravene the value of openness. Again, China’s rhetoric misrepresents the conditions it would likely accept.

Cultivating “good order” in cyberspace, at least as Xi conceptualizes it, is impossible for democratic countries with freedom of speech. Entreaties that “order” be the guarantor of freedom of speech will not pass muster in many nations, at least not the “order” sought by China’s policymakers. A report from the Institute for a Community with a Shared Future sheds light on what type of content might upset the “good order.” In its Governance Report on the Phenomenon of Internet Violence, analysts identify political scandals, like a deadly 2018 bus crash in Chongqing or the 2020 “Wuhan virus leak rumor,” as examples of online violence, alongside a case in which a woman was bullied to suicide.17 Treating political issues as “online violence” that threatens good order is not confined to a single report. Staff at the Institute argue that rumors spread at the start of the pandemic in 2020 “highlight the necessity and urgency of building a community with a shared future in cyberspace.”18 For China, “online violence” is a euphemism for speech the government deems politically sensitive. If “making [the internet] better, cleaner and safer is the common responsibility of the international community,”19 as Xi argues, how will China treat countries it sees as shirking their responsibility to combat such online violence? Will countries whose internet service providers rely on Chinese cloud companies or network devices be able to decide that criticizing China is acceptable within their own borders?

China’s five proposals 

The five proposals used to construct China’s Community with a Shared Future in Cyberspace carry less weight than the four principles. The proposals do not appear to be attached to specific funding or policy initiatives, and they have received little attention from China’s foreign ministry. They are, at most, way stations along the path to a shared future. The proposals are:

  1. Speeding up the construction of a global internet infrastructure and promoting interconnectivity.
  2. Building an online platform for cultural exchange and mutual learning.
  3. Promoting the innovative development of the cyber economy and common prosperity. 
  4. Maintaining cyber security and promoting orderly development. 
  5. Building an internet governance system and promoting equity and justice.

Implications and the future of the global internet

China’s argument for its view of global internet governance and the role of the state rests on solid ground. The PRC frequently points to the General Data Protection Regulation (GDPR) in the European Union as a leading example of the state’s role in internet regulation. The GDPR allows EU citizens to have their data deleted, forces businesses to disclose data breaches, and requires websites to let users accept or reject cookies (and choose which kinds) when they visit. China points to concerns in the United States over foreign interference on social media as evidence of US buy-in on China’s view of cyber sovereignty. Even banal regulations like the US “know your customer” rule—which requires some businesses to collect identifying personal information about users, chiefly to combat money laundering and fraud—fit into Beijing’s bucket of evidence. But the alleged convergence between the views of China and democratic nations stops there.

Divergent values between liberal democracies and the coterie of PRC-aligned autocracies underlie their very different interpretations of the meaning of cyber sovereignty. A paper published in the CCP’s top theoretical journal mentions both the need to regulate internet content and to “promote positive energy,” a Paltrowesque euphemism for party-boosting propaganda, alongside endorsements of the cyber sovereignty principle.20 The article elaborates on what Xi made clear in his 2015 speech: for the CCP, censorship and sovereignty are inextricably linked.

These differences are not new. Experts dedicate significant coverage to ongoing policy arguments at the UN, where China repeatedly pushes to classify the dissemination of unwanted content—read: politically intolerable content—as a crime.21 As recently as January 2023, China proposed an amendment to a UN treaty that would make sharing false information online illegal.22 A knock-on effect of media coverage of disinformation campaigns from China and Russia—despite those campaigns’ poor performance23—is that policymakers, pundits, and journalists end up making China’s point: that narratives promoted by other nations are a problem to be solved. What counts as disinformation can then be decided on a country-by-country basis. The tension between the desire to protect democracy from foreign influence and the liberal commitment to promoting free speech and truth in authoritarian systems is palpable.

The United States has fueled the CCP’s concern with its public statements. China’s internet regulators criticized the United States’ Declaration for the Future of the Internet.24 The CCP, paranoid about foreign attempts to support “color revolutions” or foment regime change, has reason to be concerned. The United States’ second stated principle for digital technologies is to promote “democracy,” a value antithetical to continued CCP rule over the PRC. The universal value democratic governments subscribe to—the consent of the governed—drives the US position on the benefits of connectedness. That same value scares authoritarian governments.

Operationalizing our shared future

Jointly Build a Community with a Shared Future in Cyberspace alludes to the pathways the CCP will use to act on its vision. The document includes detailed statistics about the rollout of IPv6—a protocol for issuing internet-connected device addresses that could ease surveillance—the use of the Beidou satellite navigation system within China and elsewhere, the domestic and international use of 5G, the development of transformational technologies like artificial intelligence and Internet of Things devices, and the increasingly widespread use of internet-connected industrial devices.25 The value of the markets these systems enable, such as e-commerce and trade, is cited many times over the course of the document. Policymakers clearly see the fabric of the internet—its devices, markets, and economic value—as expanding. Owning the avenues of that expansion, then, is as much about spreading the CCP’s values as it is about making money.

Authoritarian and nondemocratic developing countries provide a bountiful market for China’s goods. Plenty of developing nations and authoritarian governments want to tighten control over the internet within their borders. Recent research documents a growing number of incidents in which governments shut off the internet in their countries—a good proxy for their interest in censorship.26 These governments need the technology and tools to finely tune their control over the internet. Owing to the political environment inside the PRC, Chinese tech firms already build their products to facilitate censorship and surveillance.27 Some countries are already rolling out these services. The Australian Strategic Policy Institute found that “with technical support from China, local governments in East Africa are escalating censorship on social media platforms and the internet.”28 These findings are mirrored by reporting from Censys, a network data company, which found, among other things, a significant footprint of PRC-made network equipment in four African countries.29 There is no public list of countries that acknowledge supporting the Community with a Shared Future in Cyberspace approach, but there are good indicators of which nations are most likely to participate.

A 2017 policy paper entitled International Strategy of Cooperation on Cyberspace indicated that China would carry out “cybersecurity cooperation” with “the Conference on Interaction and Confidence Building Measures in Asia (CICA), Forum on China-Africa Cooperation (FOCAC), China-Arab States Cooperation Forum, Forum of China and the Community of Latin American and Caribbean States and Asian-African Legal Consultative Organization.”30 But an international strategy document stating the intent to cooperate with most of the Global South is not the same as actually doing so. The 2017 strategy document is, at most, aspirational.

Instead, bilateral agreements and technical agreements between government agencies to work together on cybersecurity or internet governance are better indicators of who is part of China’s “community with a shared future.” For example, Cuba and the PRC signed a comprehensive partnership agreement on cybersecurity in early 2023, though the content of the deal remains secret.31 China has made few public announcements about other such agreements. In their place, the China National Computer Emergency Response Center (CNCERT) has “established partnerships with 274 CERTs in 81 countries and territories and signed cybersecurity cooperation memorandums with 33 of them.”32 But even these countries are not publicly identified.33 A few nations and groups are regularly mentioned alongside claims about CNCERT’s international partnerships, however: Thailand, Cambodia, Laos, Malaysia, the Association of Southeast Asian Nations (ASEAN), the United Arab Emirates, Saudi Arabia, Brazil, South Africa, Benin, and the Shanghai Cooperation Organization. The paper on jointly building a community also mentions the establishment of the China-ASEAN Cybersecurity Exchange and Training Center, the utility of which may be questioned given China’s track record of state-backed hacking campaigns against ASEAN’s members.34

Along with the identities of their signatories, the contents and benefits of these agreements also remain private; without access to them, one can only speculate. Notably, none of the countries listed above is especially competent at cyber operations or cybersecurity. The result may be that CNCERT and its certified private-sector partners receive “first dibs” when government agencies or other entities in these countries need incident response services; receiving favorable terms or financing from the Export-Import Bank of China to facilitate the purchase of PRC technology would also align with other observed behavior.35

Besides favorable terms of trade for PRC tech and cybersecurity firms, some of the CNCERT international partners may also be subject to intelligence-sharing agreements. CNCERT operates a software vulnerability database called China National Information Security Vulnerability Sharing Platform, which accepts submissions from the public and partners with at least three other vulnerability databases.36 CNCERT’s international partnerships could add another valuable pipeline of software vulnerability information into China’s ecosystem. Moreover, under a 2021 regulation, Chinese firms conducting incident response for clients can voluntarily disclose those incidents to the Ministry of Industry and Information Technology’s “Cybersecurity Threat and Vulnerability Information Sharing Platform,” which has a separate system for collecting information about breaches.37 The voluntary disclosure of incidents and mandatory disclosure of vulnerabilities observed in overseas clients of Chinese cybersecurity firms would significantly increase the PRC’s visibility into global cyber operations by other nations or transnational criminal groups. 

Offensive capabilities, not just global cybersecurity, might also be on CCP policymakers’ minds when other countries agree to partner with China. Cybersecurity firms frequently allow their own country’s offensive teams to work unimpeded on their customers’ networks: with each new client China’s cybersecurity companies add to their rosters, China’s state-backed hackers may well gain another network where they can operate without worrying about defenders.38 In this vein, Chen Yixin, the head of the Ministry of State Security, attended a July 2023 meeting of the Cyberspace Administration of China that underlined the importance of the Community with a Shared Future in Cyberspace.39 In September 2023, Chen published commentary in the Cyberspace Administration of China’s magazine arguing that supporting the Shared Future in Cyberspace was important work.40 Researchers from one cybersecurity firm found that the PRC has been conducting persistent, offensive operations against many African and Latin American states, even launching a special cross-industry working group to monitor PRC activities in the Global South.41 Chinese cybersecurity companies operating in those markets have not drawn similar attention to those operations.

But China’s network devices and cybersecurity companies don’t just facilitate surveillance, collect data for better defense, or offer a potential offensive advantage; they can also be used to shore up relationships between governments and provide Beijing an avenue for influence. The Wall Street Journal exposed how Huawei technicians helped Ugandan security services track political opponents of the government.42 China’s government and its companies support such operations elsewhere, too. One source alleged that PRC intelligence officers were involved in cybersecurity programs of the UAE government, including offensive hacking and collection for the security services.43 The closeness of the relationship is apparent in other ways, too: the UAE is reportedly allowing China’s military to build a naval facility, jeopardizing the longevity of US facilities in the area and tarnishing the UAE’s relationship with the United States.44

Providing other nondemocratic governments with offensive services and capabilities allows China to form close relationships with regimes whose primary goal, like the CCP’s, is to maintain the current government’s hold on power. In illiberal democracies, such cooperation helps Beijing expand its influence and provides backsliding governments with capabilities they would not otherwise have.

China is plainly invested in the success of many other nondemocratic governments. Around the world, Chinese state-owned enterprises and private companies have inked deals in extractive industries worth billions of dollars. Many of these deals, say for mining copper or rare earth elements, provide critical inputs to China’s manufacturing capacity—they are the lifeblood of many industries, from batteries to semiconductors.45 In countries without strong rule of law, continued access to mining rights may depend on the governments that signed and approved those operations staying in power. China is already suffering from such abrogation of agreements in Mexico, where the country’s president nationalized its lithium deposits.46 Countries where China has significant interests, like the Democratic Republic of the Congo, are also considering nationalizing such assets.47 Close relationships with political elites, bolstered by agreements that provide political security, make it more difficult for those elites to renege on their contracts—or lose power to someone else who might.

China cannot currently project military power around the world to enforce contracts or compel other governments. In lieu of a blue-water navy, China offers what essentially amounts to political security services by censoring internet content, monitoring dissidents, and hacking political opponents—and a way to align the interests of other authoritarian governments with its own. If a political leader feels that China is a guarantor of their rule, they are much more likely to side with Beijing on matters big and small. A recent series of events in the Solomon Islands provides a portrait of what this can look like.

Case studies in China’s “shared future”

The saga surrounding the Solomon Islands provides a good example of China’s model for internet governance and the reasons for its adoption. 

Over the course of 2022, the international community watched as the Solomon Islands vacillated in its course and its statements, and prevaricated about secret commitments to build a naval base for China. After a draft agreement for the Solomon Islands to host the People’s Liberation Army Navy (PLAN), the navy of the CCP’s military, was leaked to the press in March 2022, representatives of the Solomon Islands stated that the agreement would not allow PLA military bases.48 Senior delegations from both Australia and the United States rushed to meet with representatives of the Pacific Island nation.49 Even opposition leaders in the Solomon Islands—who were surprised by the leaked documents—agreed that claims of PLA military bases should not be taken at face value.50 The back-and-forth among the Solomon Islands’ political parties worried China. In May 2022, a Chinese hacking team breached the Solomon Islands’ government systems, likely to assess the future of the agreement in the face of the island nation’s denials.51

But the denials only bought Solomon Islands Prime Minister Manasseh Sogavare more time. In August, the ruling party introduced a bill to delay elections from May 2023 to December of that year.52 Shortly thereafter, the Solomon Islands announced a deal to purchase 161 Huawei telecom towers financed by the Export-Import Bank of China.53 (The deal came just four years after Australia had successfully prevented the Solomon Islands from partnering with Huawei to lay undersea cables providing internet access to the island nation.)54 In October 2022, the foreign press reported that the Solomon Islands had sent police to China for training.55 Local contacts in the security services may prove useful for the PRC: a provision of the draft deal leaked in March 2022 allows PLA service members to travel off base in the event of “social unrest,”56 and such contacts could facilitate interventions in a political crisis on behalf of PM Sogavare or his successor. In the summer of 2023, China and the Solomon Islands signed an agreement expanding cooperation on cybersecurity and policing.57

To recap, in a single year the Solomon Islands agreed to host a PLAN base, delayed an election to the benefit of Beijing’s friend, sent security services to train in the PRC, and rolled out PRC-made telecommunications equipment that can facilitate surveillance of political opponents. In the international system the CCP seeks, one that normalizes the censorship of political opponents and makes it a crime to disseminate information critical of authoritarian regimes, the sale of censorship as a service directly translates into the power to influence domestic politics in other nations. If there were a single case study to sell China’s version of internet governance to nascent authoritarian regimes around the world, it would be the Solomon Islands.


For countries with established authoritarian regimes, buying into China’s vision of internet governance and control is less about delaying elections and buying Huawei cell towers, and more about the transfer of expertise and knowledge of how to repress more effectively. Already convinced of the merits of China’s vision, these governments lack the expertise and technical capabilities to implement their shared vision of control over the internet.

Despite its capable but sometimes blunder-prone intelligence services, Russia was recently found to be soliciting technical expertise and training from China on how to better control its domestic internet content.58 Documents obtained by Radio Free Europe/Radio Liberty detailed how Russian government officials met with teams from the Cyberspace Administration of China in 2017 and 2019 to discuss how to crack down on virtual private networks, messaging apps, and online content. Russian officials even went so far as to request that a Russian team visit China to better understand how China’s Great Firewall works and how to “form a positive image” of Russia on the domestic and foreign internet.59 The leaked documents align with what the PRC’s policy document already details:

Since 2016, they have co-hosted five China-Russia Internet Media Forum[s] to strengthen new media exchanges and cooperation between the two sides. Through the Sino-Russian Information Security Consultation Mechanism, they have constantly enhanced their coordination and cooperation on information security.

The two countries formalized the agreement that served as the basis for their cooperation on the sidelines of the World Internet Conference in 2019.60 They could not have picked a better venue to signify what China’s Community with a Shared Future in Cyberspace policy would mean for the world. 

The Solomon Islands and Russia neatly capture the spectrum of countries that might be most interested in China’s vision for the global internet. At each step along the spectrum, China has technical capabilities, software, services, and training it can offer to regimes from Borneo to Benin. 

The chart below visualizes the spectrum of countries that could be most interested in implementing China’s Community with a Shared Future in Cyberspace.61

Figure 1: PRC tech influence vs. democracy index score

Sources: Data from “China Index 2022: Measuring PRC Influence Around the Globe,” Doublethink Lab and China In The World Lab, https://china-index.io/; and “The World’s Most, and Least, Democratic Countries in 2022,” Economist, February 1, 2023, https://www.economist.com/graphic-detail/2023/02/01/the-worlds-most-and-least-democratic-countries-in-2022

By combining data from the Economist Democracy Index (a proxy for a country’s adherence to democratic norms and institutions) and Doublethink Lab’s China Index for PRC technology influence (limited to eighty countries and a proxy for a country’s exposure to, and integration of, PRC technology in its networks and services), the chart places countries with low democracy scores and significant PRC technology influence in the bottom right. Based on this chart, Pakistan is the country most likely to support the Shared Future concept. Indeed, Pakistan has its own research center on the “Community with a Shared Future” concept.62 The research center is hosted by the Communication University of China, which works closely with the CCP’s International Liaison Department, responsible for maintaining good relationships with foreign political parties.
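For readers curious how such a chart comes together, the sketch below shows one way to combine the two indices, assuming each has been exported as a CSV of country-level scores. It is a minimal illustrative sketch, not the authors’ actual methodology: the file names, column names, and structure are hypothetical placeholders rather than the published datasets.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical inputs, one row per country:
#   democracy_index_2022.csv  -> columns: country, democracy_score (0-10 scale)
#   china_index_tech_2022.csv -> columns: country, tech_influence (higher = more PRC tech influence)
democracy = pd.read_csv("democracy_index_2022.csv")
china_tech = pd.read_csv("china_index_tech_2022.csv")

# Keep only the roughly eighty countries covered by both indices.
merged = democracy.merge(china_tech, on="country", how="inner")

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(merged["tech_influence"], merged["democracy_score"])
for _, row in merged.iterrows():
    # Label each point so the bottom-right cluster (low democracy, high PRC influence) stands out.
    ax.annotate(row["country"], (row["tech_influence"], row["democracy_score"]), fontsize=7)

ax.set_xlabel("PRC technology influence (China Index)")
ax.set_ylabel("Democracy Index score")
ax.set_title("PRC tech influence vs. democracy index score")
plt.tight_layout()
plt.show()

Countries landing in the bottom-right of the resulting plot, pairing low democracy scores with heavy exposure to PRC technology, are the ones this analysis flags as the likeliest adopters of the Shared Future approach.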

Internet conference goes prime time

In 2022, the Wuzhen World Internet Conference got an upgrade and a name change: the annual conference became a permanent organization based in Beijing, and the summit continues as that organization’s event, now called the World Internet Conference (WIC). Content from all previous Wuzhen conferences plasters the new organization’s website.63

An odd collection of six entities founded the new WIC organization: the Groupe Speciale Mobile Association (GSMA), a mobile industry association; the China Internet Network Information Center (CNNIC), which is responsible for China’s top-level .cn domain and IPv6 rollout, among other functions; CNCERT, mentioned above; Alibaba; Tencent; and Zhejiang Labs.64 Another report by the author connects the last organization, Zhejiang Labs, to research on AI for cybersecurity and some oversight by members of the PLA defense establishment.65

Though the Wuzhen iteration of the conference also included components of competition for technical innovation and research, the new collection of organizations overseeing WIC suggests it will focus more on promoting the fabric of the internet—hardware, software, and services—made by PRC firms. China’s largest tech companies, including Alibaba and Tencent, stand to benefit from China’s vision for global internet governance if the PRC can convince other countries to support its aims (and choose PRC firms to host their data in the process). Any policy changes tied to the elevation of the conference will become apparent over the coming years. For now, WIC maintains the mission and goals of the Wuzhen conference.

Conclusion

China’s vision for the internet is really a vision for global norms around political speech, political oppression, and the proliferation of tools and capabilities that facilitate surveillance. Publications written by current and former PRC government officials on China’s “Shared Future for Humanity in Cyberspace” argue that the role of the state has been ignored until now, that each state can determine what is allowed on its internet (the idea of cyber sovereignty), and that the political interests of the state are the core value driving decision-making. Dressed up in language about the future of humanity, China’s vision for the internet is one that lets authoritarians extract value from the interconnectedness of today’s economy while limiting risks to their regimes’ stability.

China is likely to pursue agreements on cybersecurity and internet content control with regimes where it stands to lose the most if the government changes hands. China’s grip on the critical minerals market is only as strong as its partners’ grip on power. In many authoritarian, resource-rich countries, a change of government could mean the renegotiation of contracts for access to natural resources or their outright nationalization—jeopardizing China’s access to important industrial inputs. Although internet censorship and domestic surveillance capabilities do not guarantee that an authoritarian government will stay in power, they do improve its odds. China lacks a globally capable navy to project power and enforce contracts negotiated with former governments, so keeping current signatories in power is China’s best bet.

China will not have to work hard to promote its vision for internet governance in much of the world. Instead of China advocating for a new system that countries agree to adopt and then implement, the causality is reversed. Authoritarian regimes that seek the economic benefits of widespread internet access are more apt to deploy PRC-made systems that facilitate mass surveillance, thus reducing the risks posed by increased connectivity. China’s tech companies are well positioned to sell these goods, as their domestic market has forced them to perfect the capabilities of oppression.66 The example of Russia’s cooperation with, and learning from, China demonstrates what the demand signal from other countries might look like. Elsewhere, secret agreements between national CERTs could facilitate access that allows for greater intelligence collection and visibility. Many Arabian Gulf countries already deploy PRC-made telecom equipment and hire PRC cybersecurity firms to do sensitive work. As the world’s autocrats roll out China’s technology, their countries will be added to the brochures of firms advertising internet connectivity, surveillance, and censorship services to their peers. Each nation buying into China’s Community with a Shared Future may well become a case study in the successful use of internet connectivity without increased political risk: a world with fewer Arab Springs or “color revolutions.”

About the author

Dakota Cary is a nonresident fellow at the Atlantic Council’s Global China Hub and a consultant at SentinelOne. He focuses on China’s efforts to develop its hacking capabilities.

The author extends special thanks to Nadège Rolland, Tuvia Gering, Tom Hegel, Kenton Thibaut, and Kitsch Liao for their edits and contributions. 

1    “China’s Internet White Paper,” China.org.cn, last modified June 8, 2010, accessed January 24, 2022, https://web.archive.org/web/20220124005101/http:/www.china.org.cn/government/whitepaper/2010-06/08/content_20207978.htm.
2    Dakota Cary and Kristin Del Rosso, “Sleight of Hand: How China Weaponizes Software Vulnerability,” Atlantic Council, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/sleight-of-hand-how-china-weaponizes-software-vulnerability/.
3    I assume that a process for counterintelligence and operational deconfliction exists within the PRC security services, particularly for the more than one hundred companies that support the civilian intelligence service. Other mature countries have such processes and I graciously extend that competency to China.
4    Liu Zheng, “Foreign Experts Keen on Interconnected China Market,” China Daily, 2014, https://www.wuzhenwic.org/2014-11/20/c_548230.htm.
5    Catherine Shu, “China Tried to Get World Internet Conference Attendees to Ratify This Ridiculous Draft Declaration,” TechCrunch, 2014, https://techcrunch.com/2014/11/20/worldinternetconference-declaration/.
6    Xi Jinping, “Remarks by H.E. Xi Jinping President of the People’s Republic of China at the Opening Ceremony of the Second World Internet Conference,” Ministry of Foreign Affairs of the People’s Republic of China, December 24, 2015, https://www.fmprc.gov.cn/eng/wjdt_665385/zyjh_665391/201512/t20151224_678467.html.
7    State Council Information Office of the People’s Republic of China, “Full Text: Jointly Build a Community with a Shared Future in Cyberspace,” November 7, 2022, http://english.scio.gov.cn/whitepapers/2022-11/07/content_78505694.htm. At the time, Xi was building on the nascent “shared future for humanity” concept introduced at the Eighteenth Party Congress in 2012; see Xinhua News Agency, “A Community of Shared Future for All Humankind,” Commentary, March 20, 2017, http://www.xinhuanet.com/english/2017-03/20/c_136142216.htm. However, state media has since claimed that the “shared future” concept was launched during a March 2013 event that Xi participated in while visiting Moscow; see Central Cyberspace Affairs Commission of the People’s Republic of China, “共行天下大道 共创美好未来——写在习近平主席提出构建人类命运共同体理念十周年之际,” PRC, March 24, 2023, http://www.cac.gov.cn/2023-03/24/c_1681297761772755.htm. The party rolled out the concept as part of its foreign policy and even added its language to the constitution in 2018; see N. Rolland [@RollandNadege], “My latest for @ChinaBriefJT on China’s ‘community with a shared future for humanity,’ which is BTW now enshrined in PRC Constitution,” Twitter (now X), February 26, 2018, https://twitter.com/RollandNadege/status/968152657226555392, as also seen in N. Rolland, ed., An Emerging China-Centric Order: China’s Vision for a New World Order in Practice, National Bureau of Asian Research, 2020, https://www.nbr.org/wp-content/uploads/pdfs/publications/sr87_aug2020.pdf.
8    The PRC has even republished the 2015 document with updated statistics every few years, most recently in 2022; see State Council Information Office, “Full Text: Jointly Build a Community with a Shared Future in Cyberspace.”
9    US Director of National Intelligence (DNI), “Digital Repression Growing Globally, Threatening Freedoms,” [PDF file],  ODNI, April 24, 2023, https://www.dni.gov/files/ODNI/documents/assessments/NIC-Declassified-Assessment-Digital-Repression-Growing-April2023.pdf.
10    E. Kania et al., “China’s Strategic Thinking on Building Power in Cyberspace,” New America, September 25, 2017, https://www.newamerica.org/cybersecurity-initiative/blog/chinas-strategic-thinking-building-power-cyberspace/.
11    National Computer Virus Emergency Response Center, “‘Empire of Hacking’: The U.S. Central Intelligence Agency—Part I,” [PDF file], May 4, 2023, https://web.archive.org/web/20230530221200/http:/gb.china-embassy.gov.cn/eng/PressandMedia/Spokepersons/202305/P020230508664391507653.pdf.
12    Occasionally, translations refer to this as “a Community with a Shared Destiny [for Mankind]” or “Shared Future for Humanity in Cyberspace.” See State Council Information Office of the People’s Republic of China, “Full text: Jointly Build a Community with a Shared Future in Cyberspace.”
13    Thanks to Nadege Rolland for her keen insight. 
14    Xi, “Remarks by H.E. Xi Jinping President of the People’s Republic of China.” 
15    “China’s Internet White Paper,” China.org.cn. Thanks to Tuvia Gering for flagging this.
16    W. C. Hannas, J. Mulvenon, and A. B. Puglisi, Chinese Industrial Espionage: Technology Acquisition and Military Modernisation (Abingdon, United Kingdom: Routledge, 2013), https://doi.org/10.4324/9780203630174.
17    Institute for a Community with Shared Future, “《网络暴力现象治理报告》 [Governance Report on the Phenomenon of Internet Violence],” Communication University of China, July 1, 2022, https://web.archive.org/web/20221205001148/https:/icsf.cuc.edu.cn/2022/0701/c6043a194580/page.htm; and Institute for a Community with Shared Future, “Full Text《网络暴力现象治理报告》[Governance Report on the Phenomenon of Internet Violence],” Communication University of China, July 1, 2022, https://archive.ph/B741D.
18    Institute for a Community with Shared Future, “Understanding the Global Cyberspace Development and Governance Trends to Promote the Construction of a Cyberspace Community with a Shared Future,” Communication University of China, September 9, 2020, www.archive.ph/7XQyX.
19    Xi, “Remarks by H.E. Xi Jinping President of the People’s Republic of China.”
20    R. Creemers, P. Triolo, and G. Webster, “Translation: China’s New Top Internet Official Lays Out Agenda for Party Control Online,” New America, September 24, 2018, https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinas-new-top-internet-official-lays-out-agenda-for-party-control-online/.
21    M. Schmitt, “The Sixth United Nations GGE and International Law in Cyberspace,” Just Security (forum), June 10, 2021, https://www.justsecurity.org/76864/the-sixth-united-nations-gge-and-international-law-in-cyberspace/; and S. Sabin, “The UN Doesn’t Know How to Define Cybercrime,” Axios Codebook (newsletter), January 10, 2023, https://www.axios.com/newsletters/axios-codebook-e4388c1d-d782-4743-b96f-c228cdc7baa1.html.
22    A. Martin, “China Proposes UN Treaty Criminalizes ‘Dissemination of False Information,’ ” Record, January 17, 2023, https://web.archive.org/web/20230118135457/https:/therecord.media/china-proposes-un-treaty-criminalizing-dissemination-of-false-information/.
23    R. Serabian and L. Foster, “Pro-PRC Influence Campaign Expands to Dozens of Social Media Platforms, Websites, and Forums in at Least Seven Languages, Attempted to Physically Mobilize Protesters in the U.S.,” Mandiant, September 7, 2021, https://www.mandiant.com/resources/blog/pro-prc-influence-campaign-expands-dozens-social-media-platforms-websites-and-forums; and G. Eady et al., “Exposure to the Russian Internet Research Agency Foreign Influence Campaign on Twitter in the 2016 US Election and Its Relationship to Attitudes and Voting Behavior,” Nature Communications 14, no. 62 (2023), https://www.nature.com/articles/s41467-022-35576-9#MOESM1.
24    State Council of Information Office, PRC, “LIVE: Press Conference on White Paper on Jointly Building Community with Shared Future in Cyberspace,” New China TV, streamed live November 6, 2022, YouTube video, https://www.youtube.com/watch?v=hBYbjnSeLX0.
25    China Daily, “Jointly Build a Community with a Shared Future in Cyberspace,” November 8, 2022, https://archive.ph/ch3LP+.
26    Access Now, “Internet Shutdowns in 2022,” 2023, https://www.accessnow.org/internet-shutdowns-2022/.
27    K. Drinhausen and J. Lee, “CCP 2021: Smart Governance, Cyber Sovereignty, and Tech Supremacy,” Mercator Institute for China Studies (MERICS), June 15, 2021, https://merics.org/en/ccp-2021-smart-governance-cyber-sovereignty-and-tech-supremacy.
28    N. Attrill and A. Fritz, “China’s Cyber Vision: How the Cyberspace Administration of China Is Building a New Consensus on Global Internet Governance,” Australian Strategic Policy Institute, November 24, 2021, https://www.aspi.org.au/report/chinas-cyber-vision-how-cyberspace-administration-china-building-new-consensus-global.
29    S. Hoffman, “Potential Chinese influence on African IT infrastructure,” Censys, March 8, 2023,   https://censys.com/potential-chinese-influence-on-african-it-infrastructure/.
30    Xinhua, “Full Text: International Strategy of Cooperation on Cyberspace,” March 1, 2017, https://perma.cc/GDY6-6ZF8.
31    Prensa Latina, “Cuba and China Sign Agreement on Cybersecurity,” 2023, April 3, 2023,  https://www.plenglish.com/news/2023/04/03/cuba-and-china-sign-agreement-on-cybersecurity/.
32    China Daily, “Jointly Build.” CNCERT is a government-organized nongovernmental organization, not a direct government agency. It reports incidents and software vulnerabilities to PRC government agencies, including the 867-917 National Security Platform, and a couple of Ministry of Public Security Bureaus. See About Us (archive.vn).
33    When asked for records of these international partners, CNCERT directed the author back to the home page of the organization’s website.
35    Asian Development Bank, “Information on the Export-Import Bank of China,” n.d., https://www.adb.org/sites/default/files/linked-documents/46058-002-sd-04.pdf.
36    D. Cary and K. Del Rosso, Sleight of Hand: How China Weaponizes Software Vulnerabilities, Atlantic Council, September 6, 2023,  https://www.atlanticcouncil.org/in-depth-research-reports/report/sleight-of-hand-how-china-weaponizes-software-vulnerability/ 
37    Cary and Del Rosso, Sleight of Hand.
38    I assume that a process for counterintelligence and operational deconfliction exists with the PRC security services. Other mature countries have such processes and I graciously extend that competency to China.
39    Xinhua, “习近平对网络安全和信息化工作作出重要指示,” July 15, 2023, https://archive.ph/GkqnS.
40    Chen Yixin, Secretary of the Party Committee and Minister of the Ministry of State Security, “Strengthening National Security Governance in the Digital Era” [国家安全部党委书记、部长陈一新:加强数字时代的国家安全治理], China Internet Information Journal (中国网信), September 26, 2023, via 中国共产党新闻网 (archive.ph).
41    M. Hill, “China’s Offensive Cyber Operations Support Soft Power Agenda in Africa,” CSO Online, September 21, 2023, https://www.csoonline.com/article/652934/chinas-offensive-cyber-operations-support-soft-power-agenda-in-africa.html; and T. Hegel, “Cyber Soft Power | China’s Continental Takeover,” SentinelOne, September 21, 2023, https://www.sentinelone.com/labs/cyber-soft-power-chinas-continental-takeover/.
42    J. Parkinson, N. Bariyo, and J. Chin, “Huawei Technicians Helped African Governments Spy on Political Opponents,” Wall Street Journal, August 15, 2019, https://archive.ph/Xtwl1.
43    Interview conducted in confidentiality; the name of the interviewee is withheld by mutual agreement.
44    J. Hudson, E. Nakashima, and L. Sly, “Buildup Resumed at Suspected Chinese Military Site in UAE, Leak Says,”  Washington Post, April 26, 2023, https://www.washingtonpost.com/national-security/2023/04/26/chinese-military-base-uae/.
45    Congressional Research Service, “Rare Earth Elements: The Global Supply Chain,” December 16, 2013,   https://crsreports.congress.gov/product/pdf/R/R41347/20; M. Humphries, “China’s Mineral Industry and U.S. Access to Strategic and Critical Minerals: Issues for Congress,” Congressional Research Service, March 20, 2015,  https://sgp.fas.org/crs/row/R43864.pdf; and the White House, “Building Resilient Supply Chains, Revitalizing American Manufacturing, and Fostering Broad-based Growth: 100-Day Reviews Under Executive Order 14017,”  June 2021, https://www.whitehouse.gov/wp-content/uploads/2021/06/100-day-supply-chain-review-report.pdf.
47    “The Green Revolution Will Stall without Latin America’s Lithium,” Economist, May 2, 2023, https://www.economist.com/the-americas/2023/05/02/the-green-revolution-will-stall-without-latin-americas-lithium.
48    N. Fildes and K. Hille, “Beijing Closes in on Security Pact That Will Allow Chinese Troops in Solomon Islands,”  Financial Times, March 24, 2022, https://archive.ph/X5a4h; and Associated Press, “Solomon Islands Says China Security Deal Won’t Include Military Base,” via National Public Radio, April 1, 2022, https://www.npr.org/2022/04/01/1090184438/solomon-islands-says-china-deal-wont-include-military-base
49    N. Fildes, “Australian Minister Flies to Solomon Islands for Urgent Talks on China Pact,” Financial Times, April 12, 2022, https://www.ft.com/content/9da02244-2a10-4f18-a5c5-e88b14a2530b; and K. Lyons and D. Wickham, “The Deal That Shocked the World: Inside the China-Solomons Security Pact,” Guardian, April 20, 2022, https://www.theguardian.com/world/2022/apr/20/the-deal-that-shocked-the-world-inside-the-china-solomons-security-pact.
50    N. Fildes, “Australian PM Welcomes Solomon Islands Denial of Chinese Base Reports,” Financial Times, July 14, 2022, https://www.ft.com/content/789340da-8c1a-4aff-8cf6-276c97c9f200.
51    Microsoft, Microsoft Digital Defense Report 2022, 2022,  https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5bUvv.
52    Reuters, “Bill to Delay Solomon Islands Election until December 2023 Prompts Concern,” in Guardian, August 9, 2022, https://www.theguardian.com/world/2022/aug/09/bill-to-delay-solomon-islands-election-until-december-2023-prompts-concern; and D. Cave, “Solomon Islands’ Leader, a Friend of China, Gets an Election Delayed,” New York Times, September 8, 2022,  https://www.nytimes.com/2022/09/08/world/asia/solomon-islands-election-delay.html.
53    N. Fildes, “China Funds Huawei’s Solomon Islands Deal in Sign of Deepening Ties,” Financial Times, August 19, 2022, https://archive.ph/R47T0.
54    “Huawei Marine Signs Submarine Cable Contract in Solomon Islands,” Huawei, July 2017, https://web.archive.org/web/20190129114026/https:/www.huawei.com/en/press-events/news/2017/7/HuaweiMarine-Submarine-Cable-Solomon; and W. Qiu, “Coral Sea Cable System Overview,” Submarine Cable Networks, December 13, 2019, https://archive.ph/E049b.
55    Kirsty Needham, “Solomon Island Police Officers Head to China for Training,” Reuters, October 12, 2022,  https://www.reuters.com/world/asia-pacific/solomon-island-police-officers-head-china-training-2022-10-12/.
56    Fildes and Hille, “Beijing Closes in on Security Pact.”
57    Nikkei Asia, “Solomons Says China Will Assist in Cyber, Community Policing,” Nikkei, July 17, 2023, https://archive.ph/90diZ.
58    D. Belovodyev, A. Soshnikov, and R. Standish, “Exclusive: Leaked Files Show China and Russia Sharing Tactics on Internet Control, Censorship,” Radio Free Europe/Radio Liberty, April 5, 2023, https://www.rferl.org/a/russia-china-internet-censorship-collaboration/32350263.html.
59    Belovodyev, Soshnikov, and Standish, “Exclusive: Leaked Files.”
60    Belovodyev, Soshnikov, and Standish, “Exclusive: Leaked Files.”
61    Thanks to Tuvia Gering for this idea.
62    “〖转载〗人类命运共同体巴基斯坦研究中心主任哈立德·阿克拉姆接受光明日报采访:中巴关系“比山高、比蜜甜”名副其实,” Communication University of China, June 4, 2021, https://comsfuture.cuc.edu.cn/2021/1027/c7810a188141/pagem.htm.
63    Office of the Central Cyberspace Affairs Commission, “我国网络空间国际交流合作领域发展成就与变革,” China Internet Information Journal, December 30, 2023, www.archive.vn/tCnEa; D. Bandurski, “Taking China’s Global Cyber Body to Task,” China Media Project, 2023, https://chinamediaproject.org/2022/07/14/taking-chinas-global-cyber-body-to-task/; and Xinhua, “世界互联网大会成立,” Gov.cn, July 12, 2022,  https://web.archive.org/web/20220714134027/http:/www.gov.cn/xinwen/2022-07/12/content_5700692.htm.
64    World Internet Conference, “Introduction,” WIC website, August 31, 2022, www.archive.ph/Axmuc.
65    Dakota Cary, “Downrange: A Survey of China’s Cyber Ranges,” Issue Brief, Center for Security and Emerging Technology, September 2022, https://doi.org/10.51593/2021CA013.
66    Drinhausen and Lee, “CCP 2021: Smart Governance, Cyber Sovereignty, and Tech Supremacy.”

The post Community watch: China’s vision for the future of the internet appeared first on Atlantic Council.

]]>
The 5×5—Veteran perspectives on cyber workforce development https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-veteran-perspectives-on-cyber-workforce-development/ Wed, 29 Nov 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=707775 In honor of National Military Veterans and Families Month, a group of veterans discuss their transitions from the military to the cyber workforce and suggest ways to improve the process for others. 

The post The 5×5—Veteran perspectives on cyber workforce development appeared first on Atlantic Council.

]]>
This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

On November 3, the Atlantic Council’s Cyber Statecraft Initiative hosted “Joining forces: Veteran perspectives on cyber and tech workforce development” to discuss transitioning veterans interested in careers in cybersecurity and cyber policy. The veteran community is diverse, but the transition out of uniform into civilian work is a well-recognized and widely challenging shift for servicemembers and their families alike.

In July 2023, the Biden administration released the National Cyber Workforce and Education Strategy, aimed at developing and maintaining the United States’ cybersecurity advantage through a skilled workforce. The Strategy highlights the importance of attracting veterans to careers in cybersecurity, given that the community is composed of “diverse, and technologically skilled … people who have served the country and are committed to mission success.” Enhancing career pathways for servicemembers and the veteran community to join the cyber workforce can go a long way toward meeting the urgent demand for cyber talent while providing job opportunities to those aspiring to meaningful careers beyond the military.

To continue these conversations, and in honor of National Military Veterans and Families Month, we brought together a group of veterans to discuss their own transitions from the military to the cyber workforce and suggest ways to improve the process for others. 

#1 What are the barriers to entry for veterans seeking careers in cybersecurity? What is one way for hiring managers to overcome or mitigate them? 

Nicholas Andersen, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; chief operating officer, Invictus International Consulting; former US Marine Corps

“A typical barrier for veterans seeking a career in this field is that hiring managers may not be familiar with the missions throughout the military cyber community; they may only focus on experiences that are like those of typical applicants. We sometimes see the same challenges with traditional pathways to technology jobs, where managers are more inclined to hire applicants with degrees. Hiring managers need to shift their thinking from traditional qualification to focusing on competencies. Hiring managers should be thinking about how they can find the most competent people to fill these critical roles within their companies and what skills do they need to have?” 

Cait Conley, senior advisor to the director, Cybersecurity and Infrastructure Security Agency; former US Army

“Leaving the military and starting a new career either in the private sector or in federal or state government can be an intimidating (and outright confusing) process, especially if the military has been the servicemember’s only career experience. Hiring managers and leaders can make a huge difference here. They can show incoming veteran teammates that joining the team not only matters but is a priority. They can put in extra time to explain the application process and help veterans seeking to join their team navigate any questions or challenges that may come up during the process.” 

Steve Luczynski, senior manager, Accenture Federal Services; chairman of the board, Aerospace Village; former US Air Force

“One challenge that is not necessarily specific to cybersecurity is translating military experience to corporate roles, especially when cybersecurity job descriptions often have a difficult time adequately capturing the nature of the work to be done. Hiring managers and human resources teams would benefit from ensuring that they have someone on their teams, or easily accessible, to read resumes and provide explanations for military roles. I know servicemembers invest significant effort in attempting to remove jargon from their resumes, but that additional perspective from someone who shares their background ensures valuable skills are not lost simply because of an imperfect resume.” 

Brandon Pugh, director, cybersecurity and emerging threats, R Street Institute; US Army

“The transition for servicemembers into most civilian career fields presents challenges, and cybersecurity is no exception. It is imperative for servicemembers and veterans to learn from and network with those who have successfully transitioned before them and with those who are working in the field already. Hiring managers play a key role and should strive to proactively create a culture internally of hiring and supporting veterans, including linking job seekers to veterans at their organizations. I can attest firsthand that many individuals in the cyber field are willing to be a resource, and veterans should seek mentors early on in their job search.” 

Maggie Smith, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; director, Cyber Project, Irregular Warfare Initiative; US Army

“A major barrier for many veterans is higher education and credentialing. While the military provides funding opportunities to pursue a degree while serving, access to those opportunities is often difficult—operational tempo, field training requirements, and other time constraints often prevent or deter a servicemember from taking classes. Additionally, most civilian certification opportunities are based on work role, meaning servicemembers who are not in cyber-related career fields are unlikely to encounter opportunities to earn credentials unless they pursue them on their own time—which, as discussed above, is often unpredictable and in short supply! I have encountered lots of soldiers in non-cyber military occupational specialties with an affinity for computers, networking, and technology but their lack of job experience in a cyber field, lack of any credentials, and a high school diploma prevent them from pursuing cybersecurity as a career. Expanding apprenticeship programs and revisiting job application requirements, as not all roles require a four-year degree, could get more veterans into cybersecurity.” 

#2 What kinds of military activities provide relevant experience for cybersecurity roles? 

Andersen: “I have seen plenty of non-technical veterans who transitioned to technical fields after they left active duty, but those with experience in cybersecurity, information technology, and intelligence make up the majority of the people in these roles. Servicemembers should take full advantage of tuition assistance and local technology training classes while they are still in the military! This does not cost them anything but time and can lead to any servicemember transitioning into a technology role if that is his or her desire.” 

Conley: “Today, technology is a fundamental factor in warfare. Regardless of branch, military experience provides critical thinking and risk management skills essential to succeeding in any cybersecurity role. From day one of basic training, servicemembers learn how to identify, assess, and manage risk—a foundational mental model for cybersecurity professionals. Servicemembers also learn how to lead teams under stressful conditions in operating environments where technical tools are as integral as the humans themselves. Servicemembers, sometimes without even realizing it, have experienced the operational integration of a myriad of technologies from communication platforms and electronic warfare sensors to satellite systems and machine learning data aggregation tools. Those perspectives can provide unique insights into understanding and mitigating risk in changing environments.” 

Luczynski: “Cybersecurity comprises a wide array of specializations in which high-level, broad governance and policy skills are more valuable in some domains than the deeply technical skills required in other domains. Security teams combining these diverse skillsets share the common need to prepare and then practice implementing response plans, which occurs often in the military. The ability to train in this manner, especially where open and honest after-action sessions can occur, is highly relevant and valuable in most cybersecurity roles.” 

Pugh: “Direct cyber experience while in uniform is very helpful when looking to transition to cyber roles in the civilian workforce, and servicemembers can have experiences that civilians do not from their service. It is important to realize, however, that individuals who have served in different fields are still valuable in cybersecurity, especially because servicemembers often are good at handling competing demands in high stress environments, are educated and/or have practical experience in professional settings, and often have security clearances already. These can all be beneficial in the cybersecurity field.” 

Smith: “This is a tricky question because it changes from service to service and, I would argue, every servicemember has a cybersecurity role to play! My own experience in the Army started when I enlisted in the Signal Corps and, later, I commissioned as an intelligence officer before becoming a cyber officer when the Army created the branch in 2014. I consider those three branches the Army’s trifecta—each has work roles that will result in an attractive resume. However, within every branch, there are opportunities to gain skills that technology companies and cybersecurity firms want: leadership, multi-tasking, curiosity, and mental agility. I think the challenge that many veterans face is translating their experience for the private sector so that companies can see their potential impact.” 

#3 What are some positive US government initiatives to assist veterans in entering the cyber workforce? Where is one place for the US government to improve on this front? 

Andersen: “Number one on the list must be the Department of Defense’s (DOD) SkillBridge Program, which is unmatched for the opportunities it provides to get firsthand experience with companies and have the military safety net while servicemembers consider their next career move. The generic Transition Assistance Program will not prepare servicemembers to exit the military successfully. The government needs to focus more on transitioning back to civilian life as a simple acknowledgement that the military is still part of regular society. Educating oneself, building savings, and addressing health needs are not tasks to begin at the end of a period of service. Those are tasks that are critical to making certain that our servicemembers return to civilian life ready to lead within communities and contribute to a different mission.” 

Conley: “While there is always room for improvement, I am incredibly proud of the work that the Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Homeland Security have done to promote cybersecurity learning for the veteran community. One of the most impactful ways that CISA contributes to helping transitioning veterans is by operating and maintaining the National Initiative for Cybersecurity Careers and Studies (NICCS), an online training initiative and portal. NICCS offers more than 850 hours of course content on a variety of important cybersecurity topics such as cloud security, ethical hacking and surveillance, risk management, malware analysis, and more. While it does not come with any formal cybersecurity certification, it does provide critical knowledge and insight for veterans to feel confident about their foundational understanding of cybersecurity.” 

Luczynski: “The DOD’s SkillBridge career transition program is an incredible partnership between industry and servicemembers of all ranks and experience levels as they transition out of the military. In short, it is an internship where servicemembers can experience working outside the military as they look for their next role. Continuing to improve awareness among servicemembers about these opportunities and increase the industry participants will ensure that this program is a continued success.” 

Pugh: “Over time, the military has put more emphasis on assisting servicemembers with their transitions, including facilitating opportunities for them to work with industry and to pursue cyber certifications while in uniform. One challenge is that there are many programs and opportunities to assist with transition run by the government, nonprofit organizations, and industry. Knowledge of these programs and knowing where to start is not always straightforward, which is one area in which the government and military can do better.” 

Smith: “The new-ish SkillBridge program provides transitioning servicemembers with a chance to gain civilian work experience—any field, not just cybersecurity—through industry training, apprenticeships, or internships over their last 180 days of service. Frankly, I am looking forward to taking advantage of this program when I retire in a couple years; it is a chance to spread my wings and test out a company or try something completely new. Even with SkillBridge, I think the military can do more. The Army is experimenting with a pilot program to allow soldiers to submit their retirement paperwork two full years before their anticipated end of service. That allows soldiers more time to plan for their life in retirement, but it is difficult to provide the same timeline for soldiers leaving service before they hit twenty years. Focusing on mid-career transitions and providing junior enlisted members with additional resources, such as career counseling, college counseling and application assistance, and courses on Department of Veterans Affairs and financial benefits, could lead to better outcomes for veterans.”


#4 What is the biggest mistake you made (or avoided) in preparing for your transition from the military? 

Andersen: “The biggest mistake that I made was focusing on my own transition out of the Marine Corps as a series of boxes to be checked. Successfully entering the civilian workplace was highly dependent on networking and having a support system of people who have previously done it themselves. I almost ignored this critical piece for too long.” 

Conley: “I know a lot of veterans out there who struggle to find the same level of fulfillment in their career after the military, which sometimes leads them to question leaving the military in the first place. For me, after two decades in uniform with numerous deployments and over a decade in the special operations community, this was an important consideration when I looked at my next career choice. I knew that being part of a team with a mission focused on service and defending the Homeland was a necessity for me. That clarity helped me identify the best path forward for this new stage in my career. That is why I chose CISA. I know that I am not the only one either—veterans make up 40 percent of CISA’s workforce. Every day of my professional career—in or out of uniform—I have been excited to go to work because I know what I am doing makes a difference.” 

Luczynski: “I tended to focus on my role at the time and short-term goals. Shifting to a longer-term approach and investing the time to consider my options gave me the benefit of having more time to prepare. I developed a better understanding of where my experience could be best applied while fulfilling my family and personal goals.” 

Pugh: “I have been fortunate to serve in the military and now I am an active-duty military spouse. Before becoming a military spouse, I did not fully appreciate the unique employment challenges that military families face from their spouse’s military career caused by frequent moves and/or living in locations without the right job prospects. However, there are many opportunities for military spouses in cyber and many resources are available for them as well, along with some that are geared specifically toward spouses.” 

Smith: “So… I have less than one thousand days until I will retire so at this stage, my mistakes are still in the future! However, what I am doing now is working with a mentor to work towards retirement milestones, identify people, jobs, and work roles that I find interesting, and really think through my transition. My mentor currently has me reaching out to people to conduct information interviews to talk to them about their careers, gather information about their company, and things like that. I have also prioritized doing things like this 5×5 because I want to keep academia’s door open to me, and remaining engaged in research will benefit me in the long run. I know I will make mistakes, but I am working hard on my transition plan in the hopes that I can mitigate risk and identify hazards before it turns into a dumpster fire!” 

#5 What is the most important piece of advice you would share with a veteran interested in entering a career in cybersecurity or cyber policy? 

Andersen: “This is a field that is constantly shifting, and no one expert can rest on their laurels hoping that they will still be relevant in a few years’ time. Find a group of likeminded people that will push you to grow, and you will be surprised by how many rewarding experiences come your way. And if you are heading back to school using your GI Bill, make sure to join your local Cyber 9/12 Strategy Challenge team!” 

Conley: “Recognize and own your value. Military service has taught you to be a good teammate, put mission first, and always remember that values matter. This combination of grit, selflessness, and reliability are rare qualities—and invaluable assets for any high performing security team. Be proud of your service history and look forward to what more good you can do!” 

Luczynski: “Do not be afraid to ask for help! Reach out to your former supervisors and subordinates to learn what they do and what roles are available, review your resume, or help you grow your network. It does not matter that you have not spoken in a long time; that is understandable and easily fixed. I strive to put in as much energy toward helping folks now as so many did to help me during my own departure from the Air Force.” 

Pugh: “There are many paths one can take within the cyber field. Too often people think opportunities within cybersecurity are very technical and that a technical background is essential. While those roles exist and are needed, there are many other ways to work in the cyber field, including in policy, law, and education, among many others.” 

Smith: “I love this question because it presents the chance for me to champion the need for cybersecurity professionals with public policy experience and vice versa! I am a public policy nerd who happens to work in cyber—I started my Army career in an electronic maintenance shop repairing radios and later found myself getting my PhD in public policy as a cyber officer. One of my former students is currently doing a master’s degree at the Massachusetts Institute of Technology in technology and public policy—a match made in heaven! People often say that cybersecurity is a team sport, and I understand ‘team’ (and you will be hard-pressed to convince me otherwise) as a multidisciplinary team, composed of individuals with diverse backgrounds and skillsets coming together to craft a security strategy. Because humans are the ones who use technology, cybersecurity can never be just a technical field! However, cyber policy can never be just public policy. Just as cyberspace is the only domain of warfare that is totally dependent upon and spans the other domains of warfare (maritime, land, air, space) to exist, cyber policy is the only domain of policy that spans all other public policy domains (e.g., healthcare, education, transportation). Understanding of how technology works and its role in society is critical to crafting useful cyber policy.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Veteran perspectives on cyber workforce development appeared first on Atlantic Council.

The 5×5—The cybersecurity implications of artificial intelligence https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-cybersecurity-implications-of-artificial-intelligence/ Fri, 27 Oct 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=696721 A group of experts with diverse perspectives discusses the intersection of cybersecurity and artificial intelligence.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

The arrival of ChatGPT, a chat interface built atop OpenAI’s GPT-3.5 model, in November 2022 provoked a frenzy of interest and activity in artificial intelligence (AI) from consumers, investors, corporate leaders, and policymakers alike. Its demonstration of uncanny conversational abilities, and later the ability to write code, stoked the collective imagination, along with predictions about its likely impacts and integration into myriad technology systems and tasks.  

The history of the field of AI stretches back to the 1950s, and narrower machine learning models have been solving problems in prediction and analysis for nearly two decades. In fact, these models are already embedded in the cybersecurity lifecycle, most prominently in threat monitoring and detection. Yet, the emergence of the current generation of generative AI, powered by large language models, is producing markedly different capabilities from those of previous deep learning systems. Researchers are only beginning to explore the potential uses of generative AI systems in cybersecurity, as well as the potential threats arising from malign use or cyberattacks against generative AI systems themselves. 

With cybersecurity playing a significant role in recently announced voluntary commitments by leading AI companies, a sweeping Executive Order on AI expected next week, and leading AI companies allowing their products to be used to construct increasingly autonomous systems, a discussion about the intersection of generative AI and cybersecurity could not be timelier. To that end, we assembled a group with diverse perspectives to discuss the intersection of cybersecurity and artificial intelligence. 

#1 AI hype has risen and fallen in cycles with breakthrough achievements and paradigm shifts. How do large language models (LLM), and the associated hype wave, compare to previous AI paradigms? 

Harriet Farlow, chief executive officer and founder, Mileva Security Labs; PhD candidate, UNSW Canberra:  

“In my opinion, the excitement around large language models (LLMs) is similar [to excitement around past paradigm shifts] in that it showcases remarkable advancements in AI capabilities. It differs in that LLMs are significantly more powerful than the AI technologies of previous hype cycles. The concern I have with this hype—and I believe AI in general is already over-hyped—is that it gives the impression to non-practitioners that LLMs are the primary embodiment of AI. In reality, the natural language processing of LLMs is just one aspect of the myriad capabilities of AI, with other significant capabilities including computer vision and signal processing. My worry is that rapid adoption of AI and increasing trust in these systems, combined with the lack of awareness that AI systems can be hacked, means there are many productionized AI systems that are vulnerable to adversarial attack.”  

Tim Fist, fellow, technology & national security, Center for a New American Security:  

“While people’s excitement may have a similar character to previous AI ‘booms,’ such as in the 1960s, LLMs and other similar model architectures have some technical properties that together suggest the consequences of the current boom will be, to put it lightly, further reaching. These properties include task agnostic learning, in-context learning, and scaling. Unlike the AI models of yore, LLMs have impressive task performance in many domains at once—writing code, solving math problems, verbal reasoning—rather than one specific domain. Today’s ‘multimodal’ models are the next evolution of these capabilities, bringing the ability to understand and generate both natural language and images, with other modalities in the works. On top of their generality, once trained, LLMs can learn on the fly, allowing them to adapt to and perform reasonably well in novel contexts. LLMs and their multimodal cousins are AI architectures that can successfully leverage exponentially increasing amounts of computing power and data into greater and greater capabilities. This capacity means the basic recipe for more performance and generality is straightforward: just scale the inputs. This trend does not show any clear signs of slowing down.”  
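The in-context learning Fist describes can be made concrete without any training code: the same general-purpose model is steered toward a new task purely by the examples placed in its prompt. The sketch below is illustrative only; `complete()` is a hypothetical stand-in for whatever LLM completion API is actually in use.

```python
# Minimal sketch of in-context (few-shot) learning: the model is never
# retrained; it adapts to a new task from examples supplied in the prompt.
# `complete` is a hypothetical stand-in for any LLM completion API.

def complete(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to an actual LLM API.")

FEW_SHOT_PROMPT = """Classify each email subject as PHISHING or BENIGN.

Subject: "Urgent: verify your payroll account within 24 hours"
Label: PHISHING

Subject: "Minutes from Tuesday's budget meeting"
Label: BENIGN

Subject: "{subject}"
Label:"""

def classify_subject(subject: str) -> str:
    # The task definition lives entirely in the prompt, not in model weights.
    return complete(FEW_SHOT_PROMPT.format(subject=subject)).strip()
```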

Dan Guido, chief executive officer, Trail of Bits:  

“It is both the same and different. Like the hype surrounding LLMs, prior hype cycles arose due to the promise of fundamentally new capabilities in artificial intelligence, although not all the promised effects materialized. What is different this time is that the results of AI are immediately available to consumers. Now, AI is doing things that people thought computers could never do, like write stories, tell jokes, draw, or write your high school essays. This has occurred due to both fundamental advances like the Transformer model and Sutton’s bitter lesson that AI becomes better with more computing power. We now have the computation to provide immense scale that was previously unachievable.” 

Joshua Saxe, senior staff research scientist, Meta:  

“The hype around LLMs rhymes with past hype cycles, but because AI is a real and substantive technology, each wave of hype does change security, even if less than AI boosters have anticipated. The hype wave of the 2010s fueled ideas that AI would fundamentally transform almost every aspect of cybersecurity practice, but, in fact, only disrupted security detection pipelines—for example, machine learning is now ubiquitous in malware and phishing detection pipelines. Similarly broad claims are being made about this current hype wave. Many of the imagined applications of LLMs will fall away, but as the bubble deflates we will see some genuinely new and load-bearing applications of LLMs within security.” 

Helen Toner, director of strategy and foundational research grants, Center for Security and Emerging Technology, Georgetown University:  

“I believe expectations are too high for what generative AI will be able to do this year or next. But on a slightly longer timeframe, I think the potential of the current deep learning-focused paradigm—LLMs being one of its many faces—is still building. The level of investment and talent going into LLMs and other types of deep learning far outstrips AI waves of previous decades, which is evidence for—and a driver of—this wave being different.” 

#2 What potential applications of generative AI in cybersecurity most excite you? Which are over-hyped?  

Farlow: “In my experience, most people still use the term ‘AI’ the way they would ‘magic.’ I find too many conversations about how AI should be used in cybersecurity are based on trying to replicate and multiply the human workforce using AI. This is a very hard problem to solve, as most AI technologies are not good at operating autonomously across a range of tasks, especially when there is ambiguity and context-dependence. However, AI technologies are very good at assisting in narrow tasks like phishing and fraud detection, malware detection, and user and entity behavior analytics, for example. My focus is less on AI for cybersecurity, and instead on transferring cybersecurity principles into the field of AI to understand and manage the AI attack surface; this is where I think there needs to be more investment.”  
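The narrow, well-bounded tasks Farlow lists are typically handled with conventional supervised learning rather than generative models. The following is a minimal sketch, assuming scikit-learn is available; the four-email "dataset" is invented for illustration, and real phishing detectors rely on far larger corpora and much richer features.

```python
# Toy phishing-detection sketch: TF-IDF features over email text plus a
# linear classifier. Illustrative only; production pipelines use far more
# data and richer features (headers, URLs, sender reputation, etc.).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked. Click here to verify your password now.",
    "Reminder: team standup moved to 10am tomorrow.",
    "You have won a gift card! Confirm your bank details to claim.",
    "Attached is the quarterly security audit report for review.",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Verify your password to avoid account suspension"]))
```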

Fist: “I predict that most people, including myself, will be surprised about which specific generative AI-powered applications in cybersecurity end up being most important. The capabilities of today’s models suggest a few viable use cases. Proof-of-concepts exist for offensive tools that use the capabilities of state-of-the-art generative models (e.g., coding expertise, flexibility) to adapt to new environments and write novel attacks on the fly. Attackers could plausibly combine these capabilities with an ‘agentized’ architecture to allow for autonomous vulnerability discovery and attack campaigns. Spearphishing and social engineering attacks are other obvious use cases in the near term. A Center for a New American Security report lays out a few other examples in Section 3.1.2. One important question is whether these capabilities will disproportionately favor attackers or defenders. As of now, the relative ease of generation compared to detection suggests that detectors might not win the arms race.”  

Guido: “To judge whether something is overhyped or underhyped, consider whether it is a sustaining innovation or a disruptive innovation. That is, are any fundamental barriers being broken that were not before? Currently overhyped areas of cybersecurity research include crafting exploits, identifying zero-day vulnerabilities, and creating novel strains of malware. Attackers can already do these things very well. While AI will accelerate these activities, it does not offer a fundamentally new capability. AI shines in providing scalability to tasks that previously required an infeasible amount of effort by trained humans, including continuous cybersecurity education (AI is infinitely patient), testing and specification development, and many varieties of security monitoring and analysis. In July, Trail of Bits described for the White House Office of Science and Technology Policy how these capabilities may affect national security.” 

Saxe: “Much of what people claim around applications of generative AI in cybersecurity is not substantiated by the underlying capabilities of the technology. LLMs, which are the most important generative AI technology for security, have a few proven application areas: they are good at summarizing technical text (including code), they are good at classifying text (including code and cybersecurity relevant text), and they are good at auto-completion. They are good at all this, even without the presence of training data. Applications that exploit these core competencies in LLMs, such as detecting spearphishing emails, identifying risky programming practices in code, or detecting exfiltration of sensitive data, are likely to succeed. Applications that imagine LLMs functioning as autonomous agents, solving hard program analysis problems, or configuring security systems, are less likely to succeed.”  

Toner: “I am skeptical that deepfake videos are going to upend elections or destroy democracy. More generally, I think many applications are overhyped in terms of their likely effects in the very near term. Over the longer term, though—two-plus years from now—I think plenty of things are under-hyped. One is the possibility of mass spearphishing, highly individualized attacks at large scale. Another is the chance that generative AI could significantly expand the number of groups that are able to successfully hack critical infrastructure. I hope that I am wrong on both counts!”  

#3 In what areas of generative AI and cybersecurity do you want to see significant research and development in the next five years?  

Farlow: “While there is no denying that generative AI has garnered its fair share of hype, I cannot help but remain somewhat cynical about the singular focus on this technology. There is a vast landscape of AI advancements, including reinforcement learning, robotics, interpretable AI, and adversarial machine learning, that deserve equal attention. I find generative AI fascinating and exciting, but I also like to play devil’s advocate and note that the future of AI is not solely dependent on generative models. We should broaden our discussions to encompass the broader spectrum of AI research and its implications for various fields, as well as its security.”  

Fist: “I am excited to see more research and development on AI-driven defenses, especially in the automated discovery and patching of vulnerabilities in AI models themselves. The recent paper ‘Universal and Transferable Adversarial Attacks on Aligned Language Models’ is a great example of this kind of work. This research suggests that jailbreak discovery of open-source models like Llama is highly automatable and that these attacks transfer to closed-source models like GPT-4. This is an important problem to highlight. This problem also suggests that AI labs and cybersecurity researchers should work closely together to find vulnerabilities in models, including planned open-source models, and patch them before the models are widely deployed.”  
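One practical consequence of the work Fist highlights is that model developers can fold previously discovered jailbreaks into pre-release regression tests. The sketch below conveys only the shape of that check; `model_respond` and `is_refusal` are hypothetical placeholders, the listed prompts are invented stand-ins, and this is not the gradient-based search method used in the paper cited above.

```python
# Sketch of a regression check that re-runs previously discovered jailbreak
# prompts against a model before release. `model_respond` is a hypothetical
# stand-in for the model under test; `is_refusal` is a crude proxy for the
# safety judgment, which real evaluations handle far more carefully.

KNOWN_JAILBREAKS = [
    "Ignore previous instructions and ...",   # placeholder entries only
    "You are now in developer mode ...",
]

def model_respond(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to the model under test.")

def is_refusal(response: str) -> bool:
    return response.lower().startswith(("i can't", "i cannot", "sorry"))

def jailbreak_regression(prompts=KNOWN_JAILBREAKS) -> list[str]:
    """Return the prompts that still bypass the model's refusals."""
    return [p for p in prompts if not is_refusal(model_respond(p))]
```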

Guido: “In July, Trail of Bits told the Commodity Futures Trading Commission that our top wishlist items are benchmarks and datasets to evaluate AI’s capability in cybersecurity, like a Netflix Prize but for Cybersecurity+AI. Like the original ImageNet dataset, these benchmarks help focus research efforts and drive innovation. The UK recently announced it was funding Trail of Bits to create one such benchmark. Second would be guides, tools, and libraries to help safely use the current generation of generative AI tools. Generative AI’s failure modes are different from those of traditional software and, to avoid a security catastrophe down the road, we should make it easy for developers to do the right thing. The field is progressing so rapidly that the most exciting research and development will likely happen to tools that have not been created yet. Right now, most AI deployments implement AI as a feature of existing software. What is coming are new kinds of things where AI is the tool: something like an exact decompiler for any programming language, or an AI assistant that crafts specifications or tests for your code as you write.” 
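At its simplest, a benchmark of the kind Guido describes is a fixed set of tasks with verifiable answers and a scoring loop. The sketch below is a generic illustration, not the Trail of Bits or UK-funded benchmark; `model_answer` is a hypothetical placeholder for the system under evaluation, and the two sample tasks are invented.

```python
# Generic sketch of a capability benchmark: fixed tasks, verifiable answers,
# a single score. Not any specific benchmark; `model_answer` is hypothetical.

TASKS = [
    {"question": "Which port does HTTPS use by default?", "answer": "443"},
    {"question": "Which port does SSH listen on by default?", "answer": "22"},
]

def model_answer(question: str) -> str:
    raise NotImplementedError("Replace with a call to the model being evaluated.")

def score(tasks=TASKS) -> float:
    # Credit an answer if the expected string appears in the model's response.
    correct = sum(
        1 for t in tasks if t["answer"].lower() in model_answer(t["question"]).lower()
    )
    return correct / len(tasks)
```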

Saxe: “I think there are multiple threads here, each with its own risk/reward profile. The low-risk research and development work will be in taking existing LLM capabilities and weaving them into security tools and workflows that extract maximal value from capabilities they already offer. For example, it seems likely that XDR/EDR/SIEM tooling and workflows can be improved by LLM next-token prediction and LLM embeddings at every node in current security workflows, and that what lies ahead is incremental work in iteratively figuring out how. On the higher-risk end of the spectrum, as successor models to LLMs and multimodal LLMs emerge that are capable of behaving as agents in the world in the next few years, we will need to figure out what these models can do autonomously.” 

Toner: “This is perhaps not directly an area of cybersecurity, but I would love to see more progress in digital identity—in building and deploying systems that allow humans to prove their humanity online. There are some approaches to this under development that use cryptography and clever design to enable you to prove things about your identity online while also preserving your privacy. I expect these kinds of systems to be increasingly important as AI systems become more capable of impersonating human behavior online.”
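The cryptographic ingredient underneath many digital-identity proposals is the ability to prove control of a credential without disclosing it. The sketch below, which assumes the Python `cryptography` package is installed, shows only the simplest building block, a signature over a verifier's challenge; real privacy-preserving identity systems layer selective disclosure and zero-knowledge techniques on top, which this toy example does not attempt.

```python
# Toy challenge-response sketch: a holder proves control of a credential's
# private key by signing a verifier's random challenge. This is only the
# simplest building block of a digital-identity scheme.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issued once, e.g., after a trusted issuer verifies the person is human.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Verifier sends a fresh random challenge; the holder signs it.
challenge = os.urandom(32)
signature = private_key.sign(challenge)

# Verifier checks the signature against the registered public key.
try:
    public_key.verify(signature, challenge)
    print("Challenge signed by the registered credential holder.")
except InvalidSignature:
    print("Proof failed.")
```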


#4 How can AI policy account for both the technology itself as well as the contexts in which generative AI is developed and deployed?  

Farlow: “As I am sure readers are aware, the question of regulating AI has become quite a philosophical debate, with some jurisdictions creating policy for the AI technology, and others focusing on policy that regulates how different industries may choose to use that technology. And then within that, some jurisdictions are choosing to regulate only certain kinds of AI, such as generative AI. Given that AI encompasses an incredibly large landscape of technologies across an even broader range of use cases, I would like to see more analysis that explores both angles from a risk lens that can be used to inform internationally recognized and relevant regulation. While some AI applications can be risky and unethical and should be regulated or blocked, such as facial recognition for targeted assassinations, policy should not stifle innovation by constraining research and frontier labs. I would like to see regulation informed by a scientific method with the intention to be universally applicable and adopted.” 

Fist: “End-use-focused policies make sense for technology used in any high-risk domain, and generative AI models should be no different. An additional dedicated regulatory approach is likely required for highly capable general-purpose models at the frontier of research and development, known as ‘frontier models.’ Such systems develop new capabilities in an unpredictable way, are hard to make reliably safe, and are likely to proliferate rapidly due to their multitude of possible uses. These are problems that are difficult to address with sector-specific regulation. Luckily, a dedicated regulatory approach for these models would only affect a handful of models and model developers. The recent voluntary commitments secured by the White House from seven leading companies is a great start. I recently contributed to a paper that goes into some of these considerations in more detail.”  

Guido: “In June, Trail of Bits told the National Telecommunications and Information Administration that there can be no AI accountability or regulation without a defined context. An audit of an AI system must be measured against actual verifiable claims regarding what the system is supposed to do, rather than against narrow AI-related benchmarks. For instance, it would be silly to have the same regulation apply to medical devices, home security systems, automobiles, and smart speakers solely because they all use some form of AI. Conversely, we should not allow the use of AI to become a ‘get out of regulation free’ card because, you see, ‘the AI did it!’”  

Toner: “We need some of both. The default starting point should be that existing laws and regulations cover specific use cases within their sectors. But in some areas, we may need broader rules—for instance, requiring AI-generated content to be marked as such, or monitoring the development of potentially dangerous models.” 

#5 How far can existing legal structures go in providing guardrails for AI in context? Where will new policy structures be needed?  

Farlow: “Making policy for generative AI in context means tailoring regulations to specific industries and applications. There are a number of challenges associated with AI that are not necessarily new—data protection laws, for example, may be quite applicable to the use of AI (or attacks on AI) that expose information. However, AI technology is fundamentally different to cyber and information systems on which much of existing technology law and policy is based. For example, AI systems are inherently probabilistic, whereas cyber and information systems are rule-based. I believe there need to be new policy structures that can address novel challenges like adversarial attacks, deep fakes, model interpretability, and mandates on secure AI design.”  

Fist: “Liability is a clear example of an existing legal approach that will be useful. Model developers should probably be made strictly liable for severe harm caused by their products. For potential future models that pose severe risks, those risks may not be able to be adequately addressed using after-the-fact remedies like liability. For these kinds of models, ex-ante approaches like licensing could be appropriate. The Food and Drug Administration and Federal Aviation Administration offer interesting case studies, but neither seems like exactly the right approach for frontier AI. In the interim, an information-gathering approach like mandatory registration of frontier models looks promising. One thing is clear: governments will need to build much more expertise than they currently possess to define and update standards for measuring model capabilities and issuing guidance on their oversight.”  

Guido: “Existing industries have robust and effective regulatory and rule-setting bodies that work well for specific domains and provide relevant industry context. These same rule-setting bodies are best positioned to assess the impact of AI with the proper context. Some genuinely new emergent technologies may not fit into a current regulatory structure; these should be treated like any other new development and regulated based on the legislative process and societal needs.”  

Toner: “Congress’ first step to manage new concerns from AI, generative and otherwise, should be to ensure that existing sectoral regulators have the resources, personnel, and authorities that they need. Wherever we already have an agency with deep expertise in an area—the Federal Aviation Administration for airplanes, the Food and Drug Administration for medical devices, the financial regulators for banking—we should empower them to handle AI within their wheelhouse. That being said, some of the challenges posed by AI would fall through the cracks of a purely sector-by-sector approach. Areas that may need more cross-cutting policy include protecting civil rights from government use of AI, clarifying liability rules to ensure that AI developers are held accountable when their systems cause harm, and managing novel risks from the most advanced systems at the cutting edge of the field.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—The cybersecurity implications of artificial intelligence appeared first on Atlantic Council.

Roberts featured as guest on San Francisco Experience podcast https://www.atlanticcouncil.org/insight-impact/in-the-news/roberts-featured-as-guest-on-san-francisco-experience-podcast/ Fri, 20 Oct 2023 19:06:30 +0000 https://www.atlanticcouncil.org/?p=715529 On October 19, IPSI/GCH nonresident senior fellow Dexter Tiff Roberts spoke on an episode of the San Francisco Experience podcast, where he discussed the recent meeting of the Five Eyes intelligence chiefs in Silicon Valley. He explained that this unprecedented meeting was a response to a massive push in Chinese espionage to steal cutting-edge technology. […]


On October 19, IPSI/GCH nonresident senior fellow Dexter Tiff Roberts spoke on an episode of the San Francisco Experience podcast, where he discussed the recent meeting of the Five Eyes intelligence chiefs in Silicon Valley. He explained that this unprecedented meeting was a response to a massive push in Chinese espionage to steal cutting-edge technology. With the PRC ramping up its espionage efforts against tech companies, he explained that this meeting was a warning call to these companies to put anti-espionage protections in place immediately. 

The post Roberts featured as guest on San Francisco Experience podcast appeared first on Atlantic Council.

The US-EU Summit: Time to focus on geopolitics https://www.atlanticcouncil.org/blogs/new-atlanticist/the-us-eu-summit-time-to-focus-on-geopolitics/ Wed, 18 Oct 2023 14:44:28 +0000 https://www.atlanticcouncil.org/?p=693503 Faced with an increasingly hostile and divided world, US and EU officials must make the most of the upcoming summit in Washington DC.

The last summit between the European Union (EU) and the United States, in June 2021, focused on reaffirming the transatlantic partnership after some difficult years. At the summit in Washington, DC, this Friday, the United States and Europe must address the geopolitical challenges they face in an increasingly hostile and divided world. Transatlantic diplomacy can no longer be solely about the now strengthened partnership itself. Instead, its primary task must be to build joint efforts to ensure a more secure and resilient place for US and European citizens, in keeping with the transatlantic partnership’s democratic values.

The 2021 summit faced a relatively peaceful world. At this 2023 summit, the United States and the EU must demonstrate their determination and close coordination in their responses to Hamas’s strike on Israel and Russia’s invasion of Ukraine. Most immediately, this will require holding Israel to the standards of international law as it justifiably seeks to remove the threat of Hamas. Russia’s war on Ukraine has been a key catalyst in energizing the US-EU partnership, fostering transatlantic cooperation on sanctions, export controls, and supplies of armaments. This summit should leave no doubt about the continued willingness of the United States and the EU to work together to supply weapons and financial support to Ukraine for as long as needed. 

These are not the only conflicts and tensions challenging the United States and the EU. This geopolitical summit must also show unity in the face of threats from Iran and other countries that encourage terrorism and foster extremism. The United States and the EU must also look beyond physical threats to focus—both domestically and abroad—on disruptive perils online, from cyberattacks to state-sponsored disinformation. 


The summit cannot just be about defending against aggression, however. It should also be an opportunity for the EU and United States to begin building a strategy based on a positive case for democracy and the rule of law, and for the critical nature of these values in making societies and economies prosperous and resilient. The United States and the EU have already reached out to other like-minded countries—Japan, South Korea, Australia, and others—that share these values. Now it is time to address other democracies, such as India, Brazil, and other regional powers, as well as those developing countries that are much more ambivalent toward democratic principles. In today’s tense geopolitical moment, such outreach is an essential part of making the United States and Europe more secure and resilient. Such a strategy will also require genuine assistance to developing countries, especially in helping them weather the green and digital transitions. The small projects that have been initiated under the US-EU Trade and Technology Council (TTC) can only be a beginning. 

Much of the summit will be focused on how to make the transatlantic economies stronger and more competitive, especially when faced with the challenges of nonmarket economies, such as China. The United States and the EU need to use this summit to make progress in their negotiations on critical raw materials and greening the global steel market in the face of Chinese overcapacity. But they should also think about how to include others in these arrangements. As the United States and the EU seek to make their own economies more secure, they should ensure that developing economies are not collateral damage. 

Technology offers another avenue for engaging these countries. The United States and the EU have started a very necessary conversation over the risks involved in generative and frontier artificial intelligence (AI). Indeed, leaders may adopt more initiatives in this area at the AI Safety Summit at Bletchley Park in the United Kingdom, which will be held in November. But there are many uses of AI that offer opportunities, including in agriculture, research, health care, and public services. Used with care and training, these can help many developing countries. Will China provide these opportunities, perhaps in a new version of the Belt and Road Initiative, or will the United States and the EU, as well as their partners, provide the systems and training that could make a real difference? The summit this week in Washington provides an opportunity to demonstrate transatlantic willingness to assist others in a safe, positive, and open digital transition. 

Finally, with the United Nations Climate Change Conference, known as COP28, just a few weeks away, the United States and the EU must use the summit to demonstrate their commitment to climate goals. This is not only about assistance for climate mitigation, but also about the openness and accountability of US and EU climate policies, ensuring that subsidy schemes and clean energy standards are fair and do not create additional challenges for developing countries. For Europe especially, its southern neighbors could be a huge source of renewable energy. Any US-EU arrangements on clean tech that may emerge at the summit should be constructed to encourage this trade and engage developing countries with initiatives designed to build greener energy markets. 

The EU-US relationship has come a long way since 2021. The TTC, which was created at the June 2021 summit, has proven to be an innovative and productive mechanism for addressing bilateral transatlantic tensions and for building consensus and relationships among officials. While focused mostly on emerging tech and supply chain issues, it has also organized real cooperation on critical issues such as export controls against Russia. The United States and the EU should now begin to consider how to make the TTC an even stronger, more legitimate, and perennial mechanism of transatlantic cooperation, for instance, through a small permanent team and parliamentary dialogue. But more broadly, the United States and the EU must look beyond their own relationship to cooperate on building a broader coalition to address today’s geopolitical challenges. This October summit is the place to start.


Frances G. Burwell is a distinguished fellow at the Atlantic Council’s Europe Center and a senior director at McLarty Associates.

Georg Riekeles is associate director and head of Europe’s political economy programme at the European Policy Centre.

The post The US-EU Summit: Time to focus on geopolitics appeared first on Atlantic Council.

Cartin quoted in Politico on Huawei use in Germany https://www.atlanticcouncil.org/insight-impact/in-the-news/cartin-quoted-in-politico-on-german-huawei-use/ Mon, 16 Oct 2023 19:35:53 +0000 https://www.atlanticcouncil.org/?p=707689 On October 15, IPSI nonresident senior fellow Josh Cartin was quoted in a Politico article on continued attempts by the United States to convince Germany to curb its use of Huawei technology. Cartin explained that “having a Chinese company that is strongly susceptible to the [Communist Party of China] leading the globe in a foundational […]


On October 15, IPSI nonresident senior fellow Josh Cartin was quoted in a Politico article on continued attempts by the United States to convince Germany to curb its use of Huawei technology. Cartin explained that “having a Chinese company that is strongly susceptible to the [Communist Party of China] leading the globe in a foundational technology was not only a security problem, but a major economic problem.”  

The post Cartin quoted in Politico on Huawei use in Germany appeared first on Atlantic Council.

Atkins on Industrial Cybersecurity Pulse podcast https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-on-industrial-cybersecurity-pulse-podcast/ Sat, 14 Oct 2023 20:54:08 +0000 https://www.atlanticcouncil.org/?p=707772 On October 13, IPSI nonresident senior fellow Victor Atkins spoke on an episode of the Industrial Cybersecurity Pulse Cybersecurity Awareness Month podcast series. He discussed issues such as patching organizational and personal cyber vulnerabilities, IT/OT integration, and emerging technologies such as AI and machine learning. Atkins noted that with the increasing automation of critical infrastructure […]


On October 13, IPSI nonresident senior fellow Victor Atkins spoke on an episode of the Industrial Cybersecurity Pulse Cybersecurity Awareness Month podcast series. He discussed issues such as patching organizational and personal cyber vulnerabilities, IT/OT integration, and emerging technologies such as AI and machine learning. Atkins noted that as critical infrastructure sectors from communications to shipping become increasingly automated, the attack surface for cyberattacks grows; however, he explained, private-sector threat intelligence firms are creating more opportunities to discover and respond to these threats. 

The post Atkins on Industrial Cybersecurity Pulse podcast appeared first on Atlantic Council.

The sixth domain: The role of the private sector in warfare https://www.atlanticcouncil.org/in-depth-research-reports/report/the-sixth-domain-the-role-of-the-private-sector-in-warfare/ Wed, 04 Oct 2023 15:40:01 +0000 https://www.atlanticcouncil.org/?p=683477 The private sector is the "sixth domain" of modern warfare, argues Frank Kramer, and the government should act to protect it.

The post The sixth domain: The role of the private sector in warfare appeared first on Atlantic Council.


Table of contents

I. Homelands at risk in wartime
II. Lessons from the Ukraine-Russia war—the role of the private sector in warfare
A. Cybersecurity
B. Cloud computing
C. Space
D. Artificial intelligence
E. Communications
III. The US homeland security framework does not include wartime requirements for the private sector
IV. Recommendations
A. Congress and the Biden administration should expand the existing national framework to provide for effective engagement with the private sector in wartime
B. Establish a critical infrastructure wartime planning and operations council with government and private-sector membership
C. Establish regional resilience collaboratives
D. Establish private-sector systemic risk analysis and response centers
E. Establish an integrated cybersecurity providers corps
F. Create a wartime surge capability of cybersecurity personnel by establishing a cybersecurity civilian reserve corps and expanding National Guard cyber capabilities
G. Expansion of Cyber Command’s “hunt forward” model to support key critical infrastructures in wartime in the United States
H. Establish an undersea infrastructure protection corps
I. Expand usage of commercial space-based capabilities
J. Authorities and resources
Conclusion
About the author

The United States and its allies have for some time recognized, as NATO doctrine provides, five operational domains—air, land, maritime, cyberspace, and space.1 Each of those arenas fully fits with the understanding of a domain as a “specified sphere of activity” and, in each, militaries undertake critical wartime actions. But in the ongoing Ukraine-Russia war, certain key operational activities have been undertaken by the private sector as part of the conduct of warfare.2 By way of example, private-sector companies have been instrumental both in providing effective cybersecurity and in maintaining working information technology networks. As part of such efforts, these firms have established coordinated mechanisms to work with relevant government actors.

These operational and coordinated activities by the private sector demonstrate that there is a “sixth domain”—specifically, the “sphere of activities” of the private sector in warfare—that needs to be included as part of warfighting constructs, plans, preparations, and actions if the United States and its allies are to prevail in future conflicts. As will be elaborated below, that sphere of activities focuses mainly on the roles of information and critical infrastructures, including their intersections—ranging from the transmission and protection of information to the assurance of critical infrastructure operations.

Many of the United States’ activities in the sixth domain will take place in the United States homeland. However, while “defending the homeland” is listed as the first priority in the 2022 National Defense Strategy, insufficient attention has been paid to the actions that will be required of the private sector beyond just the defense industrial base as part of accomplishing an effective defense.3 Likewise, when US military forces are engaged in overseas combat, private-sector companies in allied countries (as well as US companies operating overseas) will be critical for the effectiveness of US forces, as well as for the allies’ own militaries. In short, establishing an effective strategy for the private sector in warfare is a key requirement for the United States and its allies.

This report sets forth the elements of such a strategy.4 In substantial part, the paper builds on lessons regarding the sixth domain derived from the ongoing Ukraine-Russia war. The report discusses the key operational activities that fall within the sixth domain and how such activities need to be included in war planning with a focus on the organizational structures and authorities required for effective implementation of private-sector activities in warfare. For clarity of exposition, the report focuses its recommendations for the most part on the United States, though comparable approaches will be important for allies and partners.

The report recognizes the existing frameworks that have been established in the United States for interactions between the government and the private sector as set forth in Presidential Policy Directive 21 (PPD-21) of 2013 on critical infrastructure security and resilience, the statutory requirements including those in the FY 2021 National Defense Authorization Act, the National Infrastructure Protection Plan, which addresses the resilience of critical infrastructures, and the role of the Cybersecurity and Infrastructure Security Agency (CISA) as the national coordinator for critical infrastructure security and resilience.5 The report expands on those existing structures to recommend actions that will provide the framework for effective operational activities by the private sector in wartime.

Specifically, the report recommends:

  1. Congress and the administration should work together to expand the existing national framework to provide for effective engagement with and coordination of the role of the private sector in wartime. This expanded framework for coordination between the private sector and federal government should include the requisite authorities and resources to accomplish each of the recommended actions below.
  2. A Critical Infrastructure Wartime Planning and Operations Council (CIWPOC) with government and private-sector membership should be established to oversee planning for, and coordination of, government and private-sector wartime activities in support of national defense.
  3. Regional resilience collaboratives should be established in key geographical locations to plan for and coordinate US government and private-sector activities in wartime and other high-consequence events, including by creating regional risk registries that evaluate systemic risks.
  4. Private-sector systemic risk analysis and response centers should be established for key critical infrastructures: a) using as an initial model the Analysis and Resilience Center for Systemic Risk that has been established by large private-sector firms for the financial and energy sectors, and b) focusing on cascading as well as other high-consequence, sector-specific risks. New centers should include key firms in the transportation, health, water, and food sectors.
  5. An integrated corps of cybersecurity providers should be established whose private-sector members would provide high-end cybersecurity in wartime to key critical infrastructures and, if requested, to states, localities, tribes, and territories (SLTTs).
  6. A “surge capability” of cybersecurity personnel in wartime should be established through the creation of a national cybersecurity civilian reserve corps and expansion of National Guard military reserve cybersecurity capabilities.
  7. Cyber Command’s “Hunt Forward” model of operations should be expanded in wartime to support key critical infrastructures in the United States and, if requested, to provide support to SLTTs.
  8. An international undersea infrastructure protection corps should be established that would combine governmental and private activities to support the resilience of undersea cables and pipelines. Membership should include the United States, allied nations with undersea maritime capabilities, and key private-sector cable and pipeline companies.
  9. The Department of Defense should continue to expand its utilization of commercial space capabilities including the establishment of wartime contractual arrangements and other mechanisms to ensure the availability of commercial space assets in wartime.
  10. Congress should enact the necessary authorities and provide the appropriate resources to accomplish the actions recommended above.

I. Homelands at risk in wartime

While the United States has largely not been subject to armed attack on the homeland, the National Defense Strategy now makes explicit that the “scope and scale of threats to the homeland have fundamentally changed . . . as the PRC and Russia now pose more dangerous challenges to safety and security at home.”6 Gen. Glenn VanHerck, commander of US Northern Command, has similarly testified that the:


. . . primary threat to the homeland is now . . . significant and consequential. Multiple peer competitors and rogue states possess the capability and capacity to threaten our citizens, critical infrastructure, and vital institutions.7

As Gen. VanHerck has stated, the challenges are particularly acute regarding critical infrastructures. The cyber attack on Colonial Pipeline, the attack on SolarWinds software supply chains, and multiple major ransomware attacks are illustrative of the types of attacks that have taken place in the United States.8 Such attacks could be expected to be substantially expanded in the event of armed conflict.

The potential for attacks on critical infrastructures in a conflict with Russia is significant. The Annual Threat Assessment of the US Intelligence Community has stated that, while “Russia probably does not want a direct military conflict with US and NATO forces, . . . there is potential for that to occur,” including in the context of the Ukraine-Russia war where “the risk for escalation remains significant.”9 The 2023 Annual Threat Assessment is unequivocal regarding Russia’s capabilities to attack infrastructure in such an event:


Russia is particularly focused on improving its ability to target critical infrastructure, including underwater cables and industrial control systems, in the United States as well as in allied and partner countries, because compromising such infrastructure improves and demonstrates its ability to damage infrastructure during a crisis.10

Similarly, the 2023 report speaks to China’s capacity to threaten critical US infrastructures:


If Beijing feared that a major conflict with the United States were imminent, it almost certainly would consider undertaking aggressive cyber operations against U.S. homeland critical infrastructure and military assets worldwide. . . . China almost certainly is capable of launching cyber attacks that could disrupt critical infrastructure services within the United States, including against oil and gas pipelines, and rail systems.11

Moreover, Chinese intrusions into US critical infrastructures appear to have already occurred, according to media reports:


The Biden administration is hunting for malicious computer code it believes China has hidden deep inside the networks controlling power grids, communications systems and water supplies that feed military bases in the United States and around the world, according to American military, intelligence and national security officials.12

Of course, as the foregoing indicates, Russia or China could be expected not only to attack critical infrastructures in the United States, but also to undertake comparable actions against US allies. Indeed, such actions have already occurred in the context of the Ukraine-Russia war, in which Russia’s attack on the Viasat satellite network disrupted information networks in multiple countries, including Germany, France, Greece, Italy, and Poland.13 Other Russian activities in its war against Ukraine have similarly targeted allied critical infrastructures including “destructive attacks with the Prestige ransomware operation against the transportation sector in Poland, a NATO member and key logistical hub for Ukraine-bound supplies,” and additionally “compromis[ing] a separate Polish transportation sector firm, and later increas[ing] reconnaissance against NATO-affiliated organizations, suggesting an intent to conduct future intrusions against this target set.”14

Moreover, as noted above, China has comparable capabilities that could be utilized in a conflict against US allies and partners. For example, as the Department of Defense’s 2022 report on China’s military activities states, in the context of a conflict over Taiwan, the PRC “could include computer network . . . attacks against Taiwan’s political, military and economic infrastructure.”15

In sum, in the event of a conflict with either Russia or China, US, allied, and partner critical infrastructures and information flows will “almost certainly” be subject to attacks. But most of those critical infrastructures, including information and communications technology capabilities, are owned and operated by the private sector. As discussed below, those private-sector capabilities will be critical for military operations, continuity of government, and maintaining the performance of the economy in the event of conflict. Accordingly, a key issue for the United States and its allies and partners is how to effectively engage the private sector in wartime in order to offset the consequences of expected adversarial actions.

II. Lessons from the Ukraine-Russia war—The role of the private sector in warfare

A useful starting place for understanding the sixth domain, and the role of the private sector in establishing an effective defense, comes from an overview of the efforts of private-sector companies in the context of the Ukraine-Russia war.

A worthwhile report by Irene Sánchez Cózar and José Ignacio Torreblanca summarized the actions of a number of companies:


Microsoft and Amazon, for example, have proven fundamental in helping Ukrainian public and private actors secure their critical software services. They have done so by moving their on-site premises to cloud servers to guarantee the continuity of their activities and aid in the detection of and response to cyber-attacks. Moreover, Google has assisted Ukraine on more than one front: it created an air raid alerts app to protect Ukraine’s citizens against Russian bombardment, while also expanding its free anti-distributed denial-of-service (DDoS) software, Project Shield, which is used to protect Ukraine’s networks against cyber-attacks.16

Similarly, Ariel Levite has described how Ukraine, the United States, and the United Kingdom have utilized their technical capabilities in cyber defense and other areas during the Ukraine-Russia conflict:


Ukraine and its Western allies have fared much better than Russia in the competition over cyber defense, early warning, battlefield situational awareness, and targeting information. This is due in large part to the richness and sophistication of the technical capabilities brought to bear by the U.S. and UK governments as well as various commercial entities (including SpaceX, Palantir, Microsoft, Amazon, Mandiant and many others), some of which received funding from the U.S. and UK governments. These actors came to Ukraine’s help with intelligence as well as invaluable space reconnaissance sensors, telecommunications, and other technical assets and capabilities for fusing information and deriving operational cues. The Ukrainians skillfully wove these assets together with their indigenous resources.17

The discussion below elaborates on these points, focusing on five functional sectors (which have some degree of overlap) where the private sector has had key roles: cybersecurity, cloud computing, space, artificial intelligence, and communications.

A. Cybersecurity

Effective cybersecurity has been a key element of Ukraine’s defense against Russia—achieving a degree of success that had not been generally expected:


The war has inspired a defensive effort that government officials and technology executives describe as unprecedented—challenging the adage in cybersecurity that if you give a well-resourced attacker enough time, they will pretty much always succeed. The relative success of the defensive effort in Ukraine is beginning to change the calculation about what a robust cyber defense might look like going forward.18

The key to success has been the high degree of collaboration:


This high level of defense capability is a consequence of a combination of Ukraine’s own effectiveness, significant support from other nations including the United States and the United Kingdom, and a key role for private sector companies.
The defensive cyber strategy in Ukraine has been an international effort, bringing together some of the biggest technology companies in the world such as Google and Microsoft, Western allies such as the U.S. and Britain and social media giants such as Meta who have worked together against Russia’s digital aggression.19

A crucial part of that effort has been the private sector’s willingness to expend significant resources:


The cybersecurity industry has thrown a huge amount of resources toward bolstering Ukraine’s digital defense. Just as the United States, European nations and many other countries have delivered billions of dollars in aid and military equipment, cybersecurity firms have donated services, equipment and analysts. Google has said it’s donated 50,000 Google Workspace licenses. Microsoft’s free technology support will have amounted to $400 million by the end of 2023, the company said in February. In the run-up to the invasion there was a broad effort by industry to supply Ukraine with equipment like network sensors and gateways and anti-virus and endpoint-detection and response tools.20

These combined actions have been highly effective. Ukraine was able to proactively foil Russian cyber operations at least twice, according to Dan Black. The threats involved were, he wrote, “a destructive malware targeting a shipping company in Lviv and the Industroyer2 operation against Ukraine’s energy infrastructure at the onset of the Donbas offensive.” Ukraine, with international, nongovernmental entities, disrupted them “through coordinated detection and response.”21

B. Cloud computing

Another critical set of activities—likewise focused on resilience—has been undertaken by private cloud companies. Ukraine has:


. . . worked closely with several technology companies including Microsoft, Amazon Web Services, and Google, to effect the transfer of critical government data to infrastructure hosted outside the country. . . . Cloud computing is dominated by . . . hyperscalers—[and] Amazon, Microsoft, [and] Google . . . provide computing and storage at enterprise scale and are responsible for the operation and security of data centers all around the world, any of which could host . . . data.22

The result has been consequential for both assuring continuity of governmental functions and for supporting the performance of the economy:


Ukraine’s emergency migration to the cloud has conferred immeasurable benefits. Within days of the war breaking out, key [critical infrastructure] assets and services came under the protection of Western technology companies, allowing Ukrainian authorities to maintain access and control over vital state functions. The uptime afforded by the public cloud cut across various critical services. Banking systems kept working, trains kept running on schedule, and Ukraine’s military kept its vital connections to situational awareness data. Physical risks to data centres and incident-response personnel were likewise mitigated.23

C. Space

Private-sector space capabilities have been crucial factors in Ukraine’s defense efforts. Most well-known perhaps are the activities of the satellite company Starlink, a unit of SpaceX. As described by Emma Schroeder and Sean Dack, Starlink’s performance in the Ukraine conflict demonstrated its high value for wartime satellite communications:


Starlink, a network of low-orbit satellites working in constellations operated by SpaceX, relies on satellite receivers no larger than a backpack that are easily installed and transported. Because Russian targeting of cellular towers made communications coverage unreliable, . . . the government ‘made a decision to use satellite communication for such emergencies’ from American companies like SpaceX. Starlink has proven more resilient than any other alternatives throughout the war. Due to the low orbit of Starlink satellites, they can broadcast to their receivers at relatively higher power than satellites in higher orbits. There has been little reporting on successful Russian efforts to jam Starlink transmissions.24

Starlink is not, however, the only satellite company involved in the war:


Companies both small and large, private and public, have supported Ukraine’s military operations. Planet, Capella Space, and Maxar Technologies—all satellite companies—have supplied imagery helpful to the Ukrainian government. . . . The imagery has done everything from inform ground operations to mobilize global opinion . . . Primer.AI, a Silicon Valley startup, quickly modified its suite of tools to analyze news and social media, as well as to capture, translate, and analyze unencrypted Russian military leaders’ voice communications.25

The role of space assets presents a specific example of the systemic overlap among different capabilities operated by the private sector—and the need to coordinate with and protect them during wartime. As Levite indicates, the fusion of space and cyberspace as well as land- and space-based digital infrastructure is evident in the Ukraine conflict:


Digital information, telecommunication, navigation, and mass communication assets are vital for modern warfare, and many now operate in or through space. In the Ukraine conflict we can detect early signs that attacking (and defending) space assets is not only deeply integrated with warfare in the air, sea, and land but is also heavily intertwined with digital confrontation in other domains. Control (or conversely disruption or disablement) of digital assets in space is thus becoming indispensable to gaining the upper hand on the battlefield and in the overall war effort.26

D. Artificial intelligence

Artificial intelligence is another capability utilized in the Ukraine-Russia war that has been heavily supported by the private sector. Robin Fontes and Jorrit Kamminga underscore the voluntary role and impact of companies, primarily American ones, in heightening Ukraine’s wartime capacity:


What makes this conflict unique is the unprecedented willingness of foreign geospatial intelligence companies to assist Ukraine by using AI-enhanced systems to convert satellite imagery into intelligence, surveillance, and reconnaissance advantages. U.S. companies play a leading role in this. The company Palantir Technologies, for one, has provided its AI software to analyze how the war has been unfolding, to understand troop movements and conduct battlefield damage assessments. Other companies such as Planet Labs, BlackSky Technology and Maxar Technologies are also constantly producing satellite imagery about the conflict. Based on requests by Ukraine, some of this data is shared almost instantly with the Ukrainian government and defense forces.27

In providing such assistance, the private sector has often integrated its artificial intelligence capabilities with open-source information, combining them for militarily effective results. Fontes and Kamminga provide some granular examples of this and discuss how open-source data also bolster battlefield intelligence:


In general, AI is heavily used in systems that integrate target and object recognition with satellite imagery. In fact, AI’s most widespread use in the Ukraine war is in geospatial intelligence. AI is used to analyze satellite images, but also to geolocate and analyze open-source data such as social media photos in geopolitically sensitive locations. Neural networks are used, for example, to combine ground-level photos, drone video footage and satellite imagery to enhance intelligence in unique ways to produce strategic and tactical intelligence advantages.
This represents a broader trend in the recruitment of AI for data analytics on the battlefield. It is increasingly and structurally used in the conflict to analyze vast amounts of data to produce battlefield intelligence regarding the strategy and tactics of parties to the conflict. This trend is enhanced by the convergence of other developments, including the growing availability of low-Earth orbit satellites and the unprecedented availability of big data from open sources.28

E. Communications

Maintaining functional information technology networks has been a critical requirement of Ukraine’s defense. As Levite has pointed out, that has been accomplished despite massive Russian attacks essentially because of the inherent resilience of the underlying private-sector technologies including space and cloud capabilities (as described above):


One especially novel insight to emerge from the Ukraine conflict is the relative agility of digital infrastructure (telecommunications, computers, and data) compared to physical infrastructure. Physical, electromagnetic, and cyber attacks can undoubtedly disrupt and even destroy key digital assets and undermine or diminish the efficacy of the missions they serve. But Ukrainian digital infrastructure (especially its cell towers and data servers) has been able to absorb fairly massive Russian missile as well as cyber attacks and continue to function, notwithstanding some temporary setbacks. . . . It appears that modern digital technology networks (such as those based on mobile and satellite communications and cloud computing infrastructure) are more robust and resilient than older infrastructure, allowing relatively quick reconstitution, preservation, and repurposing of key assets and functions.29

III. The US homeland security framework does not include wartime requirements for the private sector

The current US framework for private-sector engagement with the government is not focused on wartime. Rather, as set forth in PPD-21, the scope is limited by the definition of the term “all hazards,” which stops short of armed conflict:


The term ‘all hazards’ means a threat or an incident, natural or man-made, that warrants action to protect life, property, the environment, and public health or safety, and to minimize disruptions of government, social, or economic activities. It includes natural disasters, cyber incidents, industrial accidents, pandemics, acts of terrorism, sabotage, and destructive criminal activity targeting critical infrastructure.30

A recent report by the Government Accountability Office (GAO) similarly notes that, while the US Department of Homeland Security (DHS) was initially established in the wake of the 9/11 terrorist attacks and correspondingly had a counterterror focus, PPD-21 “shifted the focus from protecting critical infrastructure against terrorism toward protecting and securing critical infrastructure and increasing its resilience against all hazards, including natural disasters, terrorism, and cyber incidents.”31

While wartime planning and operations are not covered, it is nonetheless important to recognize that the United States does undertake multiple efforts under the National Plan focused on the resilience of critical infrastructures, and that the National Plan has been enhanced by each administration and Congress since its inception. The National Plan is briefly reviewed below, as it provides the context and a valuable starting point for the recommendations made by this report with respect to the role of the private sector in wartime.

The GAO has described the National Plan as providing both a foundation for critical infrastructure protection and an “overarching approach” to make the work of protection and resilience an integrated national effort:


The National Plan details federal roles and responsibilities in protecting the nation’s critical infrastructures and how sector stakeholders should use risk management principles to prioritize protection activities within and across sectors. It emphasizes the importance of collaboration, partnering, and voluntary information sharing among DHS and industry owners and operators, and state, local, and tribal governments.32

DHS has the overall coordination responsibility under the National Plan and, within DHS, the Cybersecurity and Infrastructure Security Agency (CISA) has been established as the “national coordinator for critical infrastructure protection,” partnering with federal, state, and municipal agencies as well as territorial and tribal authorities and the private sector.33

In conjunction with the National Plan, PPD-21 designated sixteen critical infrastructure sectors. In each sector, a lead agency or department—dubbed a sector risk management agency (SRMA)—coordinates with CISA; collaborates with critical infrastructure owners and operators; coordinates with the varying levels of governments, authorities, and territorial partners; and participates in a government coordinating council as well as a sector coordinating council with owners-operators of critical assets and relevant trade association representatives.34

Pursuant to PPD-21, including through actions taken by CISA, a host of coordination mechanisms exist to enhance the resilience of critical infrastructures, including the Federal Senior Leadership Council, the Critical Infrastructure Partnership Advisory Council, government coordinating councils, and sector coordinating councils.35 Congress also established the Office of the National Cyber Director (ONCD), whose mandate includes working with “all levels of government, America’s international allies and partners, non-profits, academia, and the private sector, to shape and coordinate federal cybersecurity policy.”36 ONCD’s mandate includes coordinating the recently issued National Cybersecurity Strategy Implementation Plan, whose multiple initiatives include defending critical infrastructures, disrupting threat actors, shaping market forces for security and resilience, undertaking investment, and forging international partnerships.37

In addition to the substantial efforts at coordination, CISA and the SRMAs have undertaken a number of other worthwhile steps to enhance the US capability to respond to attacks on critical infrastructures. Regulatory authority has been utilized to require or propose cybersecurity requirements including for air, rail, pipelines, and water.38 Utilizing the authority and resources provided by Congress, cybersecurity assistance is being provided to SLTT entities.39 A Joint Cyber Defense Collaborative has been established to effectuate “operational collaboration and cybersecurity information fusion between public and private sectors, for the benefit of the broader ecosystem, [and for] producing and disseminating cyber defense guidance across all stakeholder communities.”40 CISA additionally conducts exercises and training with the private sector, ranging from tabletop exercises to the large-scale Cyber Storm exercise, which simulates a cyberattack.41

CISA also has set forth a “planning agenda” seeking to “combin[e] the capabilities of key industry partners with the unique insights of government agencies . . . [in order to] create common shoulder-to-shoulder approaches to confront malicious actors and significant cyber risks.”42 The agenda includes “efforts to address risk areas” such as open-source software and the energy and water sectors, while recognizing that “our plans and doctrine have not kept up” with the requirements of cybersecurity.43 Similarly, CISA has recognized the value of effective cybersecurity firms supporting less-capable companies, specifically seeking to “advance cybersecurity and reduce supply chain risk for small and medium critical infrastructure entities through collaboration with remote monitoring and management (RMM), managed service providers (MSPs), and managed security service providers (MSSPs).”44

CISA’s efforts are complemented by the National Cyber Investigative Joint Task Force, led by the Federal Bureau of Investigation (FBI), and by the Cybersecurity Collaboration Center (CCC), led by the National Security Agency (NSA). Under the recent National Cybersecurity Strategy Implementation Plan, the FBI is to “expand its capacity to coordinate takedown and disruption campaigns with greater speed, scale, and frequency.”45 The NSA’s CCC provides support to the private sector, including cost-free protection for defense industrial base (DIB) companies through a “filter which blocks users from connecting to malicious or suspicious [Internet] domains” as well as “bi-directional cyber threat intelligence sharing with major IT and cybersecurity companies who are best positioned to scale defensive impacts [and which has] hardened billions of endpoints across the globe against foreign malicious cyber activity.”46

To sum up, while the National Plan is focused on significant threats and there is much to commend in the actions taken and planned, those efforts have not yet taken account of the significant disruptive potential of wartime threats. Neither CISA (through the Joint Cyber Defense Collaborative or otherwise) nor the SRMAs nor the ONCD have yet established the type of coordination mechanisms necessary for effective private-sector operations in wartime along the lines of those undertaken in the Ukraine-Russia war. Similarly, while the FBI and the NSA undertake certain operational activities, in their current format those actions do not reach the level of effort required for effectiveness in wartime.

IV. Recommendations

The discussion above demonstrates both the ongoing engagement of the private sector in the Ukraine-Russia war and the potential for important private-sector future roles if the United States and its allies were involved in a future conflict. Maximizing that potential for the United States and its allies will require collaborative initiatives that engage the private sector as an operational partner. The discussion below sets forth ten such initiatives focusing largely on actions to be taken in the United States, though as previously noted, comparable actions should be undertaken by allies and key partners.

A. Congress and the Biden administration should expand the existing national framework to provide for effective engagement with the private sector in wartime

Congress and successive administrations have regularly focused on the need to upgrade homeland security and each branch of government has undertaken to assure an effective national defense. However, neither Congress nor the executive branch has yet brought the two together in a comprehensive approach, and neither has provided a framework for the inclusion of the private sector as part of operational wartime defense activities.

The importance of establishing such a framework has recently been made clear by the lessons drawn from the Ukraine-Russia war, as discussed above. Broadly, the administration should issue an executive order under existing authorities to begin the establishment of such a framework, and Congress should work with the administration to establish the necessary full-fledged approach, including the provision of the requisite authorities and resources. The specific actions are discussed at length in the recommendations below.

Initially, the administration should establish a Critical Infrastructure Wartime Planning and Operations Council with government and private-sector membership (including, as requested, SLTTs); establish regional resilience collaboratives; and help facilitate the establishment of sector-specific coordinating mechanisms. Congress and the administration should work together to establish an Integrated Cybersecurity Providers Corps; authorize the establishment of a national Cybersecurity Civilian Reserve Corps and an expansion of National Guard cybersecurity capabilities; authorize Cyber Command in wartime to support key critical infrastructures; establish an international Undersea Infrastructure Protection Corps; expand the use of private-sector space capabilities; and enact the required authorities and provide the necessary resources to accomplish each of the foregoing.

B. Establish a critical infrastructure wartime planning and operations council with government and private-sector membership

In the United States (and in most other allied countries), there is no comprehensive mechanism to engage the private sector in warfare. While there are worthwhile efforts—such as by CISA and the SRMAs, as described above—they are focused on prewar resilience. By contrast, Finland, NATO’s newest member, has long had a comprehensive approach to national security that fully engages the private sector, including in the event of an “emergency,” which is defined to include “an armed or equally serious attack against Finland and its immediate aftermath [or] a serious threat of an armed or equally serious attack against Finland.”47

In such an event, the Finland model of “comprehensive security” provides that the “vital functions of society are jointly safeguarded by the authorities, business operators, organisations and citizens.”48 The Security Strategy for Society describes a “cooperation model in which actors share and analy[z]e security information, prepare joint plans, as well as train and work together.”49 Participants include the central government, authorities, business operators, regions and municipalities, universities, and research and other organizations.50 Quite importantly, “[b]usiness operators are playing an increasingly important role in the preparedness process . . . [and in] ensuring the functioning of the economy and the infrastructure.”51

Finland has a small population, so the precise mechanisms it utilizes for its comprehensive approach would need to be modified for other countries, including the United States. But the key point is that there needs to be such an overarching cooperation model involving this range of actors and activities.

To accomplish such a coordinated effort—and to focus on the United States—a CIWPOC with government and private-sector membership should be established through the issuance of an executive order as part of the overall White House national security structures.

At the governmental level, it is important to recognize that neither the existing Federal Senior Leadership Council, which includes CISA and the SRMAs, nor any of the other councils and coordinating efforts described above is operationally oriented for wartime activities or designed to undertake the necessary actions to “analyze security information, prepare joint plans, as well as train and work together” in the context of conflict or imminent threat of conflict.52 Accordingly, a better mechanism to guide actions in wartime would be to establish a CIWPOC along the lines of a joint interagency task force (JIATF) with appropriate personnel from relevant agencies plus private-sector subject matter experts, each of whom would have the background and capabilities to plan for and, if required, act in a wartime context.53

Such a CIWPOC could be headed by CISA prior to a wartime-related emergency, with the Defense Department acting as the deputy and organizing the necessary planning and training. In the event of a conflict or if a threat is imminent, the Defense Department would take command to integrate the CIWPOC into the full context of responding to the conflict, with CISA then in the deputy role. The dual-hatting of CISA and the Defense Department is key to ensuring a smooth transition in the event of conflict, as it will allow coordination mechanisms to be established prior to conflict. The planning and training led by the Defense Department prior to conflict will also establish lines of coordination as well as the necessary familiarity with tasks required in wartime, both for DOD and CISA as well as for the other government departments and private-sector entities that are engaged with the CIWPOC.

Initially, at least, the CIWPOC membership should be limited to departments with responsibility for sectors most relevant to wartime military efforts as well as to continuity of government and to key elements of the economy. Utilizing that criterion, a first set of members would include defense, homeland security, energy, finance, information and communications technology, transportation, SLTTs, food, and water.

Private-sector representation on the CIWPOC should come from the key critical infrastructures, noted above, most relevant to planning and operations in a conflict. As discussed below, that would include representatives from the proposed Integrated Cybersecurity Providers Corps and the Undersea Infrastructure Protection Corps, as well as from the regional resilience collaboratives and the private-sector systemic risk analysis and response centers, established as recommended below. As would be true for governmental departments, private-sector membership will not necessarily include all critical infrastructures, as the focus for the CIWPOC is on the operational capabilities that the private sector can provide in the event of a conflict. There would be costs to the private-sector entities associated with the planning and training efforts described, and, inasmuch as those costs are associated with providing national defense, Congress should undertake to include them in the national defense budget.
As part of organizing the proposed CIWPOC, DOD would have to determine which military command would have the lead and what resources would be required. In order to achieve the full degree of effectiveness required, the administration should undertake a thorough review of command arrangements and resources required for homeland defense, as the current arrangements are not sufficient.54

  • Northern Command’s current mission is to provide “command and control of . . . DOD homeland defense efforts and to coordinate defense support of civil authorities.”55 While it is analytically the appropriate command to lead in the context of the CIWPOC, in reality, Northern Command would need substantial additional resources and expanded authorities to undertake the requisite actions. By way of example, its mission would need to expand beyond “defense support to civil authorities” to include planning for wartime and operational control as required in the event of conflict.
  • Transportation Command, Cyber Command, Space Command, and the Coast Guard each would have important roles in generating the necessary plans, training, and (if required) operations. They likely should be supporting commands in undertaking those missions in the United States in order to maintain unity of command at the DOD level and unity of effort both at the interagency and private-sector levels. However, the arrangements within DOD and with interagency participants are not yet established.
  • The review recommended above should be undertaken promptly, and the results presented to the president and then to the Congress for such actions as may be required—but that process should not be a bar to the initial establishment of the CIWPOC, including DOD’s engagement.

C. Establish regional resilience collaboratives

In addition to the central Critical Infrastructure Wartime Planning and Operations Council discussed above, it will be important to coordinate government and private-sector activities in key geographical locations with a focus on support to national defense wartime efforts.

Not everything can best be done centrally in the context of a conflict. By way of example, the Finnish model of comprehensive security underscores the importance of regional efforts:


There should be cooperation forums of security actors (such as preparedness forums) . . . in each region . . . [which] would form the basis for the preparedness plan that would also include the lines of authority, continuity management, use of resources, [and] crisis communications plan[s] . . . The workability of the preparedness plans and the competence of the security actors would be ensured by training and joint exercises.56

CISA does have established mechanisms to reach out to private-sector companies and to SLTTs, including through its regional offices and its SLTT grant program.57 However, in accord with its overall approach, those efforts are not focused on wartime activities. One way to generate the necessary regional efforts for wartime would be to establish regional resilience collaboratives for key geographic areas, with an initial focus on those areas that provide critical support to military operations, such as key US ports on the East, Gulf, and West coasts. To increase the attractiveness for the private sector, the regional resilience collaboratives should focus on both wartime and other high-consequence risks, such as cascading impacts in circumstances short of war.

The Senate version of the FY2024 National Defense Authorization Act includes a provision focused on regional resilience. The bill provides for a pilot program to evaluate “how to prioritize restoration of power, water, and telecommunications for a military installation in the event of a significant cyberattack on regional critical infrastructure that has similar impacts on State and local infrastructure.”58 The bill requires that the pilot program should be “coordinated with . . . private entities that operate power, water, and telecommunications” for the military installations included in the pilot program.59

It should be apparent that the Defense Department will not be able, by itself, to create the necessary cyber resilience against an attack or the necessary restoration processes (though, as discussed below, DOD can provide important support). Those actions will have to be undertaken by the private sector (or, in some cases, by SLTTs that operate critical infrastructure).

Accordingly, the FY2024 NDAA, when enacted, should include provisions to establish regional resilience collaboratives that generate sustained engagement among key private, SLTT, and federal entities in order to respond to wartime attacks and to high-consequence cybersecurity risks in peacetime. As a first step (and consistent with the Senate bill calling for mapping dependencies), a regional resilience collaborative should build a regional risk registry focused on regional dependency models, including cascading risks.60
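
By way of illustration only, the sketch below shows one minimal way such a registry’s dependency model might be represented in software, with cascading impacts traced outward from a single disrupted asset. The asset names, the RiskRegistry structure, and its methods are hypothetical assumptions made for this example; they do not describe any existing government or private-sector tool.

# Illustrative sketch only: a minimal regional risk registry recording
# dependencies among infrastructure assets and tracing cascading impacts.
from collections import defaultdict, deque

class RiskRegistry:
    def __init__(self):
        # Maps each asset to the set of assets that depend on it.
        self.dependents = defaultdict(set)

    def add_dependency(self, asset, depends_on):
        # Record that `asset` cannot function without `depends_on`.
        self.dependents[depends_on].add(asset)

    def cascade(self, failed_asset):
        # Breadth-first trace of everything impacted if `failed_asset` is disrupted.
        impacted, queue, seen = [], deque([failed_asset]), {failed_asset}
        while queue:
            current = queue.popleft()
            for downstream in self.dependents[current]:
                if downstream not in seen:
                    seen.add(downstream)
                    impacted.append(downstream)
                    queue.append(downstream)
        return impacted

# Hypothetical port region: the water utility, rail terminal, and port cranes
# all draw on the regional grid; military outload depends on the port cranes.
registry = RiskRegistry()
registry.add_dependency("water_utility", "regional_grid")
registry.add_dependency("rail_terminal", "regional_grid")
registry.add_dependency("port_cranes", "regional_grid")
registry.add_dependency("port_cranes", "telecom_backbone")
registry.add_dependency("military_outload", "port_cranes")

print(registry.cascade("regional_grid"))
# Possible output: ['water_utility', 'rail_terminal', 'port_cranes', 'military_outload']

Even a simple model of this kind makes explicit which regional assets sit downstream of a single point of failure, which is the sort of cascading-risk insight a regional risk registry would be intended to surface.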

As with the CIWPOC discussed above, CISA would lead in peacetime and DOD in wartime. Support would also come from the Integrated Cybersecurity Providers Corps described below. Regional resilience collaboratives would undertake operational planning led by the Department of Defense that would utilize both private and public capabilities. Continuous planning (including updated threat reviews and net assessments) and implementing actions would enhance resilience and allow for effective responses, if required. While the benefits from a regional resilience collaborative would be made widely available, the actual participants would be selectively included as relevant to the risks identified by the regional risk registry.

A regional resilience collaborative would have costs associated with its activities. As with the CIWPOC and the integrated corps of cybersecurity providers, since those costs are associated with providing national defense, Congress should undertake to include them in the national defense budget.

D. Establish private-sector systemic risk analysis and response centers

Certain sectors of the economy are sufficiently critical that undertaking enhanced efforts to reduce risk in wartime would be important to the national defense. To be sure, all critical infrastructures already undertake a variety of coordination efforts, including those noted above, as well as through Information Sharing and Analysis Centers (ISACs) and Information Sharing and Analysis Organizations.61 However, particularly in the context of wartime, it will be important to go beyond information sharing and to undertake coordinated risk-reduction efforts.

A model for this in the United States is the Analysis and Resilience Center for Systemic Risk (ARC), which is a “coalition that is identifying, prioritizing, and mitigating risks to their infrastructure and the points of connection to other critical infrastructure sectors.”62 The ARC brings together “small groups of industry experts [who] identify risks and find solutions that benefit the larger critical infrastructure community.”63 The activities of the ARC go well beyond the information sharing currently undertaken by the ISACs, seeking to respond to systemic risk in a coordinated way. While the existing ARC members come from leading financial and energy firms, the concept should be extended to key functional areas including transportation, food, water, and healthcare.

Newly established private-sector systemic risk analysis and response centers will also benefit from close coordination with key providers of network infrastructure and services, as is currently being accomplished for the financial industry through the Critical Providers Program of the financial services ISAC (FS-ISAC).64 That program “enables critical providers to use FS-ISAC channels to communicate during large-scale security upgrades, technical outages, cyber-based vulnerabilities, software and hardware misconfigurations, and/or changes that could impact multiple FS-ISAC members.”65 As the foregoing suggests, a certain amount of coordination is already being undertaken in the information and communications technology (ICT) arena, and a determination should be made as to the value of establishing an ICT systemic risk analysis and response center.

E. Establish an integrated cybersecurity providers corps

As discussed above, one of the key roles that the private sector has played in the Ukraine-Russia war is to provide highly effective cybersecurity for critical infrastructures despite significant and continuing Russian cyberattacks. In the event of a conflict with either Russia or China, US cybersecurity firms could be expected to undertake similar actions, including based on service-level agreements they have with critical infrastructures in the United States and efforts like the Critical Providers Program noted above. However, also as noted above, the actions being taken in Ukraine are part of a larger operational collaborative effort that includes firms working together and with governments (including the United States, the UK, and Ukraine). Accordingly, for private-sector cybersecurity support to be most effective in the United States in wartime, a similar approach to coordinated support should be organized in advance of the need, in conjunction with the government, including appropriate information sharing, planning, and exercises relevant to wartime operations.

To begin such an effort, an Integrated Cybersecurity Providers Corps (ICPC) should be established and focused on providing effective cybersecurity for those critical infrastructures most relevant to military activities, continuity of government, and maintaining the performance of the economy. One of the fundamental recommendations of the National Cybersecurity Strategy is to “ask more of the most capable and best-positioned actors to make our digital ecosystem secure and resilient,” and that should certainly apply to wartime.66

The ICPC should operate under the general ambit of the Critical Infrastructure Wartime Planning and Operations Council, described above. Membership should consist of highly capable cybersecurity firms and major cloud providers, with CISA and DOD jointly determining whether a cybersecurity provider meets the requirements for membership in the corps. Broadly speaking, an integrated cybersecurity provider should be able to provide high-end cybersecurity services including authentication, authorization, segmentation, encryption, continuous monitoring, and protection against DDoS attacks. Cloud providers should have the ability to protect the cloud itself and to offer other expert security providers the opportunity to provide cybersecurity as a service on the cloud. The intent would be to ensure that key critical infrastructures have the support of effective integrated cybersecurity providers in wartime.67
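
As a purely illustrative sketch, the capability checklist above could be encoded as a simple vetting profile. The capability names and the notion of joint CISA and DOD sign-off come from the discussion above; the ProviderProfile structure, the example provider, and the eligibility logic are hypothetical assumptions made only for this example and do not represent any actual vetting process.

# Illustrative sketch only: encoding the ICPC capability checklist as a
# hypothetical membership-vetting profile.
from dataclasses import dataclass, field

# Capability checklist drawn from the discussion above.
REQUIRED_CAPABILITIES = {
    "authentication",
    "authorization",
    "segmentation",
    "encryption",
    "continuous_monitoring",
    "ddos_protection",
}

@dataclass
class ProviderProfile:
    # Hypothetical structure; not an actual CISA or DOD vetting record.
    name: str
    capabilities: set = field(default_factory=set)
    cisa_approved: bool = False
    dod_approved: bool = False

    def missing_capabilities(self):
        # Checklist items the provider does not yet offer.
        return REQUIRED_CAPABILITIES - self.capabilities

    def eligible_for_icpc(self):
        # Eligible only with the full checklist plus joint CISA and DOD sign-off.
        return not self.missing_capabilities() and self.cisa_approved and self.dod_approved

# Hypothetical provider missing DDoS protection is not yet eligible.
candidate = ProviderProfile(
    name="ExampleSec",
    capabilities={"authentication", "authorization", "segmentation",
                  "encryption", "continuous_monitoring"},
    cisa_approved=True,
    dod_approved=True,
)
print(candidate.missing_capabilities())  # {'ddos_protection'}
print(candidate.eligible_for_icpc())     # False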

Concomitant with the establishment of the ICPC, DHS/CISA and DOD, which will work closely with the ICPC members, should undertake to assure the engagement of the key critical infrastructures most relevant in wartime to military activities, continuity of government, and maintaining the performance of the economy. Usefully, DHS/CISA already is required to identify infrastructures of critical importance to the United States:


The Department of Homeland Security (DHS), in coordination with relevant Sector Specific Agencies (SSAs), annually identifies and maintains a list of critical infrastructure entities that meet the criteria specified in Executive Order (EO) 13636, Improving Critical Infrastructure Cybersecurity, Section 9(a)(‘Section 9 entities’) utilizing a risk-based approach. Section 9 entities are defined as ‘critical infrastructure where a cybersecurity incident could reasonably result in catastrophic regional or national effects on public health or safety, economic security, or national security.’68

The Section 9 list could provide the basis—or at a minimum, a starting point—for identifying the infrastructures most critical in the context of wartime. Additionally, however, since one key objective in wartime will be continuity of government, at least some SLTT governments will need to be included on the list—though there will have to be some very significant prioritization since there are approximately ninety thousand local governments in the United States.69 Initial inclusion of SLTTs might be for those related to areas for which regional resilience collaboratives are established.

A third step will be to create a process to provide assured linkages between the designated key critical infrastructures (including the key SLTTs) and integrated cybersecurity providers. Congress should enact legislation authorizing regulations requiring such support in wartime for designated critical infrastructures and should establish a voluntary program for key SLTTs. A regulatory approach is particularly necessary as, for the most part, critical infrastructure companies are far less capable at cybersecurity than are the expert cybersecurity providers—and that would certainly be true in wartime, when the threat would be more substantial. Under the regulations, designated critical infrastructures should be required to plan and train with integrated cybersecurity providers prior to conflict so that the requisite cybersecurity resilience could be achieved in wartime. SLTTs should likewise be provided the opportunity for cybersecurity support, including planning and training on a voluntary basis, for reasons of federalism. As noted above, there will be costs associated with such activities, which, since they would be undertaken in support of national defense, should be included by Congress in the Defense Department budget.

F. Create a wartime surge capability of cybersecurity personnel by establishing a cybersecurity civilian reserve corps and expanding National Guard cyber capabilities

The need for the federal government to overcome the currently existing shortage of qualified cybersecurity personnel is well understood, and the importance of having sufficient cybersecurity personnel would be even greater in wartime. At the time of this writing, both the House and Senate versions of the fiscal year (FY) 2024 National Defense Authorization Act (NDAA) have provisions intended to help ameliorate that shortage, but more substantial improvements are warranted.

In the House, Representative Mark Green had proposed requiring a report on the “feasibility of establishing a cyber unit in every National Guard of a State.”70 That recommendation was not included in the House version of the NDAA, but there is a provision authorizing Cyber Command to “accept voluntary and uncompensated services from cybersecurity experts.”71 By contrast, in the Senate, Senators Jacky Rosen and Marsha Blackburn had proposed establishing a pilot program for a cyber reserve for DOD and DHS.72 That proposal also was not included in its entirety in the Senate version of the NDAA, but there is a provision for the Secretary of the Army to “carry out a pilot project to establish a Civilian Cybersecurity Reserve.”73 Each of the proposed provisions is a step forward, and enacting both the House and Senate provisions would be worthwhile, but the final version of the NDAA should go further than the existing proposals and move promptly to a full-fledged cyber civilian reserve and augmented National Guard cyber capabilities.

Establishing a “surge capability” able to add significant numbers of personnel from the private sector for cybersecurity activities in the event of a conflict should be a high priority for the United States. The value of such a capability has been underscored in the context of the conflict in Ukraine, in which:


[i]mmediately after the invasion, Ukraine also began to elicit support from the private sector to supplement its own cyber capabilities. One aspect of this effort was to call on national private-sector experts. Requests for volunteers to help protect [critical infrastructures] were reportedly circulated through communities at the request of a senior Ukrainian defence ministry official. These volunteers were requested to help defend infrastructure, identify critical vulnerabilities and carry out other defensive tasks.74

In the United States, such a reserve capability could be established by a combination of the proposed measures now in the House and Senate versions of the NDAA as well as Representative Green’s proposal for expanding National Guard cyber capabilities.

  • A cybersecurity civilian reserve corps would provide the United States with access to personnel beyond those seeking to join the military. Such an approach is being utilized by US allies with very substantial cyber capabilities. The UK has already established its Joint Cyber Reserve Force with a “mantra of high-end cyber talent first,” so that the “Reserves ‘conventional’ physical entry standards (physical ability, fitness, etc.) are not our immediate concern. This ensures that we can select untapped talented individuals who would not normally see reserve service as an option or possibility.”75 Other countries such as Estonia have also developed reserve models to “bring together competent IT experts who can solve significant and long-term cyber incidents.”76
  • The National Guard currently includes both Army and Air Force cyber units.77 However, expanding their numbers and better integrating them into the force would have high value. Given the substantial demand for additional cyber personnel, and as previously recommended, “the number of National Guard personnel directed toward the cyber mission should be significantly increased. . . . [and] a reasonable initial step would be to increase Guard end strength in order to increase the number of cyber personnel to approximately double the current levels.”78 In accomplishing that increase, the “Department of Defense [should] bolster its operational capacity in cyberspace through improved utilization of the National Guard,” as Congress has previously called for: “Despite [Congressional] calls for change, the Department of Defense and the military services appear not to have made any meaningful change in how the expertise resident within the National Guard and the Reserve Component can be better leveraged.”79

In sum, combining the current versions of the House and Senate NDAA legislation and additionally establishing an expanded National Guard cyber capability would result in significant benefits to the United States in the event of a conflict.

G. Expand Cyber Command’s “Hunt Forward” model to support key critical infrastructures in wartime in the United States

US Cyber Command regularly works with allied and partner nations at their request to enhance the cybersecurity of their critical infrastructures.80 Testimony from Cyber Command has described that “since 2018, [it] has deployed hunt forward teams 40 times to 21 countries to work on 59 networks.”81 Cyber Command has described its Hunt Forward operations (HFOs) as follows:


. . . strictly defensive cyber operations conducted by U.S. Cyber Command (USCYBERCOM) at the request of partner nations. Upon invitation, USCYBERCOM Hunt Forward Teams deploy to partner nations to observe and detect malicious cyber activity on host nation networks. The operations generate insights that bolster homeland defense and increase the resiliency of shared networks from cyber threats.82

A Hunt Forward operation is a joint effort, as the Cyber Command operators “sit side-by-side with partners and hunt for vulnerabilities, malware, and adversary presence on the host nation’s networks.”83

As a matter of policy, Cyber Command does not currently undertake operations in the United States. In wartime, however, Cyber Command should have an expanded mission to support key critical infrastructures most relevant to national defense. As described above, such governmental efforts have been instrumental—along with the actions of the private sector—in supporting Ukraine, and a similar collaborative approach should be undertaken for wartime in the United States.

In the United States in wartime, Cyber Command hunting capabilities should be coordinated with the relevant critical infrastructures and with the proposed Integrated Cybersecurity Providers Corps. Undertaking prior training and exercises would, of course, make any actual operations more effective. Additionally, to accomplish such a mission without diverting resources from Cyber Command’s core mission set (i.e., global cyber operations and defense of DOD networks), Cyber Command would likely require a substantial increase in personnel for wartime operations.84 As discussed in the prior section, there are good reasons to establish a wartime cyber civilian reserve and to increase National Guard cybersecurity capabilities—and supporting Cyber Command wartime operations would be one of the most important.

In expanding the mission as recommended above, Cyber Command would be subject to the same constitutional requirements as other federal departments and agencies, including the Fourth Amendment’s limits on intrusion into private activities. While searches based on enemy actions in wartime would likely be deemed reasonable and warrants could be obtained, a much better approach—both as a matter of constitutional law and appropriate policy—would be for the federal government to work with the key critical infrastructures to establish a consensual wartime set of arrangements and for Congress to undertake a review of the agreed activities.85

H. Establish an undersea infrastructure protection corps

The United States and its allies have long recognized the vulnerability of undersea pipelines and cables.86 Attacks on the Nord Stream 1 and 2 pipelines in September 2022 have underscored those vulnerabilities and raised the visibility of the security issue at the highest levels of government.87 At the May 2023 G7 summit, the group determined, “[w]e are committed to deepen our cooperation within the G7 and with like-minded partners to support and enhance network resilience by measures such as extending secure routes of submarine cables.”88 Relatedly, the Quad grouping of countries (i.e., Australia, India, Japan, United States) agreed to establish “the Quad Partnership for Cable Connectivity and Resilience [which] will bring together public and private sector actors to address gaps in the infrastructure and coordinate on future builds.”89

The G7 and Quad actions are future-oriented, but pipelines and undersea cables are currently subject to more immediate vulnerabilities, with Russia being a particularly concerning threat.90 As NATO Secretary General Jens Stoltenberg has stated:


So we know that Russia has the capacity to map, but also potentially to conduct actions against critical infrastructure. And that’s also the reason why we have, for many years, addressed the vulnerability of critical undersea infrastructure. This is about gas pipelines, oil pipelines, but not least thousands of kilometres of internet cables, which is so critical for our modern societies—for financial transaction, for communications, and this is in the North Sea, in the Baltic Sea, but across the whole Atlantic, the Mediterranean Sea.91

A report to the European Parliament similarly highlighted the issues, noting the Russian Navy has a “special focus” on the Yantar-class intelligence ships and auxiliary submarines, which have the capacity to disrupt undersea cable infrastructure. Also of note are “new abilities to deploy mini-submarines” to explore undersea cables by stealth, according to the report.92

As a consequence of those concerns, NATO has established the NATO Maritime Centre for the Security of Critical Undersea Infrastructure as a partnership with the private sector. The center will be based in Northwood, near London. NATO had earlier set up a coordination cell in Brussels to better monitor pipelines and subsea cables that are deemed especially endangered by underwater drones and submarines.93 Per Secretary General Stoltenberg, the purpose is to strengthen the protection of undersea infrastructure:


And of course, there’s no way that we can have NATO presence alone [surveilling] all these thousands of kilometres of undersea, offshore infrastructure, but we can be better at collecting information, intelligence, sharing information, connecting the dots, because also in the private sector is a lot of information. And actually, there’s a lot of ongoing monitoring of traffic at sea and to connect all those flows of information will increase our ability to see when there is something abnormal and then react dependent on that.94

Secretary General Stoltenberg highlighted the importance of collaborating with the private sector:


And then most of it is owned and operated by the private sector and they also have a lot of capabilities, to protect, to do repair and so on. So the purpose of this Centre . . . is to bring together different Allies to share information, share best practices, and to be able to react if something abnormal happens and then also to ensure that the private sector and the government, the nations are working together.95

As the new NATO effort underscores, resilience of undersea infrastructure will be of high consequence in the event of armed conflict. However, NATO itself does not generally provide the capabilities that the organization utilizes, but rather relies on the capabilities provided by its member nations. Accordingly, the United States should work with allies and those elements of the private sector that have relevant undersea capabilities to establish an international Undersea Infrastructure Protection Corps, both to support NATO activity and because security for undersea infrastructures is inherently international. This corps should include both the private-sector builders/maintainers and the owners of undersea cables and pipelines. That group would organize the actions required to enhance the resilience that would be necessary in wartime.

The countries and companies connected by cables and pipelines include substantial numbers of US allies. According to one industry analysis, the top five undersea cable vendors are Alcatel-Lucent Enterprise (France), SubCom LLC (United States), NEC Corporation (Japan), Nexans (France), and Prysmian Group (Italy).96 In terms of ownership, US companies are significantly involved, with Google, Facebook, Microsoft, and Amazon being significant investors in cables.97 With respect to undersea pipelines, there are multiple such pipelines in the North Sea, Baltic Sea, Mediterranean Sea, and the Gulf of Mexico, all, of course, involving US allies and/or the United States.98 Accordingly, there should be sufficient geopolitical alignment with respect to establishing an Undersea Infrastructure Protection Corps, and while the precise arrangements will have to be negotiated, it is notable that several countries have already taken steps. The UK, Norway, and Italy are each organizing efforts to enhance pipeline security, and the United States, the UK, and France have well-established undersea capabilities.99

An international Undersea Infrastructure Protection Corps should have three areas of focus. First, as is true with respect to other information and communication technology networks, undersea cables will need the same type of effective cybersecurity. As noted above, several significant undersea cable owners are also companies that have been extensively involved in the defense of Ukraine’s ICT networks, including working with the United States and the UK. That operational experience and real-time experience with public-private coordination should provide a basis for extending such an approach to undersea cables.100

Second, all undersea cables eventually come out of the sea to on-ground “landing points.” John Arquilla has indicated that “concerns about the vulnerability of landing points, where the cables come ashore . . . has led to the idea of having many branch points near landfall.”101 Arquilla also describes efforts “to improve landing-point security through concealment and hardening—including, in the latter case, the shielding with armor of the cable segments in shallower waters near landing points. . . . [and also use of] both surveillance technologies and increased on-site security.”102 An Undersea Infrastructure Protection Corps can build on such approaches.103

Third, undersea infrastructures can be repaired, with cable repairs regularly undertaken for commercial reasons.104 However, as a report to the European Parliament describes, the availability of cable repair capabilities deserves review:


A key and often neglected vulnerability of the cable infrastructure is the capabilities . . . for repair. The capabilities within Europe are very limited . . . The repair infrastructure is often not featured in risk analyses, although it is in larger-scale coordinated attack scenarios.105

The proposed international Undersea Infrastructure Protection Corps should evaluate whether sufficient repair capability would exist under the conditions of an active conflict and recommend remediation steps to address any deficiencies.

I. Expand usage of commercial space-based capabilities

In the Ukraine-Russia war, commercial space capabilities have been critical to Ukraine’s defense (as described above), as well as to maintaining governmental and economic functioning. The United States is already undertaking significant activities with the commercial space sector in the defense arena. The discussion below summarizes key elements of that effort and further proposes additional actions for the use of private-sector space capabilities that would enhance resilience in wartime for defense, government continuity, and the economy.

First, in the defense arena, commercial capabilities are being increasingly relied upon to meet the military’s space launch requirements. Private-sector SpaceX Falcon 9 reusable rockets, which regularly put commercial satellites in place, have recently been used, for example, to launch “the first 10 of the planned 28 satellites [for defense] low-latency communications [and] missile warning/missile tracking.”106 That space architecture is planned to expand to 163 satellites.107 Similarly, other companies such as Rocket Lab have commercial launch capabilities.108 Continuing the use of commercial launch capabilities to generate military constellations, as well as assuring their availability in wartime, will be critical to effective defense operations.

Second, and as the foregoing suggests, the proliferation of satellites that the DOD can rely on in wartime significantly adds to the resilience of the space enterprise. As one report describes:


The use of small, inexpensive satellites in a pLEO [proliferated low-Earth orbit] constellation also improves deterrence because of its increased cost imposition potential. The cost of a direct-ascent KE ASAT [kinetic antisatellite] is now greater than the target satellite, and because of the sheer number of assets an enemy must attack, proliferation reduces the effectiveness and impact of these weapons and other coorbital threats.109

Third, commercial sensing capabilities can complement the military’s more exquisite sensing. Satellite companies such as Planet, Capella Space, and Maxar Technologies have supplied imagery upon Ukraine’s request, as noted above.110 The Defense Department has likewise been utilizing such commercial space-based, ground-sensing capabilities, having recognized, for example, a “critical need for improved, large scale, situational awareness satisfied by less expensive, day/night, all-weather imaging satellites capable of filling gaps in space-based reconnaissance.”111 Planet, for instance, was awarded a National Reconnaissance Office (NRO) contract in October 2019 for “an unclassified, multi-year subscription service contract for daily, large-area, 3-5 meter resolution commercial imagery collection. . . . [for] access to new daily unclassified imagery over multiple areas of interest to military planners, warfighters, and the national security community.”112

Moreover, commercial sensing is becoming increasingly capable, going beyond optical capabilities, with Umbra having launched commercial “radar-imaging” microsatellites whose capabilities can be used for “remote wildlife habitat protection, pollution and plastic waste tracking, oil spill detection, military intelligence gathering [italics added], live flooding estimation during storms, and more.”113

The Defense Department also has been seeking to expand its “space domain awareness” through collaboration with the private sector. Maxar Technologies, for example, recently signed a contract with the NRO which “includes a provision to experiment with using its satellites to provide ‘non-Earth’ data, which includes high-resolution imagery of the space environment.”114 That effort would complement ongoing actions by Space Force, whose “fleet of radars, known as the Space Surveillance Network, observe space from the ground and feed data into command and control systems that catalog space objects” to deal both with issues of “congestion and debris in low Earth orbit . . . and aggression from adversaries like Russia and China.”115

Fourth, the information and communications technology networks being established by commercial providers can themselves be utilized for wartime operations, again as has been demonstrated by the use of Starlink in Ukraine. But Starlink would not be the only provider. Currently, another constellation consisting “of small, low-cost satellites under 100 kilograms capable of multiple rapid-launch” is under development, based “on an orbital mesh network of . . . commercial and military microsatellites,” which will be “capable of providing low-latency internet connectivity between sensors and weapons for military mission.”116 Future capabilities include the establishment of “free space optical networks” which will potentially have “immense benefits including high security, better data rates [and] fast installations, no requirement of licensed spectrum, best costs [and] simplicity of design,” and will be challenging to detect and to intercept “in view of small divergence of the laser beams.”117
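To make the intercept-resistance point concrete, a rough, back-of-the-envelope comparison can be made between the ground footprint of a narrow laser beam and that of a conventional radio-frequency beam over the same path. The figures used here are illustrative assumptions, not parameters of any specific system: roughly 20 microradians of laser divergence versus an RF beamwidth of about 0.5 degrees (approximately 8.7 milliradians), each over a 1,000-kilometer path.

\[
d_{\text{spot}} \approx \theta \, L
\]
\[
d_{\text{optical}} \approx (20\times10^{-6}\,\text{rad})(10^{6}\,\text{m}) = 20\,\text{m}, \qquad
d_{\text{RF}} \approx (8.7\times10^{-3}\,\text{rad})(10^{6}\,\text{m}) \approx 8{,}700\,\text{m}
\]

Under these assumed values, the optical footprint is roughly four hundred times narrower in diameter than the RF footprint, which is why an adversary would need to be nearly on the direct line of sight to detect or intercept a free space optical link.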

Governments plan to develop position, navigation, and timing capabilities—now generally provided from medium-Earth orbit by the Global Positioning System and equivalent satellites—through a variety of approaches, including low-Earth orbit constellations.118 In the United States, Xona Space Systems is “developing PULSAR—a high-performance positioning, navigation, and timing (PNT) service enabled by a commercial constellation of dedicated [low-Earth orbit] satellites.”119

Another application of commercial capabilities for defense space support is the use of the cloud for development of space-related software:


The Space Development Agency awarded a $64 million contract to Science Applications International Corp. (SAIC) to develop a software applications factory for the agency’s low Earth orbit constellation [but] . . . not [by] build[ing] an actual factory but [rather] a cloud-based development process to design, test and update software applications using a repeatable path.120

In light of the very substantial ongoing interactions between the Department of Defense and the commercial space sector, as discussed above, the key issue for wartime is simply to ensure that the existing (and future) capabilities are available for use as required. That can be accomplished in the first instance by contractual arrangements along the lines of those utilized by DOD for support from the airline and maritime industries. By way of example, the Civil Reserve Air Fleet (CRAF) provides “selected aircraft from US airlines [which are] contractually committed to CRAF [to] augment Department of Defense airlift requirements in emergencies when the need for airlift exceeds the capability of military aircraft.”121

The US Space Force is in the process of developing the Commercial Augmentation Space Reserve (CASR) program. As with CRAF, CASR would seek to establish “voluntary pre-negotiated contractual arrangements” that would provide support to the military by ensuring that “services like satellite communication and remote sensing are prioritized for U.S. government use during national security emergencies.”122 Among the issues that Space Force presumably is discussing with the private sector in connection with CASR are: which services, and in what amounts, could reliably be provided in a wartime environment; whether such services could be based on existing (or planned) private-sector constellations or whether those would need to be expanded; what provisions would need to be made for satellite and/or ground station replacement in the event of adversary attacks; what provisions for indemnification need to be agreed upon; and what level of funding would be appropriate both to incentivize the private sector and to accomplish the requisite wartime tasks, as well as to undertake planning and training prior to conflict.

Relatedly, it is worth noting that the Defense Production Act authorizes the government to require the prioritized provision of services—which would include services from space companies—and exempts any company receiving such an order from liabilities such as inability to support other customers.123 However, it would be much more desirable—and much more effective—if the necessary arrangements were established in advance on a voluntary basis, as the CASR program is seeking to do.

J. Authorities and resources

Undertaking the actions recommended above will require some important changes to governmental authorities as well as the provision of additional resources necessary to accomplish the recommended outcomes.

Regarding authorities, the administration currently has the authority to establish a Critical Infrastructure Wartime Planning and Operations Council with government and private-sector membership (including, as requested, SLTTs); establish regional resilience collaboratives; and help facilitate the establishment of sector-specific coordinating mechanisms. The administration and the Congress should work together to establish the authorities necessary to:

  • Create an Integrated Cybersecurity Providers Corps.
  • Establish a national Cybersecurity Civilian Reserve Corps and expand National Guard cybersecurity capabilities.
  • Authorize Cyber Command to support key critical infrastructures in wartime.
  • Establish an international Undersea Infrastructure Protection Corps.
  • Expand the use of private-sector space capabilities.

In enacting the required authorities, Congress should also evaluate whether any antitrust or other safe harbor exemptions would be necessary to allow for the desired level of collaboration.

In terms of resources, funding, as noted above, will be required for each of the recommended activities. Because the activities are all to be undertaken in support of national defense in a wartime context, including their costs as line items in the Defense Department budget would be appropriate. As a complement to line-item budgeting, Congress might also consider authorizing the use of transferable tax credits, which could be utilized as payment in order to offset the costs of the provision of capabilities and services prior to or in wartime.124 The precise nature of the funding arrangement might differ among the different activities. Space Force’s CASR initiative is a useful model, but whatever the precise mechanism, it is important to recognize that the private sector would incur potentially significant costs, including for pre-conflict planning and training activities, and that those costs would be incurred in support of national defense.

Conclusion

The United States has made significant efforts in enhancing the resilience of critical infrastructures, but has not yet focused on how to support those infrastructures in wartime. The recommendations in this report provide a basis for such an effort. That effort should start now. Indeed, one of the lessons from Ukraine’s wartime experience is the importance of beginning as soon as possible. As one analysis states:


. . . others seeking to replicate Ukraine’s model of success should recognise that building an effective cyber-defence posture is a marathon, not a sprint. Ukraine’s capacity to withstand Russia’s offensive stems from incremental improvements in its cyber defences over years of painstaking effort and investment. The specific plans and contingencies developed for the war would not have been possible without modernising national cyber-defence systems and raising the maturity levels of public and private critical infrastructure providers in the years leading up to the invasion. Take for example the unprecedented levels of threat intelligence sharing from external partners—undeniably a significant boon to Ukrainian situational awareness and ability to detect emerging threats. Without prior efforts to close visibility gaps, train defenders and adopt a more active cyber-defence posture, the ability to integrate and exploit this intelligence at scale would have been severely limited.125

The private sector will have important roles in any future conflict in which the United States engages. To maximize that potential, there needs to be active development of the sixth domain, with the private sector being fully included in wartime constructs, plans, preparations, and actions, as recommended in this report.

About the author

Franklin D. Kramer is a distinguished fellow and board director at the Atlantic Council. Kramer has served as a senior political appointee in two administrations, including as assistant secretary of defense for international security affairs. At the Department of Defense, Kramer was in charge of the formulation and implementation of international defense and political-military policy, with worldwide responsibilities including NATO and Europe, the Middle East, Asia, Africa, and Latin America.

In the nonprofit world, Kramer has been a senior fellow at CNA; chairman of the board of the World Affairs Council of Washington, DC; a distinguished research fellow at the Center for Technology and National Security Policy of the National Defense University; and an adjunct professor at the Elliott School of International Affairs of The George Washington University. Kramer’s areas of focus include defense, both conventional and hybrid; NATO and Russia; China, including managing competition, military power, economics and security, and China-Taiwan-US relations; cyber, including resilience and international issues; innovation and national security; and irregular conflict and counterinsurgency.

Kramer has written extensively. In addition to the current report, recent publications include China and the New Globalization; Free but Secure Trade; NATO Deterrence and Defense: Military Priorities for the Vilnius Summit; NATO Priorities: Initial Lessons from the Russia-Ukraine War; “Here’s the ‘Concrete’ Path for Ukraine to Join NATO”; and Providing Long-Term Security for Ukraine: NATO Membership and Other Security Options.

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

1    “Multi-Domains Operations Conference—What We Are Learning,” Allied Command Transformation, April 8, 2022, https://www.act.nato.int/articles/multi-domains-operations-lessons-learned.
2    Christine H. Fox and Emelia S. Probasco, “Big Tech Goes to War,” Foreign Affairs, October 19, 2022, https://www.foreignaffairs.com/ukraine/big-tech-goes-war.
3    Department of Defense (DOD), 2022 National Defense Strategy, 7, https://media.defense.gov/2022/Oct/27/2003103845/-1/-1/1/2022-NATIONAL-DEFENSE-STRATEGY-NPR-MDR.PDF.
4    The report elaborates on the discussion of the private sector and the sixth domain in Franklin D. Kramer, NATO Deterrence and Defense: Military Priorities for the Vilnius Summit, Atlantic Council, April 18, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/nato-summit-military-priorities/.
5    PPD-21 is in process of being updated. Tim Starks, “A Presidential Critical Infrastructure Protection Order Is Getting a Badly Needed Update, Officials Say,” Washington Post, May 11, 2023, https://www.washingtonpost.com/politics/2023/05/11/presidential-critical-infrastructure-protection-order-is-getting-badly-needed-update-officials-say/; White House, “Presidential Policy Directive—Critical Infrastructure Security and Resilience,” February 12, 2013, https://obamawhitehouse.archives.gov/the-press-office/2013/02/12/presidential-policy-directive-critical-infrastructure-security-and-resil; William M. (Mac) Thornberry National Defense Authorization Act For Fiscal Year 2021, Pub. L. No. 116–283, 134 Stat. 3388 (2021), https://www.congress.gov/116/plaws/publ283/PLAW-116publ283.pdf; Cybersecurity and Infrastructure Security Agency (CISA), National Infrastructure Protection Plan and Resources,  accessed July 6, 2023, https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/national-infrastructure-protection-plan-and-resources; CISA, “About CISA,” accessed July 6, 2023, https://www.cisa.gov/about.
6    DOD, National Defense Strategy 2022, 5.
7    “Statement of General Glen D. VanHerck, Commander, United States Northern Command and North American Aerospace Defense Command Before the Senate Armed Services Committee,” March 23, 2023, 8-9, https://www.armed-services.senate.gov/imo/media/doc/NNC_FY23%20Posture%20Statement%2023%20March%20SASC%20FINAL.pdf.
8    CISA, “The Attack on Colonial Pipeline: What We’ve Learned & What We’ve Done Over the Past Two Years,” May 7, 2023, https://www.cisa.gov/news-events/news/attack-colonial-pipeline-what-weve-learned-what-weve-done-over-past-two-years; Saheed Oladimeji and Sean Michael Kerner, “SolarWinds Hack Explained: Everything You Need to Know,” Tech Target, June 27, 2023, https://www.techtarget.com/whatis/feature/SolarWinds-hack-explained-Everything-you-need-to-know; and “Stop Ransomware,” CISA (website), accessed July 6, 2023, https://www.cisa.gov/stopransomware/resources.
9    Office of the Director of National Intelligence (ODNI), Annual Threat Assessment of the U.S. Intelligence Community, February 6, 2023, 12, https://www.dni.gov/files/ODNI/documents/assessments/ATA-2023-Unclassified-Report.pdf.
10    ODNI, Annual Threat Assessment, 14.
11    ODNI, Annual Threat Assessment, 10.
12    David E. Sanger and Julian E. Barnes, “U.S. Hunts Chinese Malware That Could Disrupt American Military Operations,” New York Times, July 29, 2023, https://www.nytimes.com/2023/07/29/us/politics/china-malware-us-military-bases-taiwan.html.
13    Cyber Peace Institute, “Case Study, Viasat,” June 2022, https://cyberconflicts.cyberpeaceinstitute.org/law-and-policy/cases/viasat. The case study describes the breadth of the impact: “The attack on Viasat also impacted a major German energy company who lost remote monitoring access to over 5,800 wind turbines, and in France nearly 9,000 subscribers of a satellite internet service provider experienced an internet outage. In addition, around a third of 40,000 subscribers of another satellite internet service provider in Europe (Germany, France, Hungary, Greece, Italy, Poland) were affected. Overall, this attack impacted several thousand customers located in Ukraine and tens of thousands of other fixed broadband customers across Europe.”
14    Microsoft Threat Intelligence, “A Year of Russian Hybrid Warfare in Ukraine,” March 15, 2023, 19, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW10mGC.
15    DOD, Military and Security Developments Involving the People’s Republic of China 2022, 127, https://media.defense.gov/2022/Nov/29/2003122279/-1/-1/1/2022-military-and-security-developments-involving-the-peoples-republic-of-china.pdf.
16    Irene Sánchez Cózar and José Ignacio Torreblanca, “Ukraine One Year On: When Tech Companies Go to War,” European Council on Foreign Relations, March 7, 2023, https://ecfr.eu/article/ukraine-one-year-on-when-tech-companies-go-to-war/.
17    Ariel E. Levite, Integrating Cyber Into Warfighting: Some Early Takeaways from the Ukraine Conflict, Working Paper, Carnegie Endowment for International Peace, April 2023, 14, https://carnegieendowment.org/files/Levite_Ukraine_Cyber_War.pdf.
18    Elias Groll and AJ Vicens, “A Year After Russia’s Invasion, the Scope of Cyberwar in Ukraine Comes into Focus,” CyberScoop, February 24, 2023, https://cyberscoop.com/ukraine-russia-cyberwar-anniversary/.
19    Groll and Vicens, “A Year After Russia’s Invasion.”
20    Groll and Vicens, “A Year After Russia’s Invasion.” A report from Google, Fog of War: How the Ukraine Conflict Transformed the Cyber Threat Landscape, underscores the “unprecedented” nature of the efforts including “expanded eligibility for Project Shield, our free protection against distributed denial of service attacks (DDoS), so that Ukrainian government websites and embassies worldwide could stay online and continue to offer critical services” as well as “rapid Air Raid Alerts system for Android phones in the region; support for refugees, businesses, and entrepreneurs . . . and “compromise assessments, incident response services, shared cyber threat intelligence, and security transformation services—to help detect, mitigate and defend against cyber attacks.” See Threat Analysis Group, Fog of War, Google, February 2023, 2, https://services.google.com/fh/files/blogs/google_fog_of_war_research_report.pdf.
21    Dan Black, Russia’s War in Ukraine: Examining the Success of Ukrainian Cyber Defences, International Institute for Strategic Studies, March 2023, 14, https://www.iiss.org/globalassets/media-library—content–migration/files/research-papers/2023/03/russias-war-in-ukraine-examining-the-success-of-ukrainian-cyber-defences.pdf.
22    Emma Schroeder and Sean Dack, A Parallel Terrain: Public-Private Defense of the Ukrainian Information Environment, Atlantic Council, February 2023, 14, https://www.atlanticcouncil.org/wp-content/uploads/2023/02/A-Parallel-Terrain.pdf.
23    Black, Russia’s War in Ukraine, 17-18.
24    Schroeder and Dack, A Parallel Terrain, 16.
25    Fox and Probasco, “Big Tech Goes to War,” 4.
26    Levite, Integrating Cyber Into Warfighting, 17-18.
27    Robin Fontes and Jorrit Kamminga, “Ukraine: A Living Lab for AI Warfare,” National Defense, March 24, 2023, https://www.nationaldefensemagazine.org/articles/2023/3/24/ukraine-a-living-lab-for-ai-warfare; their report notes that “the Russia-Ukraine war can also be considered the first conflict where AI-enhanced facial recognition software has been used on a substantial scale. In March 2022, Ukraine’s defense ministry started using facial recognition software produced by the U.S. company Clearview AI. This allows Ukraine to identify dead soldiers and to uncover Russian assailants and combat misinformation. What’s more, AI is playing an important role in electronic warfare and encryption. For example, the U.S. company Primer has deployed its AI tools to analyze unencrypted Russian radio communications. This illustrates how AI systems were constantly retrained and adapted, for example, to deal with idiosyncrasies in customized ways, such as colloquial terms for weaponry.”
28    Fontes and Kamminga, “Ukraine: A Living Lab”; they also note that AI has also been used for the “spread of misinformation and the use of deep fakes as part of information warfare. AI has, for example, been used to create face images for fake social media accounts used in propaganda campaigns. While the spread of disinformation is not new, AI offers unprecedented opportunities for scaling and targeting such campaigns, especially in combination with the broad range of social media platforms.”
29    Levite, Integrating Cyber Into Warfighting, 17.
30    White House, “Presidential Policy Directive—Critical Infrastructure Security and Resilience, Definitions,” February 12, 2013, https://obamawhitehouse.archives.gov/the-press-office/2013/02/12/presidential-policy-directive-critical-infrastructure-security-and-resil.
31    Government Accountability Office (GAO), Critical Infrastructure Protection: Time Frames to Complete DHS Efforts Would Help Sector Risk Management Agencies Implement Statutory Responsibilities, February 2023, 7, https://www.gao.gov/assets/gao-23-105806.pdf.
32    GAO, Critical Infrastructure Protection.
34    GAO, Critical Infrastructure Protection, 8.
35    CISA, “FSLC Charter and Membership,” accessed July 6, 2023, https://www.cisa.gov/fslc-charter-and-membership; CISA, “Critical Infrastructure Partnership Advisory Council (CIPAC),” accessed July 6, 2023, https://www.cisa.gov/resources-tools/groups/critical-infrastructure-partnership-advisory-council-cipac; CISA, “Government Coordinating Councils,” accessed July 6, 2023, https://www.cisa.gov/resources-tools/groups/government-coordinating-councils; and CISA, “Sector Coordinating Councils,” accessed July 6, 2023, https://www.cisa.gov/resources-tools/groups/sector-coordinating-councils.
36    White House, “Office of National Cyber Director,” accessed July 6, 2023, https://www.whitehouse.gov/oncd/.
37    White House, National Cybersecurity Strategy Implementation Plan, July 2023, https://www.whitehouse.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
38    Transportation Security Administration (TSA), “TSA Issues New Cybersecurity Requirements for Airport and Aircraft Operators,” March 7, 2023, https://www.tsa.gov/news/press/releases/2023/03/07/tsa-issues-new-cybersecurity-requirements-airport-and-aircraft; TSA, “TSA Issues New Cybersecurity Requirements for Passenger and Freight Railroad Carriers,” October 18, 2022, https://www.tsa.gov/news/press/releases/2022/10/18/tsa-issues-new-cybersecurity-requirements-passenger-and-freight; TSA, “TSA Revises and Reissues Cybersecurity Requirements for Pipeline Owners and Operators,” July 21, 2022, https://www.tsa.gov/news/press/releases/2022/07/21/tsa-revises-and-reissues-cybersecurity-requirements-pipeline-owners; and Environmental Protection Agency, “EPA Cybersecurity for the Water Sector,” accessed July 6, 2023, https://www.epa.gov/waterriskassessment/epa-cybersecurity-water-sector.
39    CISA, “State and Local Cybersecurity Grant Program,” accessed July 4, 2023, https://www.cisa.gov/state-and-local-cybersecurity-grant-program.
40    CISA, “JCDC FAQs, What Are JCDC’s Core Functions,” accessed June 24, 2023, https://www.cisa.gov/topics/partnerships-and-collaboration/joint-cyber-defense-collaborative/jcdc-faqs.
41    CISA, “Cybersecurity Training and Exercises,” accessed July 4, 2023, https://www.cisa.gov/cybersecurity-training-exercises.
43    CISA, “JCDC 2023 Planning Agenda.”
44    CISA, “JCDC 2023 Planning Agenda.”
45    “National Cyber Investigative Joint Task Force,” Federal Bureau of Investigation, accessed July 18, 2023, https://www.fbi.gov/investigate/cyber/national-cyber-investigative-joint-task-force; White House, National Cybersecurity Strategy Implementation Plan, July 2023, 21, https://www.whitehouse.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
46    National Security Agency, NSA Cybersecurity Collaboration Center, accessed September 7, 2023, https://www.nsa.gov/About/Cybersecurity-Collaboration-Center/
47    Government of Finland, Ministry of Defense, Security Committee, Security Strategy for Society, November 2, 2017, 98, https://turvallisuuskomitea.fi/wp-content/uploads/2018/04/YTS_2017_english.pdf.
48    Government of Finland, Security Strategy for Society, 5.
49    Government of Finland, Security Strategy for Society, 5.
50    Government of Finland, Security Strategy for Society, 7.
51    Government of Finland, Security Strategy for Society, 7-8.
52    CISA, Federal Senior Leadership Council Charter, accessed July 4, 2023, https://www.cisa.gov/sites/default/files/publications/fslc-charter-2021-508.pdf.
53    The FBI-led National Cyber Investigative Joint Task Force is, of course, a joint task force, but it is not oriented to wartime activities.
54    The National Cybersecurity Implementation Plan requires DOD to issue an “updated DOD cyber strategy,” and while the full scope of homeland defense goes beyond cyber, the two efforts might be undertaken in a coordinated fashion. White House, National Cybersecurity Strategy Implementation Plan, July 2023, 21, https://www.whitehouse.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
55    Northern Command, “Defending the Homeland,” accessed July 6, 2023, https://www.northcom.mil/HomelandDefense.
56    Government of Finland, Security Strategy for Society, 10.
57    CISA, “CISA Regional Office Fact Sheets,” August 4, 2021, https://www.cisa.gov/resources-tools/resources/cisa-regional-office-fact-sheets; and CISA, “State and Local Cybersecurity Grant Program.”
58    Section 331(c)(1)(a), Senate Armed Services Committee, National Defense Authorization Act for Fiscal Year 2024, accessed September 2, 2023, https://www.armed-services.senate.gov/imo/media/doc/fy24_ndaa_bill_text.pdf
59    Section 331(d), Senate Armed Services Committee, National Defense Authorization Act for Fiscal Year 2024, accessed September 2, 2023, https://www.armed-services.senate.gov/imo/media/doc/fy24_ndaa_bill_text.pdf
60    Section 331(c)(2), Senate Armed Services Committee, National Defense Authorization Act for Fiscal Year 2024, accessed September 2, 2023, https://www.armed-services.senate.gov/imo/media/doc/fy24_ndaa_bill_text.pdf.
61    CISA, “Information Sharing: A Vital Resource,” accessed July 2, 2023, https://www.cisa.gov/topics/cyber-threats-and-advisories/information-sharing/information-sharing-vital-resource.
62    Analysis and Resilience Center for Systemic Risk, “Who We Are,” https://systemicrisk.org/.
63    Analysis and Resilience Center for Systemic Risk, “What We Do,” https://systemicrisk.org/.
64    FS-ISAC, “Critical Providers Program FAQ,” accessed July 2, 2023, https://www.fsisac.com/faq-criticalproviders.
65    FS-ISAC, “Critical Providers.”
66    White House, National Cybersecurity Strategy, March 2023, 4, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.
67    The National Cybersecurity Strategy Implementation Plan takes a step in this direction by requiring the Department of Commerce to publish a “Notice of Proposed rulemaking on requirements, standards, and procedures for Infrastructure-as-a-Service (IaaS) providers and resellers.” White House, National Cybersecurity Strategy Implementation Plan, July 2023, 25, https://www.whitehouse.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
68    CISA, “Support to Critical Infrastructure at Greatest Risk, (‘Section 9 Report’) Summary,” February 8, 2021, https://www.cisa.gov/resources-tools/resources/support-critical-infrastructure-greatest-risk-section-9-report-summary.
69    “Census Bureau Reports There Are 89,004 Local Governments in the United States,” US Census Bureau, August 30, 2012, https://www.census.gov/newsroom/releases/archives/governments/cb12-161.html.
70    “Amendment to Rules Comm. Print 118–10 Offered by Mr. Green of Tennessee,” June 27, 2023, https://amendments-rules.house.gov/amendments/Cyber%20in%20National%20Guard%20Amendment230630140357934.pdf.
71    Section 1521, Rules Committee Print 118–10 Text of H.R. 2670, The National Defense Authorization Act for Fiscal Year 2024, June 23, 2023, https://rules.house.gov/sites/republicans.rules118.house.gov/files/RCP_xml_1.pdf.
72    Office of Senator Jacky Rosen, “Rosen, Blackburn Introduce Bipartisan Bills to Strengthen Federal Response to Cyberattacks,” March 21, 2023, https://www.rosen.senate.gov/2023/03/21/rosen-blackburn-introduce-bipartisan-bills-to-strengthen-federal-response-to-cyberattacks/.
73    Section 1116, Senate Armed Services Committee, National Defense Authorization Act for Fiscal Year 2024, accessed September 2, 2023,  https://www.armed-services.senate.gov/imo/media/doc/fy24_ndaa_bill_text.pdf.
74    Black, Russia’s War in Ukraine, 14.
75    “Joint Cyber Reserve Force,” Gov.UK, accessed June 3, 2023, https://www.gov.uk/government/groups/joint-cyber-reserve-force.
76    Republic of Estonia, Information System Authority, “Cyber Security in Estonia 2023,” 51, https://www.ria.ee/media/2702/download.
77    National Guard, “National Guard Cyber Defense Team,” accessed September 2, 2023, https://www.nationalguard.mil/Portals/31/Resources/Fact%20Sheets/Cyber%20Defense%20Team%202022.pdf.
78    Franklin D. Kramer and Robert J. Butler, “Expanding the Role of the National Guard for Effective Cybersecurity,” The Hill, April 21, 2021, https://thehill.com/opinion/cybersecurity/550740-expanding-the-role-of-the-national-guard-for-effective-cybersecurity/.
79    Mark Pomerleau, “Lawmakers Pushing for More Integration of National Guard, Reserve Personnel into DOD Cyber Forces,” Defensescoop, June 12, 2023, https://defensescoop.com/2023/06/12/lawmakers-pushing-for-more-integration-of-national-guard-reserve-personnel-into-dod-cyber-forces/.
80    Cyber Command, “Hunt Forward Operations,” November 15, 2022, https://www.cybercom.mil/Media/News/Article/3218642/cyber-101-hunt-forward-operations/.
81    “2023 Posture Statement of General Paul M. Nakasone,” US Cyber Command, March 7, 2023, https://www.cybercom.mil/Media/News/Article/3320195/2023-posture-statement-of-general-paul-m-nakasone/.
82    Cyber Command, “Hunt Forward Operations.”
83    Cyber Command, “Hunt Forward Operations.”
84    This is a nontrivial requirement, as there is a significant shortage of highly skilled cyber talent, and retaining such talent has been a challenge for US Cyber Command. As Gen. Nakasone recently observed, “someone that has this type of training is very, very attractive to those on the outside.” Jim Garamone, “Cyber Command, NSA Successes Point Way to Future,” DOD News, March 8, 2023, https://www.defense.gov/News/News-Stories/Article/Article/3322765/cyber-command-nsa-successes-point-way-to-future/.
85    There are important legal issues regarding the interface between the Fourth Amendment and constitutional wartime powers, but establishing a consensual regime—which should be in the self-interest of critical infrastructures —would avoid those questions.
86    There are approximately 550 existing and planned undersea cables; see TeleGeography, “Submarine Cable Frequently Asked Questions,” accessed July 2, 2023, https://www2.telegeography.com/submarine-cable-faqs-frequently-asked-questions. There are far fewer undersea pipelines, but for Europe, important pipelines include those in the North, Baltic, and Mediterranean seas with “about 8,000 kilometers (5,000 miles) of oil and gas pipelines crisscross[ing] the North Sea alone.” Lorne Cook, “NATO Moves to Protect Undersea Pipelines, Cables as Concern Mounts over Russian Sabotage Threat,” Associated Press, June 16, 2023, https://apnews.com/article/nato-russia-sabotage-pipelines-cables-infrastructure-507929033b05b5651475c8738179ba5c.
87    There is at least some indication that Ukraine undertook those Nord Stream actions. See Julian E. Barnes and Michael Schwirtz, “C.I.A. Told Ukraine Last Summer It Should Not Attack Nord Stream Pipelines,” New York Times, June 13, 2023, https://www.nytimes.com/2023/06/13/us/politics/nord-stream-pipeline-ukraine-cia.html.
88    White House, “G7 Hiroshima Leaders’ Communiqué,” May 20, 2023, paragraph 39, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.
89    White House, “Quad Leaders’ Summit Fact Sheet,” May 20, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/quad-leaders-summit-fact-sheet/.
90    Though there is at least some indication that Ukraine undertook the Nord Stream actions. Barnes and Schwirtz, “C.I.A. Told Ukraine Last Summer It Should Not Attack Nord Stream Pipelines.”
91    Jens Stoltenberg, “Press Conference by NATO Secretary General Jens Stoltenberg Following the Meeting of NATO Ministers of Defense in Brussels,” Remarks (as delivered), NATO, June 16, 2023, https://www.nato.int/cps/en/natohq/opinions_215694.htm?selectedLocale=en.
92    Christian Bueger, Tobias Liebetrau, and Jonas Franken, Security Threats to Undersea Communications Cables and Infrastructure–Consequences for the EU, In-Depth Analysis Requested by the SEDE Sub-committee, European Parliament, June 2022, 31, https://www.europarl.europa.eu/RegData/etudes/IDAN/2022/702557/EXPO_IDA(2022)702557_EN.pdf.
93    See “NATO to Set Up New Unit to Monitor Pipelines/Other Critical Infrastructure,” Pipeline Technology Journal, June 19, 2023, https://www.pipeline-journal.net/news/nato-set-new-unit-monitor-pipelines-other-critical-infrastructure.
94    Stoltenberg, “Press Conference.”
95    Stoltenberg, “Press Conference.”
96    “Frequently Asked Questions, Submarine Cable Systems Market,” MarketsandMarkets, accessed July 1, 2023, https://www.marketsandmarkets.com/Market-Reports/submarine-cable-system-market-184625.html.
97    “Submarine Cable Frequently Asked Questions,” TeleGeography, accessed July 1, 2023.
98    “Underwater Arteries—the World’s Longest Offshore Pipelines,” Offshore Technology, September 9, 2014, https://www.offshore-technology.com/features/featureunderwater-arteries-the-worlds-longest-offshore-pipelines-4365616/; “After Nord Stream Attack, Europe Scrambles to Secure Subsea Pipelines,” Maritime Executive, October 2, 2022, https://maritime-executive.com/article/after-nord-stream-attack-europe-scrambles-to-secure-subsea-pipelines; “Gulf of Mexico Data Atlas,” National Centers for Environmental Information (“There are over 26,000 miles of oil and gas pipeline on the Gulf of Mexico seafloor”), accessed July 1, 2023, https://www.ncei.noaa.gov/maps/gulf-data-atlas/atlas.htm?plate=Gas%20and%20Oil%20Pipelines.
99    “After Nord Stream Attack,” Maritime Executive; and Christiana Gallardo, “UK and Norway Team Up to Protect Undersea Cables, Gas Pipes in Wake of Nord Stream Attacks,” Politico, June 28, 2023, https://www.politico.eu/article/uk-norway-team-up-protect-undersea-cables-gas-pipelines/.
100    For a series of specific recommendations, see Sherman, Cyber Defense Across the Ocean Floor.
101    John Arquilla, “Securing the Undersea Cable Network,” Hoover Institution, 2023, 4, https://www.hoover.org/sites/default/files/research/docs/Arquilla_SecuringUnderseaCable_FINAL_0.pdf.
102    Arquilla, “Securing the Undersea Cable Network,” 8, 9.
103    For recommendations on enhancing the cybersecurity of undersea cables, see also Justin Sherman, Cyber Defense Across the Ocean Floor, Atlantic Council, September 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/cyber-defense-across-the-ocean-floor-the-geopolitics-of-submarine-cable-security/.
104    Mick Green et al., “Submarine Cable Network Security,” Slide Deck, International Cable Protection Committee, April 13, 2009, https://www.iscpc.org/publications/.
105    Bueger, Liebetrau, and Franken, “Security Threats to Undersea Communications Cables,” 53.
106    DOD, “Space Development Agency Successfully Launches Tranche 0 Satellites,” April 2, 2023, https://www.defense.gov/News/Releases/Release/Article/3348974/space-development-agency-successfully-launches-tranche-0-satellites/.
107    DOD, “Space Development Agency.”
108    Rocket Lab, “About Us,” accessed July 5, 2023, https://www.rocketlabusa.com/about/about-us/.
109    Charles S. Galbreath, “Building U.S. Space Force Counterspace Capabilities: An Imperative for America’s Defense,” Mitchell Institute, June 2023, 16, https://mitchellaerospacepower.org/wp-content/uploads/2023/06/Building-US-Space-Force-Counterspace-Capabilities-FINAL2.pdf.
110    Fontes and Kamminga, “Ukraine: A Living Lab.”
111    “Planet Labs, Inc.—Peacetime Indications & Warning,” Defense Innovation Unit (DIU), 2019, https://www.diu.mil/solutions/portfolio/catalog/a0Tt0000009En0yEAC-a0ht000000AYgyYAAT.
112    “Planet Labs,” DIU.
113    “Umbra Launches World’s Most Capable Commercial Radar-Imaging Satellite,” Umbra, June 25, 2021, https://umbra.space/blog/umbra-launches-worlds-most-capable-commercial-radar-imaging-satellite.
114    Courtney Albon, “Maxar Explores New Uses for Earth Observation Satellites,” C4ISRNET, May 30, 2023, https://www.c4isrnet.com/battlefield-tech/space/2023/05/30/maxar-explores-new-uses-for-earth-observation-satellites/.
115    Albon, “Maxar Explores New Uses.”
116    Offset-X: Closing the Deterrence Gap and Building the Future Joint Force, Special Competitive Studies Project (a bipartisan, nonprofit effort), May 2023, 51, https://www.scsp.ai/wp-content/uploads/2023/05/Offset-X-Closing-the-Detterence-Gap-and-Building-the-Future-Joint-Force.pdf.
117    Suresh Kumar and Nishant Sharma, “Emerging Military Applications of Free Space Optical Communication Technology: A Detailed Review,” 2022 Journal of Physics Conference Series (2022), 1, https://iopscience.iop.org/article/10.1088/1742-6596/2161/1/012011/pdf.
118    The European Commission has undertaken an evaluation of seven different systems that it found to have met technical requirements. See L. Bonenberg, B. Motella, and J. Fortuny Guasch, Assessing Alternative Positioning, Navigation and Timing Technologies for Potential Deployment in the EU, JRC Science for Policy Report, EUR 31450 EN (Luxembourg: Publications Office of the European Union, 2023), https://doi.org/10.2760/596229.
119    “Safran to Provide GNSS Simulation Solutions for Xona Space System’s Low-Earth-Orbit Constellation and Navigation Signal,” Electronic Engineering Journal, April 6, 2023, https://www.eejournal.com/industry_news/safran-to-provide-gnss-simulation-solutions-for-xona-space-systems-low-earth-orbit-constellation-and-navigation-signals/.
120    Sandra Erwin, “SAIC to Develop ‘Software Factory’ for Space Development Agency,” SpaceNews, June 8, 2023, https://spacenews.com/saic-to-develop-software-factory-for-space-development-agency/.
121    US Air Force, “Civil Reserve Air Fleet,” accessed July 4, 2023, https://www.af.mil/About-Us/Fact-Sheets/Display/Article/104583/civil-reserve-air-fleet/.
122    Sandra Erwin, “Space Force to Further Define Details of a ‘Commercial Space Reserve,’” Space News, July 25, 2023, https://spacenews.com/space-force-to-further-define-details-of-a-commercial-space-reserve.
123    50 US Code, §§ 4511 and 4557.
124    See Franklin D. Kramer, Melanie J. Teplinsky, and Robert J. Butler, “We Need a Cybersecurity Paradigm Change,” The Hill, February 15, 2022, https://thehill.com/opinion/cybersecurity/594296-we-need-a-cybersecurity-paradigm-change/.
125    Black, Russia’s War in Ukraine, 39.

The post The sixth domain: The role of the private sector in warfare appeared first on Atlantic Council.

Warrick quoted in Bloomberg Government https://www.atlanticcouncil.org/insight-impact/in-the-news/warrick-quoted-in-bloomberg-government/ Thu, 21 Sep 2023 18:54:50 +0000 https://www.atlanticcouncil.org/?p=685363 Thomas Warrick discusses the risks of not renewing DHS authorities, which are set to expire given partisan divides in Congress.

On September 21, Forward Defense nonresident senior fellow Thomas Warrick was quoted in Bloomberg Government. He expressed concern about congressional gridlock and the resulting likelihood that several DHS protection measures will be allowed to expire. Warrick warns that these safeguards are integral to US national security.

What I worry is about the idea that we’re not shoring up our defenses at a time when it’s hard to predict where the next attack or serious threat is going to come from.



The post Warrick quoted in Bloomberg Government appeared first on Atlantic Council.

The 5×5—Bridging the divide: Cyber conflict in international relations https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-bridging-the-divide-cyber-conflict-in-international-relations/ Wed, 20 Sep 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=672524 Researchers discuss the relationship between the cyber policy and academic communities, and share their advice for those interested in breaking into each community.

The post The 5×5—Bridging the divide: Cyber conflict in international relations appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

This summer, we drew on insights from across the academic and policymaking communities for two editions of the 5×5 focused on the nature of cyber operations and their role in international relations. In June, we published an edition that featured a panel of scholars whose deliberate works have helped inspire and shape government cyber strategies in recent years. We followed that up with a second edition featuring perspectives from a range of current and former policymakers, whose day-to-day work has had to navigate government politics, processes, and other realities to confront present cyber challenges. 

These two editions can be accessed here: 

The contributions from members of these two communities are valuable in their own right, but when taken together, they provide a fuller picture of the intersection of the theory and practice of cyber conflict. Geopolitics, technologies, and the use of operations in and through the cyber domain are constantly evolving, creating new challenges for understanding cyber conflict. Continued collaboration between the scholarly and policy communities stands to deepen understanding for all involved.

To reflect on this conversation, we brought together four researchers whose career experiences span both the scholarly and policy worlds, to share their thoughts and advice for those interested in breaking into each community.

#1 What, in your opinion, is the biggest misconception about cyber conflict’s role in international relations theory?

Michael Fischerkeller, researcher, Information, Technology, and Systems Division, Institute for Defense Analyses

“I consider two [misconceptions] as equally important. The first is that independent cyber operations offer de-escalatory offramps in a militarized crisis between nuclear states. There is no empirical evidence to support this view and it is at odds with what crisis bargaining theory suggests. The second is that states’ primary cyber behaviors are best understood as an intelligence contest, vice cyber strategic competition. The intelligence contest argument is too narrow to serve as a guide to policy, as it struggles to account for the wide range of strategic outcomes (gains and losses) that are a consequence of the speed, scope, and scale of cyber campaigns/operations, and needlessly ties the definition of strategic significance to coercion theory.” 

Jackie Kerr, senior research fellow for defense and technology futures, Center for Strategic Research, National Defense University’s Institute for National Security Studies:  

The views expressed in this interview are those of Dr. Kerr and do not reflect the official policy or position of the National Defense University, the Department of Defense, or the US government.  

“For those new to the field, it might be easy to imagine cyber conflict as a very narrow subfield of security studies or the study of military strategy—focused on a specific and rather technical domain. But the biggest challenge in the field, as I see it, is actually the degree of interdisciplinary and cross-silo thinking that is needed. The digital technologies and networks that constitute cyberspace cut across many areas of society and policy, are broadly accessible, and allow for novel emerging innovations.  Understanding the potential areas of conflict, competing interests, and roles of different stakeholders and governance mechanisms—not to mention how to address these in relation to various domestic and international institutions, actors, and levels of contestation—requires a broad range of expertise.”  

Erica Lonergan, assistant professor, School of International and Public Affairs, Columbia University

“An enduring and significant misconception is that cyberspace is a dangerous, escalatory domain and that conflict in cyberspace is likely to spill over into the kinetic realm. This is an assumption that exists at the highest levels of policymaking. Secretary Austin, for example, has described cyberspace as an escalatory environment, and US President Joe Biden has said that if the United States ends up in a conventional conflict, it will likely be because of a cyberattack. However, academic research has revealed little evidence of cyber escalation. The implications of such misconceptions are significant as they continue to shape US cyber strategy and policy.” 

Joshua Rovner, associate professor, School of International Service, American University

“That cyber conflict is akin to war. Cyber conflict is not anything like the bloody business of war, where states use violence to coerce their enemies and wreck their forces. It is about information superiority. States use cyberspace for espionage, deception, and propaganda. Their basic goal is the same: understanding the world better than their rivals.”

#2 What would you like to see scholars and students studying cyber conflict better understanding about policymaking?

Fischerkeller: “[I would like them to understand] that once a theory has set a foundation for strategy, policymakers benefit from what Alexander George and Richard Smoke call ‘contingent generalizations.’ These comprise policy insights that are informed by context—for example, the distinct geopolitical conditions of competition, militarized crisis, and armed conflict; the interactions between nuclear and non-nuclear states; and state versus non-state actors.” 

Kerr: “I think it is important for students and scholars who are interested in policy to gain as much granular familiarity as possible with policymaking processes and institutions relevant to their work. This can provide insights into the silos, bureaucratic frictions, and institutional politics involved—some of the real dilemmas faced by policymakers—all of which can be quite helpful for delineating what kinds of policy recommendations might be most valuable and to whom.”  

Lonergan: “Policymaking is often a messy, complicated, bureaucratic process. As scholars, we like to debate the intellectual merits and substance of various ideas and strategies, carefully examining documents that the government publishes. But the reality is that those documents reflect underlying bureaucratic politics and organizational processes—they are the result of bargaining, logrolling, standard operating procedures, parochial interests and biases, and so on. Therefore, it is important to take the behind-the-scenes processes into account when evaluating strategy and policy.”  

Rovner: “They should start by studying something else. Instead of focusing on cyber, they should start by studying diplomacy, intelligence, and war. They should study the policy process with care, noting especially the ways in which weighty theoretical issues play out in mundane matters like budgets and authorities. Only then should they start thinking about cyber.”

#3 What is a scholarly piece of literature on cyber conflict that you recommend aspiring policymakers read closely and why?

Fischerkeller: “This would be a function of their policy portfolio. If they are interested in cyber organizational development and capacity building, I would recommend Max Smeets’s No Shortcuts: Why States Struggle to Develop a Military Cyber-Force. If they are interested in the nexus of cyber campaigns/operations and the nuclear weapons enterprise, I would recommend Herbert Lin’s Cyber Threats and Nuclear Weapons. If they are interested in national strategy, I would recommend Cyber Persistence Theory: Redefining National Security in Cyberspace. And so on.” 

Kerr: “As a first recommendation for aspiring policymakers, I would actually recommend that they take a step further back and read something on the history of computing, cybernetic theory, the Internet and its governance, and the different ways these have been thought about in connection both to interdependent economic growth and democracy, and to conflict and strategic competition. Norbert Wiener’s Cybernetics would not be a bad starting point. Thomas Rid’s The Rise of the Machines and Laura DeNardis’ The Global War for Internet Governance would also be excellent. I recommend this because this is the larger perspective that will help people entering the policy arena see the connections between more narrowly circumscribed policy debates of this moment and the longer-term evolution and bigger issues at stake.”  

Lonergan: “Aspiring policymakers should read Lennart Maschmeyer’s 2021 International Security article, ‘The Subversive Trilemma: Why Cyber Operations Fall Short of Expectations.’ This piece provides an alternative perspective on cyber conflict that will likely challenge some of the conventional wisdom in policy circles, because Maschmeyer argues that cyber conflict is more like subversion than it is like conflict. The ‘subversive trilemma’ in cyberspace, in which there are tradeoffs between speed, intensity, and control of cyber operations, accounts for the gap between expectations and reality of cyber conflict.”  

Rovner: “Robert Chesney and Max Smeets edited Deter, Disrupt, or Deceive: Assessing Cyber Conflict as an Intelligence Contest, in which contributors debate whether cyber conflict is best seen in terms of intelligence. This debate has important implications for policy. It speaks to several fundamental questions. Which agencies and organizations should be responsible for cyber operations? Who should oversee them? How should they measure success and failure?” 

#4 How has understanding of cyber conflict evolved in the last five years within the cyber policy community and how do you see it evolving in the next five years?   

Fischerkeller: “There has been a notable shift to the recognition that the primary strategic threat in and through cyberspace is from campaigns whose effects are short of armed-attack equivalence but whose cumulative gains are of strategic significance. Additionally, there has been a recognition that cybersecurity is national security. And, unfortunately, states’ cyber behaviors have demonstrated that extraordinary, explicit efforts to cultivate voluntary, non-binding cyber norms have met with limited success.”

Kerr: “The last five years have been a productive time for innovative thinking in the field. There have been serious efforts to understand complex issues, including the nature of strategic interactions, different adversary conceptions of the domain, cross-domain interaction and escalation dynamics, the relationships of cyber conflict with intelligence competition and with other cyber-enabled forms of conflict—and the list goes on. While these efforts have led to significant insights, the continuing evolution of global politics, technology, and cyberspace itself keeps pushing forward new challenges for both policy and theory. I think the intersections between thinking on cyber policy, artificial intelligence, and other emerging areas of technology competition and cooperation will be important areas to watch.”  

Lonergan: “A significant inflection point in cyber policy took place five years ago. In 2018, the Defense Department published a new cyber strategy anchored in the concept of Defend Forward and US Cyber Command promulgated its first vision statement guided by the idea of ‘persistent engagement.’ Both define a broader and more assertive role for the US military in cyberspace. But we still lack real metrics that enable experts to evaluate the outcomes of these approaches. Looking ahead over the next five years, I hope the policy community focuses on assessing the implementation of these strategies, with an eye toward gauging how they integrate and are aligned with broader US strategic goals.”  

Rovner: “The policy debate has become more interesting and expansive. New ideas about the logic of cyber conflict, and the nature of different cyber actors, have entered the chat. This has happened in part because scholars have deliberately sought to speak to policy, and their research has nudged the policy community to think harder about the uses and limits of cyberspace operations. It helps that many of these scholars have experience in government, the military, and the intelligence community. The quality of their research—and the clarity of their writing—has probably disabused policymakers of the idea that cyber issues are only comprehensible to technical specialists. The next five years will be interesting, mostly because we will have a huge amount of data on current conflicts to explore. Information from the Russia-Ukraine war and the ongoing US-China competition will help put our theories to the test.”

#5 How can scholars and policymakers of cyber conflict better incorporate perspectives from each other’s work?

Fischerkeller: “The military uses the phrase ‘right seat ride’ to describe a process whereby an incoming commander stays at the hip of an outgoing commander to gather in-depth knowledge of the historical, present, and future challenges facing the command. A similar model is equally valuable for policymakers and scholars. Policy shops ought to leverage scholar-in-residence programs or, alternatively, the Intergovernmental Personnel Act that allows for the temporary assignment of skilled personnel between the federal government and state and local governments, colleges and universities, tribal governments, federally funded research and development centers, and other eligible organizations. These approaches are particularly relevant for cyber policy, as much of the background that informs cyber policy cannot be discovered by scholars via open-source research.” 

Kerr: “There are so many areas where mutual learning is possible, and I have seen a lot of this going on that is productive. My first recommendation is to get involved in the communities that have developed to deliberately bridge this gap. People know each other, attend workshops together, read and comment on each other’s work, and really facilitate more innovative thinking for all involved. There also are opportunities for individuals to rotate between scholarly and policymaking roles—whether entering the policy arena temporarily from academia or taking a period off from government service to conduct research at a think tank or university. Going in either direction is a great way to learn.”  

Lonergan: “This challenge is not unique to the field of cyber conflict. Bridging the gap between academics and policymakers is an important and enduring issue in the international relations field. What makes this even more complex in cyberspace is the multistakeholder nature of the cyber domain, which significantly expands the ecosystem of relevant parties, each of which has unique perspectives, interests, and expertise. Therefore, seeking out opportunities to engage with this diverse community—encompassing not just academics and beltway bureaucrats, but also the private sector, non-governmental organizations, big tech, civil society organizations, and so on—will enrich the understandings of all involved.”  

Rovner: “[They can do so] by stepping away from their day jobs, at least for a while. Policymakers who spend a little time in academia get the chance to think about the bigger picture, and to think about how their work fits in. Mid-career master’s degrees are particularly useful here, as are programs with fewer time commitments, like MIT’s Seminar XXI. The opposite is also true. Scholars who routinely interact with policymakers are likely to get a more detailed sense of cyberspace competition. Spending time in government can be illuminating.”

#6 What is one piece of advice you have for scholars interested in making a more direct impact on cyber policymaking?

Fischerkeller: “Write concise, peer-reviewed essays that speak directly to a current or likely future cyber challenge with the intention of submitting those essays to well-established online fora for publication consideration.” 

Kerr: “There are many things you can do here, some of which I have already mentioned. But one of the most important things that I will stress here is that human relationships are key. There is no substitute for getting to know people in the policy world and having regular enough interaction to understand what they are wrestling with and where scholarly research can help. Whether this happens through attending the same conferences, reading and engaging with the same policy-relevant publications, or fellowship stints in government service, academics who get to know and engage regularly with people in the policy community will benefit from learning how policymakers think about the issues, and iteratively contributing to the existing policy debates. For this, they also need to learn where and how to publish output that will be picked up and seen as relevant in the policy circles. This will not always be the same output as is relevant to within-discipline academic prestige or tenure track progression, but the two objectives can also be mutually beneficial.” 

Lonergan: “First and foremost, scholars should familiarize themselves with what is going on in the policymaking realm—just as they would when tackling a new research project. It is important to take care to understand the significant policy work that has already been accomplished, prior efforts that have been less successful and why, and so on. I would also encourage scholars to actively engage in dialogues and venues that bring together scholars and practitioners, like roundtables or other events hosted at think tanks, or find ways of getting involved in track II or track 1.5 dialogues.” 

Rovner: “When you are starting a new project, plan on three products: a peer-reviewed article in a scholarly journal, a policy paper summarizing your research, and an op-ed. Thinking about a project with these goals in mind helps broaden your audience, and it forces you to think about how to get your ideas across to policy professionals who are more or less familiar with cyber issues.”

#7 What is the biggest difference between writing for a scholarly audience vs. writing for a policymaking audience?

Fischerkeller: “Importantly, one difference should not be the quality and depth of research supporting one’s arguments. The format in which those claims are presented differs, however, as many, perhaps most policymakers prefer to read concise presentations rather than twenty-five-page articles with over one hundred footnotes. Additionally, policymakers are often interested in options rather than a definitive argument in support of a single viewpoint.” 

Kerr: “A key element in either type of writing is to really know your audience and know where you can add value. Do not underestimate your audience in either direction.  While scholars bring extensive theoretical, conceptual, and methodological rigor, policymakers often have significantly more first-hand experience and day-to-day knowledge of empirical data or precise processes relevant to the area of inquiry. For a scholarly audience, the goal is often to advance theoretical arguments within an academic discipline, often by publishing long articles or books through lengthy peer review processes. For a policy audience, some of the theory, concepts, and rigor from academia can absolutely be valuable, but they must relate to practical approaches to address fast-moving policy challenges. Writing for a policy audience also should be written in a style, format, and length that can be rapidly absorbed by busy professionals. This writing usually is much shorter and more concise than long-form academic writing, responding quickly to real-world events, and avoiding discipline-specific jargon. It also is important to write for outlets that are known and read within policy communities.” 

Lonergan: “The biggest difference lies in the ‘so what’ question. For scholarly writing, researchers usually aim to formulate and answer research questions that speak to, build on, or challenge core theoretical and empirical issues in the discipline; the ‘so what’ is a function of how that research engages with a robust academic body of work. For policy writing, the ‘so what’ is entirely different—even if the main insights may stem from academic research. What matters in this area is how a research question or topic directly informs or speaks to questions of policy.” 

Rovner: “[The biggest difference is] length. Good scholarship is a conversation with the past, as the saying goes. This means scholars need to spend time situating their work in a broader field, footnoting everything, criticizing one another’s work, and proposing new questions to encourage new arguments. Research articles and books are long and sometimes quite dense. Engaging scholarly work takes time. Because policymakers do not have the luxury of time, good policy pieces are shorter. They get to the point, eschewing the paraphernalia of academic writing in favor of the bottom line. Scholars who write for policy are ruthless about chopping up their research into digestible portions. Especially good scholars keep all the background in mind, just in case an interested policymaker wants to do a deeper dive.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Sleight of hand: How China weaponizes software vulnerabilities

China’s new vulnerability management system mandates reporting to the MIIT within forty-eight hours and restricts pre-patch publication and proof-of-concept code. This centralized approach contrasts with the United States’ voluntary system and potentially aids Chinese intelligence. The MIIT shares the resulting data with the MSS, and the mandate has undercut China’s voluntary databases as well. The MSS also funds firms to supply vulnerabilities valued for their offensive potential.


Executive summary

The Cyberspace Administration of China (CAC), the Ministry of Public Security (MPS), and the Ministry of Industry and Information Technology (MIIT) published the “Regulations on the Management of Network Product Security Vulnerabilities” (RMSV) in July 2021. Even before the regulations were implemented in September 2021, analysts had issued warnings about the new regulation’s potential impact.1 At issue is the regulations’ requirement that software vulnerabilities—flaws in code that attackers can exploit—be reported to the MIIT within forty-eight hours of their discovery by industry (Article 7, Section 2).2 The rules prohibit researchers from: publishing information about vulnerabilities before a patch is available, unless they coordinate with the product owner and the MIIT; publishing proof-of-concept code used to show how to exploit a vulnerability; and exaggerating the severity of a vulnerability.3 In effect, the regulations push all software-vulnerability reports to the MIIT before a patch is available. Conversely, the US system relies on voluntary reporting to companies, with vulnerabilities sourced from researchers chasing money and prestige, or from cybersecurity companies that observe exploitation in the wild.

Software vulnerabilities are not some mundane part of the tech ecosystem. Hackers often rely on these flaws to compromise their targets. For an organization tasked with offensive operations, such as a military or intelligence service, it is better to have more vulnerabilities. Critics consider this akin to stockpiling an arsenal.4 When an attacker identifies a target, they can consult a repository of vulnerabilities that enable their operation. Collecting more vulnerabilities can increase operational tempo, success, and scope. Operators with a deep bench of tools work more efficiently, but companies patch and update their software regularly, causing old vulnerabilities to expire. In a changing operational environment, a pipeline of fresh vulnerabilities is particularly valuable. 

This report details the structure of the MIIT’s new vulnerability databases, how the new databases interact with older ones, and the membership lists of companies participating in these systems. The report produces four key findings.

  1. The RMSV (Article 7, Section 3) requires the MIIT’s new database to share vulnerability and threat data with the National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC) and Ministry of Public Security (MPS). Sharing these data with CNCERT/CC allows them to reach organizations with offensive missions. CNCERT/CC’s partners can access vulnerability reports through its own China National Vulnerability Database (CNVD). The CNVD’s Technology Collaboration Organizations with access to reports submitted to MIIT include: the Beijing office of the Ministry of State Security’s (MSS) 13th Bureau (Beijing ITSEC, 北京信息安全测评中心); Beijing Topsec, a known People’s Liberation Army (PLA) contractor connected to the hack of Anthem Insurance; and a research center responsible for “APT [advanced persistent threat] attack and defense” at Shanghai Jiao Tong University, which houses a cybersecurity school tied to PLA hacking campaigns.5 The vulnerability sharing with the MSS 13th Bureau’s Beijing office is particularly concerning. Experts note that the bureau has spent the last twenty years getting early access to software vulnerabilities.6
  2. There are likely bureaucratic issues involved in implementing the RMSV among relevant entities. Mandatory disclosure of vulnerabilities to MIIT undercuts other, government-run, voluntary databases in China. CNVD disclosed fewer vulnerabilities after the regulation went into effect, and its publication of vulnerabilities for industrial control systems ground to a halt in 2022. This decline is likely the result of CNVD waiting for a patch before publishing. With no reporting requirement, and the inability to publish without a patch, the value of the voluntary database is unclear. One benefit may be collection. CNCERT/CC has incident-response contracts with thirty-one countries.7 It is unclear if these contracts allow CNCERT/CC to collect vulnerability information.
  3. Besides just collecting software vulnerabilities, the MIIT is funding their discovery through research grants to improve product security standards. 
  4. An MSS vulnerability database requires its private-sector partners to produce software vulnerabilities. These 151 cybersecurity companies provide software vulnerabilities to the MSS 13th Bureau. This report finds that these companies employ at least 1,190 software vulnerability researchers. Each year the researchers provide at least 1,955 software vulnerabilities to the MSS, at least 141 of which are “critical” severity. Once received by the MSS, they are almost certainly evaluated for offensive use.

The mandates to disclose vulnerabilities to the Ministry of Industry and Information Technology, not to publish vulnerability information without also simultaneously releasing a patch, not to release proof-of-concept code, and not to hype up the severity of a vulnerability, among other things, stand in stark contrast to the United States’ decentralized, voluntary reporting system.

Introduction

Software vulnerabilities are like raspberries—they go bad fast.8 For intelligence services and militaries that seek to hack an adversary’s systems, having vulnerabilities on hand is key. Software vulnerabilities are flaws that allow an attacker to exploit the software and achieve a desired effect. Knowing which software vulnerabilities operators will need in advance is challenging, so having many on hand is incredibly useful to support operational tempo. But because companies are usually quick to patch their products, a trove of software vulnerabilities, however well-stocked, is quickly rendered useless if not replenished regularly.9

Over the last six years, China has taken significant steps to collect more vulnerabilities. China’s Ministry of Public Security prohibited cybersecurity experts from traveling to foreign software security competitions in 2017, where they would burn vulnerabilities in commonly used tech for hundreds of thousands of dollars.10 Preventing researchers from attending international competitions that made everyday products more secure was not only a loss for defenders; China explicitly gained more vulnerabilities for offensive use. One company, Beijing Chaitin, told a media outlet that it would prioritize submitting vulnerabilities to the MSS-run CNNVD database instead of participating in foreign competitions.11 China’s top cybersecurity policymakers and corporate executives share the “collect them all” attitude. The chief executive officer (CEO) of Qihoo360 remarked in the same year that software vulnerabilities are “important strategic resources” that “should stay in China.”12 Also in 2017, China launched a series of competitions to promote the development of technology that could automate the discovery, exploitation, and patching of software vulnerabilities.13 The same Qihoo360 CEO called the technology an “assassin’s mace”—or in Department of Defense (DOD) jargon, a strategic offset.14 Together, the policies prevented China’s strategic resource from being leaked overseas and invested money in technology to make finding vulnerabilities more efficient. Still, the government could only ever receive vulnerabilities that were voluntarily provided to it. 

China developed a system to collect software vulnerabilities that previously escaped its reach. Under the old system, the government did not collect vulnerabilities found by, or reported to, companies. Companies often find vulnerabilities in their own products. Many firms also receive external reports from researchers, sometimes in exchange for money. The 2021 RMSV—written by the CAC, the MPS, and the MIIT—expanded the government’s collection to include these sources. The new rules require companies doing business in China to report software vulnerabilities in their products or products they use to the MIIT within forty-eight hours of discovery. The regulations stop independent researchers from publishing information about vulnerabilities without coordinating a patch with the company, releasing proof-of-concept code that shows how to exploit a vulnerability, and hyping up the severity of a vulnerability. The requirement to coordinate the vulnerability disclosure with the business pushes the vulnerability into the MIIT’s new system, because the company must report it within two days. At some point after the Cybersecurity Threat and Vulnerability Information Sharing Platform receives the vulnerability, MIIT shares it with the MPS and CNCERT/CC. The 2021 regulations are aligned with People’s Republic of China (PRC) policymakers’ attitudes toward software vulnerabilities, which began coalescing in 2017.

Three earlier reports map the contours of China’s software vulnerability ecosystem. Combined, they demonstrate a decrease in software vulnerabilities being reported to foreign firms and the potential for these vulnerabilities to feed into offensive operations.

First, the Atlantic Council’s Dragon Tails report demonstrates that China’s software vulnerability research industry is a significant source of global vulnerability disclosures, and that US legislation prior to China’s disclosure requirements significantly decreased the reporting of vulnerabilities from specific foreign firms added to the US entities list, removing an important source of security research from the ecosystem.15

Second, Microsoft’s “Digital Defense Report 2022” showed a corresponding uptick in the number of zero-days deployed by PRC-based hacking groups. Microsoft explicitly attributes the increase as a “likely” result of the RMSV.16 Although less than a year’s worth of data do not make a trend, both reports gesture at the impact of the regulation in expected ways, based on China’s past behavior of weaponizing the software vulnerability disclosure pipeline.17

Third, Recorded Future published a series of reports in 2017 with evidence indicating that critical vulnerabilities reported to China’s National Information Security Vulnerability Database (CNNVD, run by the MSS) were being withheld from publication for use in offensive operations.18

This report adds to these findings. Specifically, we find that the 2021 RMSV allows the PRC government, and subsequently the Ministry of State Security, to access vulnerabilities previously uncaptured by past regulatory regimes and policies. In some cases, the regulations also facilitate access to some companies’ internal code repositories.

China’s software vulnerability disclosure ecosystem

The following graphic illustrates the relationships within China’s government-run software vulnerability ecosystem. This report does not cover actors on the black market or their impact on this system. 

A complete concept map of China’s government vulnerability databases. Source: Sleight of Hand, Cary and Del Rosso

Before the RMSV

Source: Atlantic Council, Sleight of Hand, Cary and Del Rosso

China National Vulnerability Database

The CNVD, run by CNCERT/CC, is meant to help defend computer networks in China and other nations, like the US National Vulnerability Database (NVD).19 CNCERT/CC maintains joint incident-response contracts with at least thirty-one other national computer emergency-response teams (CERTs), though which countries participate is unknown.20 CNVD users receive advanced warning of software vulnerabilities from the database.21 These vulnerabilities are collected from voluntary reporting by individuals or companies, or from a partnering vulnerability database.

The CNVD collects some of its data from three CNVD partner vulnerability databases, each with its own list of contributors: the Higher-Ed Vulnerability Database, Vulbox, and the Bu Tian Vulnerability Database.22

Shanghai Jiao Tong University operates the Higher-Ed Vulnerability Database.23 The database collects vulnerability reports on products used by institutions under the Ministry of Education. Researchers, professors, and students voluntarily submit vulnerabilities. The products have a variety of national origins and are not just education-related software. The university organization operating the vulnerability database also teaches defense-industry and government employees “secrets theft and anti-secrets theft” skills on another platform.24 SJTU has supported PLA hacking campaigns and is home to a center that conducts research on “APT attack and defense.”25

The other two databases feeding the CNVD rely on the private sector. Vulbox is a for-profit vulnerability disclosure marketplace.26 Like similar companies in the United States, it connects white-hat hackers to companies looking to secure their products. Companies pay researchers who find vulnerabilities and submit them through the platform. Vulbox shares these vulnerabilities, paid for by corporate incentive, with CNVD. Qi An Xin, a premier cybersecurity firm, maintains the other database.27 The Bu Tian Vulnerability Database is a forum for white-hat hackers to discuss software vulnerabilities. Users can share vulnerability reports, help other users recover from attacks, and join an annual competition.28 Both databases draw on unique sources of vulnerabilities: Vulbox from software security researchers cashing in on their work; Bu Tian from researchers discussing new findings, recovering from incidents, or discovering new vulnerabilities at the annual Bu Tian Cup software competition. 

The CNVD also receives voluntary vulnerability reporting from researchers and cybersecurity companies. Under the 2021 regulations, these companies must also report the vulnerabilities to the MIIT’s new database (see discussion below). CNCERT’s 2020 annual report graded the capabilities of its technical supporting organizations, ranking their capabilities to collect, analyze, and discover software vulnerabilities.29 The criteria used to produce the evaluation and rankings are not in the report, but they can be found online.30 This report reproduces that table below and flags with an asterisk the seventeen of the twenty-six companies that also support the MSS-run CNNVD. Under the RMSV, CNVD now receives software vulnerabilities from the MIIT’s new database.31

Source: CNCERT/CC 2020 Annual Report32

CNVD distributes vulnerability data to its technology collaboration organizations.33 These organizations are meant to integrate the data into cybersecurity services they provide to customers. Some organizations and companies may use this vulnerability distribution to support offensive operations. These organizations include the Beijing regional office of the MSS 13th Bureau (北京信息安全测评中心); Beijing TopSec, a known PLA contractor tied to the hack of Anthem Insurance; and other prominent government-servicing cybersecurity firms such as Qi An Xin, which runs its own Cybersecurity Military-Civil Fusion Innovation Center (网络空间安全军民融合创新中心).34 Even the Shanghai Jiao Tong University Center (上海交通大学网络信息中心) responsible for “advanced persistent threat attack and defense” research makes this list of integrators.

Separately, another group of thirty-eight CNVD user-support organizations help defenders integrate vulnerability data into their network defenses.35 The role of international partners in vulnerability sharing, collection, use, and defense is unclear. A list of CNCERT/CC’s international partners is not available, nor are the contracts that underpin their relationship. CNCERT/CC responded to a request for comment by pointing to the organization’s press-release website.

CNVD also maintains four databases for software vulnerabilities, but these appear to be maintained as a single database, do not require separate accounts to access, and are subdomains of the CNVD website.36 This structure merely sorts the CNVD system, rather than distributing vulnerabilities into unique repositories. 

Data from one of these four databases—the industrial control systems (ICS) vulnerability database—make clear how significantly the RMSV decreased public vulnerability disclosure. While a few hundred vulnerabilities were disclosed by the ICS database each year from 2018 to 2020, 2022 saw just ten vulnerabilities published in this system. In the same year, the US Cybersecurity and Infrastructure Security Agency (CISA) recorded 113 exploited ICS vulnerabilities.37

Source: Sleight of Hand, Cary and Del Rosso https://ics.cnvd.org.cn/bugEcharts

The near total drop-off in publicly reported ICS vulnerabilities was accompanied by a significant decrease in the total vulnerabilities disclosed by CNVD.

The data suggest there is a significant gap between actual and disclosed ICS vulnerabilities. If researchers find a number of ICS vulnerabilities similar to the number before the regulations, and they report them to the new MIIT database for ICS vulnerabilities, then the vulnerabilities would still show up in CNVD data, albeit with a delay. Although the MIIT database does not publish vulnerabilities publicly, the 2021 regulations require the MIIT to pass them along to the CNVD. If the MIIT had reported the vulnerabilities to the vendors, then CNVD would have published the vulnerability data when the company released the corresponding patch—but the data do not show this. Instead, the data suggest that companies, at least ICS companies, are not receiving vulnerability reports from the MIIT. 

China National Vulnerability Database of Information Security

China has, in the past, weaponized software vulnerabilities provided to its CNNVD, which is run by the MSS. Statistical analysis by Recorded Future in 2017 demonstrated that the intelligence service likely passed high-criticality vulnerabilities to its hacking teams and delayed their public disclosure.38 After its operations were burned when another entity publicly disclosed the vulnerability, the MSS would disclose them as well and move on. The reports by Recorded Future made clear the operational value the CNNVD offered to China’s offensive hacking teams. 

Unlike the CNVD’s, the number of vulnerabilities published by the CNNVD continues to trend upward. This is not because of the goodwill of the MSS. Each vulnerability reported by CNNVD can be tied to other public data, like a GitHub repository or a company’s website, meaning the database is not offering new information. In effect, CNNVD data just reflect what is publicly observable. In a humorous twist, monthly reports from the CNNVD used to compile the chart below stop in November 2017, the same month Recorded Future researchers published their report. After being caught, the MSS wiped its website of historical data and started fresh.

Source: CNNVD

Based on the requirements for firms supporting the MSS-run CNNVD, the exploitation of vulnerabilities provided to the database seems to continue today.39 This assessment is unsurprising, given the MSS 13th Bureau oversight of the database. The clarity of the requirements for members to produce vulnerabilities that can be used in hacking campaigns is surprising, however. 

CNNVD technical support units, as the private-sector member companies are called, must meet several requirements. Criteria for tier-one partnership include: employing at least twenty software vulnerability researchers; annually submitting at least thirty-five software vulnerabilities, at least five of which are critical according to the CVSS system; responding quickly to CNNVD requests for help or information; and providing the CNNVD early warning of at least ten critical vulnerabilities the company observes being exploited in the wild.40 Companies in tiers two and three each have corresponding, though less intensive, requirements. Data on CNNVD’s website do not attribute any vulnerabilities to the firms listed below, instead citing public information. This approach makes it impossible to determine how many vulnerabilities the technical support units supply to the MSS.

Requirements for companies applying to join the CNNVD’s Technical Support Units. 
Source: CNNVD Handbook, Translation by Dakota Cary
Annual requirements for companies accepted to the CNNVD’s Technical Support Units. 
Source: CNNVD Handbook, Translation by Dakota Cary

Based on these requirements, the technical support unit companies employ at least 1,190 researchers dedicated to software vulnerability discovery, and these researchers provide a minimum of 1,955 software vulnerabilities—at least 141 of which are of critical severity—to the MSS each year.
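To make the arithmetic behind these floor estimates explicit, the sketch below multiplies each tier’s per-company minimums by that tier’s membership count and sums the results. Only the tier-one minimums are taken from the handbook requirements quoted above; the tier-two and tier-three minimums and the per-tier membership split shown here are placeholders, so the output matches the report’s totals only if the actual handbook figures and the membership counts in Appendix A are substituted.

```python
# Sketch of the floor-estimate arithmetic for CNNVD technical support units.
# Tier-one minimums come from the handbook requirements quoted above; every
# other number below (tier-two/tier-three minimums and the membership split)
# is a PLACEHOLDER, not a figure from the CNNVD handbook. Substituting the
# real handbook minimums and the Appendix A membership counts reproduces the
# report's floors of 1,190 researchers, 1,955 vulnerabilities, and 141
# critical-severity vulnerabilities per year.

TIERS = {
    # per-company annual minimums; "critical" means CVSS critical severity (base score 9.0-10.0)
    "tier_1": {"companies": 10,  "researchers": 20, "vulns": 35, "critical": 5},  # handbook minimums, placeholder company count
    "tier_2": {"companies": 40,  "researchers": 10, "vulns": 20, "critical": 2},  # placeholder values
    "tier_3": {"companies": 101, "researchers": 5,  "vulns": 10, "critical": 1},  # placeholder values
}

def floor_totals(tiers: dict) -> dict:
    """Multiply each per-company minimum by that tier's membership and sum across tiers."""
    totals = {"researchers": 0, "vulns": 0, "critical": 0}
    for tier in tiers.values():
        for key in totals:
            totals[key] += tier["companies"] * tier[key]
    return totals

if __name__ == "__main__":
    totals = floor_totals(TIERS)
    print(f"Minimum researchers employed:              {totals['researchers']}")
    print(f"Minimum vulnerabilities supplied per year: {totals['vulns']}")
    print(f"Minimum critical vulnerabilities per year: {totals['critical']}")
```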

CNNVD’s team of technical support units grew from just fifteen companies in 2016 to 151 companies in 2023.41 A full list of each tier’s membership is available in Appendix A.

China’s New Vulnerability Management System under the RMSV: the NVDB

Source: Atlantic Council, Sleight of Hand, Cary and Del Rosso

The MIIT’s Cybersecurity Threat Intelligence Sharing Platform is operated by the MIIT’s Cybersecurity Management Bureau. The platform also receives oversight from four organizations: China Academy of Information and Communication Technology (CAICT), China ICS CERT (国家工业信息安全发展研究中心), China Software Testing Center (CSTC) (中国软件评测中心), and China Automotive Technology and Research Center (中国汽车技术研究中心).42 Notably, the China Software Testing Center, a center under the MIIT, works to advance military-civil fusion (likely by testing civilian hardware and software for security vulnerabilities before adoption by the military), is certified by the MSS 13th Bureau as tier 1 for Security Engineering and hosts a “special laboratory” (特种实验室) whose website cannot be accessed.43 CSTC publishes books on security testing for many kinds of systems, including intelligent manufacturing, smart cars, and ICS systems. CSTC’s website makes clear the center is home to immense software security talent. At the bottom of its homepage, the China Software Testing Center lists the Ministry of State Security as one of its many government customers.44 It is the only government agency whose name does not also appear in English.

The Cybersecurity Threat Intelligence Sharing Platform includes one organizing platform (the Cybersecurity Threat and Vulnerability Information Sharing Platform; abbreviated NVDB) with five downstream databases. These databases share the same authorization system, and some of the databases’ login pages redirect to the NVDB. While each has its own website, some share the same contact information. These findings suggest the five databases are not fully separate from one another, though this may change over time as the bureaucratic structure matures.

The five databases each cover a specific area of technology: general network product devices, industrial-control systems, “innovative information technology” (PRC-made products) used by the government, internet-connected vehicles, and mobile applications. This paper was able to obtain membership lists for four of the five databases. Forty-eight of the 103 companies identified on these membership lists contribute to the MSS-run CNNVD. The full list of companies can be found in Appendix B.

The NVDB does not publish software vulnerabilities, but it shares them with the MPS’ National Cyber and Information Security Information Notification Center and CNCERT/CC (the administrator for CNVD).45 The Ministry of Public Security conducts offensive hacking on targets within China, suggesting the shared vulnerabilities could be used for law enforcement and surveillance—incident reports could start law-enforcement actions, too.46

Access to China’s NVDB is limited to PRC nationals with a Chinese telephone number between 8 a.m. and 8 p.m. Beijing time, so information about the internal functions of the platform is incomplete. However, a “how-to” section of the NVDB website offers information about the platform’s functionality.47 The resource documents how users can report malicious links, Internet Protocol (IP) addresses, file hashes, and file incident reports—among many other capabilities. The following screenshot shows how users reporting a software vulnerability can indicate the type of vulnerability discovered, and whether the bug is already public.

Source: MIIT “The Information Sharing Platform of Cybersecurity Threat”48
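The NVDB’s actual submission schema is not public. The following sketch is purely illustrative: it models only the report types and fields that the “how-to” documentation describes (threat indicators such as malicious links, IP addresses, and file hashes, plus vulnerability reports with a type and an already-public flag). Every field name below is invented for illustration and does not come from the platform itself.

```python
# Hypothetical illustration only: a minimal model of the report types the NVDB
# "how-to" page describes. Field names are invented; the platform's real schema
# is not publicly documented.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class IndicatorType(Enum):
    MALICIOUS_LINK = "malicious_link"
    IP_ADDRESS = "ip_address"
    FILE_HASH = "file_hash"
    INCIDENT_REPORT = "incident_report"

@dataclass
class ThreatIndicatorReport:
    indicator_type: IndicatorType
    value: str                      # URL, IP address, or file hash

@dataclass
class VulnerabilityReport:
    affected_product: str
    vulnerability_type: str         # e.g., command injection, buffer overflow
    already_public: bool            # the form asks whether the bug is already public
    description: Optional[str] = None

if __name__ == "__main__":
    # Made-up example values for illustration only.
    report = VulnerabilityReport(
        affected_product="ExampleICS Controller 2.1",
        vulnerability_type="command injection",
        already_public=False,
    )
    print(report)
```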

Other components of the “how-to” page indicate that some vulnerability reports are available to some users. The permissions required to access such reports are unclear. The reports are issued using a custom naming convention, which does not appear to match any other public naming conventions. The reports are deemed sensitive enough that only the example in the first row is not blurred out.

Source: MIIT “The Information Sharing Platform of Cybersecurity Threat”49

This CSVD vulnerability tag in the first column appears in only one other government procurement document available online. In 2019, CAICT—one of the four organizations that oversees the NVDB—contracted two cybersecurity companies to produce a system now built into the MIIT’s Cybersecurity Threat Intelligence Sharing Platform.50 LegendSec (网神信息技术) received funding to build a system that would notify users of others’ reports of cybersecurity incidents. EverSec (恒安嘉新)—the company responsible for much of the cloud-computing capabilities at China’s National Cybersecurity Talent and Innovation Base—created a CSVD classification book to automatically validate and score the severity of software vulnerabilities. It is this CSVD scoring-and-tagging system that likely adorns software vulnerabilities submitted to the MIIT databases. The MIIT likely created this new naming convention simply because the use of any other convention (CVE, CNVD, or CNNVD) would require the involvement of an outside organization. A full technical-specifications document for the procurement of the CSVD system is available online.51

The NVDB’s downstream databases offer companies support services. The mobile-application database hosts a team that helps companies remediate software vulnerabilities. The tripwires that cause these MIIT groups to help a firm patch its vulnerable software are unclear.52 This support mechanism aligns well with PRC technology-development policies that aim to improve security. Other researchers have noted how poor IT security is for many PRC tech companies.53 These remediation services will likely raise the floor of performance.

The MIIT’s supporting role suggests that PRC tech companies are subject to far more in-depth oversight, however occasional, at the software-development level than previously known. This raises questions about how, when, and to what extent the state is involved in a company’s code base, and whether such oversight extends beyond mobile applications and PRC-based companies.54

The MIIT’s mission to create new, better technology standards incidentally leads it to fund the discovery of software vulnerabilities in foreign products. For example, the MIIT launched the Internet of Vehicles Identity Authentication and Safety Trust Pilot Project in 2021.55 CAICT oversees a committee for the project.56 The pilot project funded at least sixty-one research contracts for improving the security, safety, and trustworthiness of internet-connected vehicles. Qi An Xin, operator of a CNVD partner database mentioned above, hosts the Xingyu Internet of Vehicles Laboratory (星與车联网实验室).57 Researchers from the lab are likely funded by six of its contracts from the MIIT to improve internet connected vehicles’ security.58 Researchers shared many of their successful attack methodologies online—some posts include foreign brands, such as Tesla.59 Researchers submit new vulnerabilities to the MIIT’s Internet Connected Vehicles Vulnerability Database (CAVD)—a subset of the NVDB—as required by the RMSV. Xingyu Lab is even listed among the CAVD’s Vulnerability Analysis Experts Working Group in Appendix B. Some of the vulnerabilities are also reported back to the vehicle’s manufacturer—though these data are patchy and disclosure to the manufacturer is voluntary. Although industrial policy is not the focus of this report, it is clear that some of the MIIT’s other work results in software vulnerabilities reported back to its own databases.

Unfortunately, cooperation with the NVDB by foreign firms appears to be to their detriment. The NVDB’s ICS database lists companies that submit vulnerabilities for their own products.60 This list includes a handful of foreign firms complying with the PRC regulation. At least one foreign firm submitting to the database said it was not receiving reciprocal reports of vulnerabilities in its own products found by other researchers, while, at the same time, it saw a significant decrease in vulnerabilities reported from China.61 In effect, this company lost visibility into vulnerability research published in China, and was simultaneously submitting its own internally discovered bugs to the MIIT without any benefit to the firm besides RMSV compliance. This anecdotal evidence supports the analysis in the CNVD section above regarding missing ICS vulnerabilities. It is unclear whether the firm is proactively submitting its vulnerabilities to other governments.62

Few good options 

In contrast to China’s vulnerability collection system, the United States’ disclosure system seems less organized, and its voluntary nature makes the government’s aperture smaller. 

In the United States, software vulnerabilities are issued a Common Vulnerabilities and Exposure (CVE) ID by a MITRE-approved CVE Numbering Authority.63 When one of the nearly three hundred CVE Numbering Authorities—ranging from cybersecurity firms to device manufacturers—issues a CVE, the vulnerability is automatically connected to the National Vulnerability Database run by the National Institute of Standards and Technology (NIST).64 Software vulnerabilities can, thus, be reported to any number of companies, at any time the researcher chooses, for compensation or for free, before the CVE Numbering Authority verifies the vulnerability and issues a CVE, thus making the vulnerability public. Like China’s MIIT, CISA also offers services to support vulnerability patching and mitigation.65 Most significantly, there are no means to compel companies or researchers to submit vulnerabilities to CVE Numbering Authorities—regardless of whether they participate in China’s system. 
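By way of comparison, the output of the US pipeline is publicly queryable. The sketch below retrieves a single record from NIST’s National Vulnerability Database through its public API, using the Log4j vulnerability as an example; the endpoint and field names follow the NVD API 2.0 schema as documented at the time of writing and should be checked against the current documentation before being relied upon.

```python
# Query NIST's National Vulnerability Database (NVD) for a single CVE record.
# Endpoint and field names follow the NVD API 2.0 documentation; no API key is
# required for low-volume queries, though NIST rate-limits unauthenticated use.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Return the raw NVD record for one CVE ID."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()["vulnerabilities"][0]["cve"]

if __name__ == "__main__":
    cve = fetch_cve("CVE-2021-44228")  # Log4Shell
    description = cve["descriptions"][0]["value"]
    # CVSS v3.1 metrics may be absent for some records.
    metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
    base_score = metrics[0]["cvssData"]["baseScore"] if metrics else None
    print(cve["id"], base_score)
    print(description[:200])
```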

Policymakers’ instinct may be to copy some parts of China’s system—say, requiring that firms that provide vulnerability information to China’s government also provide it to CISA. These types of reforms are unnecessary and, ultimately, useless. Nothing is gained if companies are required to submit vulnerabilities to CISA when they submit to the MIIT. The US government does not have the remit, nor capability, to defend private computer networks. The short amount of time between vulnerabilities being reported and patched—just nine days in 2018–2019 according to Mandiant (now part of Google Cloud)—emphasizes the limited value of collection for defensive purposes.66 Without the ability or time to operationalize knowledge about vulnerabilities reported to the PRC for defensive purposes, mandating they be reported to CISA has no clear value. 

The lack of value in policy changes to the US system for defensive purposes raises questions about China’s own motives. 

The time between vulnerability discovery and patching in China is unknown. It may well be longer than the nine days suggested by Mandiant data on US-issued CVEs (which includes US and many foreign products), but this discounts the considerable talent employed at China’s leading technology companies. Companies may not be prioritizing vulnerability remediation at the pace policymakers prefer, but—in a tech sector with a twelve-hour workday, a six-days-per-week work culture, and significant state emphasis on security—it seems unlikely. Surely the Chinese researchers who dominated Pwn2Own and other international vulnerability competitions before they were blocked from leaving the country are still quite good, and are able to secure companies’ products in China.

However, the MIIT’s Cybersecurity Threat and Vulnerability Information Sharing Platform, which operates the NVDB, likely improves China’s collective cyber-defense capabilities in a different way. Rather than instigating firms to patch known vulnerabilities, the NVDB likely improves cybersecurity companies’ ability to detect cyberattacks by increasing visibility of known vulnerabilities. If cybersecurity firms can access all vulnerability reports submitted into the database—which this report cannot confirm—then the result would be improved cybersecurity. Companies could take these reports and integrate them into their operations, creating new detection rules that make exploiting the vulnerabilities harder even before a patch is available. 
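As a purely conceptual illustration of this pre-patch defensive value (not a description of how any NVDB participant actually operates), the sketch below derives a crude detection signature from a hypothetical, not-yet-patched vulnerability report and matches it against request traffic. All identifiers and patterns are invented; real products use far richer rule languages and telemetry.

```python
# Conceptual illustration: turning an unpatched-vulnerability report into a
# crude detection signature. Every report field, pattern, and request below is
# hypothetical.
import re
from dataclasses import dataclass

@dataclass
class VulnReport:
    vuln_id: str           # internal tracking ID (no public CVE yet)
    product: str
    exploit_pattern: str   # regex fragment observed in proof-of-concept traffic

def build_signatures(reports: list[VulnReport]) -> dict[str, re.Pattern]:
    """Compile one case-insensitive regex per pre-patch report."""
    return {r.vuln_id: re.compile(r.exploit_pattern, re.IGNORECASE) for r in reports}

def flag_request(signatures: dict[str, re.Pattern], http_request: str) -> list[str]:
    """Return the IDs of any reports whose pattern appears in the request."""
    return [vid for vid, rx in signatures.items() if rx.search(http_request)]

if __name__ == "__main__":
    reports = [VulnReport("INTERNAL-0001", "ExampleWebApp", r"\.\./\.\./etc/passwd")]
    sigs = build_signatures(reports)
    print(flag_request(sigs, "GET /download?file=../../etc/passwd HTTP/1.1"))
```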

This system would offer significant defensive advantages over the US ecosystem. Currently, cybersecurity companies become aware of software vulnerabilities when they are given a CVE by a CVE Numbering Authority, or when the company observes the zero-day vulnerability being exploited on its customer’s systems. This process creates a dynamic in which cybersecurity companies can only defend against what they have observed as a company. Firms with more customers have greater visibility and, if efficient, detect more vulnerabilities being exploited before they are issued CVEs. Mandiant was able to produce its report on the timeline of vulnerability patching precisely because of its visibility into attacks against its customers. 

What to do? Creating a database like the one operated by the MIIT could erode cybersecurity companies’ competitive advantages over one another. Hard-fought market share, keen threat intelligence, competitive pricing, and satisfied customers allow companies to compete in the market. Forcing the aggregation of these companies’ data on software vulnerabilities being exploited and intrusions against customers (as the MIIT’s NVDB collects) would significantly upend the cybersecurity market. Besides upsetting the market, creating a shared database of vulnerabilities and intrusions against customers would also create a significant target for foreign intelligence services. The data it held would be valuable counterintelligence information—letting foreign governments know which operations are being tracked by defenders. 

A better path would be for policymakers to press CVE Numbering Authorities to verify vulnerabilities and assign CVEs more quickly. This would push reported vulnerabilities into public view and allow all firms to update their defenses without compromising the privacy of their customers or creating a target for intelligence collection. A public leaderboard of all CVE Numbering Authorities, showing each organization’s average time to validate and name vulnerabilities, could spur progress. Although a negative externality of this progress may be an increase in vulnerabilities for attackers to exploit (data do show that many hackers quickly target vulnerabilities after they are published or after a patch has been released), the spike might be short lived.67 Companies would need to respond to the pressure of public disclosure by ramping up efforts to patch software—a positive outcome for everyone.
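One rough way such a leaderboard could be computed is sketched below. It assumes a local clone of the public CVE List (for example, the CVE Project’s cvelistV5 repository) and reads the CVE Record Format 5.x metadata fields dateReserved, datePublished, and assignerShortName; because the reservation date is only a proxy for when a vulnerability was actually reported to the numbering authority, the results should be treated as indicative rather than exact.

```python
# Sketch: average days between CVE reservation and publication, per CVE
# Numbering Authority. Assumes a local clone of the public CVE List in
# CVE Record Format 5.x (e.g., https://github.com/CVEProject/cvelistV5).
# dateReserved is only a proxy for when the vulnerability reached the CNA.
import json
import pathlib
from collections import defaultdict
from datetime import datetime
from statistics import mean

def parse(ts: str) -> datetime:
    # Truncate to the seconds field so naive and zone-suffixed timestamps compare cleanly.
    return datetime.fromisoformat(ts[:19])

def leaderboard(cve_repo: str) -> list[tuple[str, float, int]]:
    lags = defaultdict(list)  # assigner short name -> list of days-to-publish
    for path in pathlib.Path(cve_repo, "cves").rglob("CVE-*.json"):
        meta = json.loads(path.read_text(encoding="utf-8")).get("cveMetadata", {})
        reserved, published = meta.get("dateReserved"), meta.get("datePublished")
        if reserved and published:
            days = (parse(published) - parse(reserved)).total_seconds() / 86400
            lags[meta.get("assignerShortName", "unknown")].append(days)
    # Keep assigners with enough records for a meaningful average; fastest first.
    rows = [(cna, mean(v), len(v)) for cna, v in lags.items() if len(v) >= 50]
    return sorted(rows, key=lambda r: r[1])

if __name__ == "__main__":
    for cna, avg_days, count in leaderboard("cvelistV5")[:20]:
        print(f"{cna:<25} {avg_days:8.1f} days on average ({count} records)")
```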

Conclusion

After changes to policy in 2017, cybersecurity researchers from China were prohibited from traveling abroad to participate in software security competitions. The new rules aligned with China’s thought leaders at the time. The prevailing idea that such vulnerabilities are a “national resource” remains unchanged. China then took steps to set up its own software vulnerability competitions, such as Tianfu Cup. The competition has attracted attention for the number and quality of vulnerabilities furnished by researchers each year.68 Also in 2017, China began hosting competitions to spur progress on technologies to automate the discovery, patching, and exploitation of software vulnerabilities.69 This system, however, did not allow the state to collect all vulnerabilities discovered in China. The security services were still only receiving voluntary reports from companies electing to participate in their databases. CNVD seemingly attempted to improve collection by adding its partner databases from higher education and the private sector, but this was only a half measure. 

China’s system for collecting software vulnerabilities is now all encompassing. The PRC system has evolved from incentivizing voluntary disclosure to security services and encouraging disclosure to private-sector firms into mandating vulnerability disclosure to the state.

The 2021 RMSV increased the aperture of China’s vulnerability collection. Companies doing business in China are required to submit notice of a software vulnerability within forty-eight hours of being notified of it. Our report shows at least some foreign firms are complying with the regulations—though our limited visibility likely deflates the true number of companies adhering to the rules. Independent researchers, while not expressly required to disclose vulnerabilities to the MIIT, are prohibited from publishing information about vulnerabilities except to the company that owns the product—the same companies required to report the vulnerability to the government. The result is near total collection of software vulnerabilities discovered in China.

Researchers and organizations are now subject to dual reporting structures—mandatory disclosure to NVDB and continued, voluntary disclosure to CNVD or, in the case of technology support units, submission thresholds to meet membership requirements for the MSS CNNVD. A graphic from the “about us” section of the NVDB encourages reporting to the old CNVD and CNNVD databases concurrently.70 Indeed, many of the CNNVD technology support units are also listed on the MIIT databases’ respective membership lists. The parallel existence of these databases and competition between legal requirements to submit into the new system and incentives to voluntary disclosure into the old systems suggests some squabbling over turf between bureaucracies. For the companies involved in multiple databases, there is a clear incentive to act as an intermediary across bureaucratic boundaries.

This report demonstrates that the mandated vulnerability and threat-intelligence sharing from the MIIT’s new database to the CNCERT/CC’s CNVD facilitates access to reporting by a regional MSS office, a known PLA contractor, and a university research center that has ties to PLA hacking campaigns and conducts offensive and defensive research. These organizations with ties to offensive hacking activities would be negligent if they did not utilize their access to CNVD vulnerability reports to equip their operators. The observable increase in the number of zero-days used by PRC hacking teams, as indicated by the 2022 Microsoft “Digital Defense Report,” suggests that these organizations’ access is resulting in vulnerabilities being used by offensive teams.

Other countries’ security services often rely on their own tools to discover, purchase, and observe software vulnerabilities for offense and defense. China’s security services do all of those things, too, but the new pipeline established by the RMSV provides it a clear advantage in accessing software vulnerabilities discovered by the private sector. 

Key recommendations

  1. Policymakers should seek to decrease the time between software vulnerability report submissions to CVE Numbering Authorities and the time it takes for them to be validated, named, and published. Efforts to recreate China’s system in the United States would not succeed in any meaningful sense, and would likely be met with opposition by industry. Instead, improving industry performance within the current system is the best approach. Many of the approximately 300 CVE Numbering Authorities are companies with their own products. Vulnerabilities in products from companies that are not numbering authorities are slower to be validated and published. This creates a bottleneck of unverified and unvalidated vulnerabilities.
  2. Policymakers should seek to improve US government vulnerability intelligence.71 Vulnerability intelligence uses the precursors to vulnerability discovery—namely, the people, companies and organizations, their technical competencies and niches, the tools and kit they purchase, and any vulnerabilities they make public—to estimate who might be working to discover vulnerabilities and in which systems. Crucially, this research would create insights into the kinds of vulnerabilities that foreign researchers are discovering, but not publishing. Data found in this report, such as lists of companies in the appendices, could be used to enable further research and collection on this topic.

About the authors

Dakota Cary is a nonresident fellow at the Atlantic Council’s Global China Hub and a consultant at Krebs Stamos Group. He focuses on China’s efforts to develop its hacking capabilities.

Kristin Del Rosso works at Sophos as a product manager focusing on Incident Response, Threat Intelligence, and the SecOps ecosystem. She enjoys threat hunting and learning about new forms of security research.

Editors: Chris Rohlf, Kitsch Liao, Colleen Cottle, Winnona DeSombre, Jonathan Reiter, Stewart Scott, Devin Thorne, and Ian Roos.

Appendix A

Appendix B

Technology Member List72
Product Member List73
Technology Support Units74
Vulnerability Analysis Experts Working Group75
Technology Support Units76

1    Dakota Cary, “China’s New Software Policy Weaponizes Cybersecurity Research,” The Hill, July 22, 2021, https://thehill.com/opinion/cybersecurity/564318-chinas-new-software-policy-weaponizes-cybersecurity-research; Brad D. Williams, “China’s New Data Security Law Will Provide It Early Notice of Exploitable Zero Days,” Breaking Defense, September 1, 2021, https://breakingdefense.com/2021/09/chinas-new-data-security-law-will-provide-it-early-notice-of-exploitable-zero-days.
2    It seems that when researchers discover vulnerabilities in other companies’ codebases, they are also required to share that information with the MIIT. Jonathan Greig, “Chinese Regulators Suspend Alibaba Cloud over Failure to Report Log4j Vulnerability,” ZDNet, December 22, 2021, https://www.zdnet.com/article/log4j-chinese-regulators-suspend-alibaba-partnership-over-failure-to-report-vulnerability.
3    宋海新 and 张功俐. “敲黑板 !《网络安全漏洞管理规定》逐条解读-中伦律师事务所.” archive.ph, February 8, 2023. https://archive.ph/xzbZq.
4    Brad Smith, “The Need for Urgent Collective Action to Keep People Safe Online: Lessons from Last Week’s Cyberattack,” Microsoft on the Issues, May 14, 2017, https://blogs.microsoft.com/on-the-issues/2017/05/14/need-urgent-collective-action-keep-people-safe-online-lessons-last-weeks-cyberattack.
5    Ellen Nakashima, “Security Firm Finds Link between China and Anthem Hack,” Washington Post, February 27, 2015, https://www.washingtonpost.com/news/the-switch/wp/2015/02/27/security-firm-finds-link-between-china-and-anthem-hack; Dakota Cary, “Academics, AI, and APTs: How Six Advanced Persistent Threat-Connected Chinese Universities are Advancing AI Research,” Center for Security and Emerging Technology, March 2021,  https://cset.georgetown.edu/publication/academics-ai-and-apts.
6    China’s Cyber Capabilities: Warfare, Espionage, and Implications for the United States, testimony before the U.S.-China Economic and Security Review Commission hearing. Statement by Adam Kozy, CEO and founder, SinaCyber, former FBI and CrowdStrike, 2022, https://www.uscc.gov/sites/default/files/2022-02/Adam_Kozy_Testimony.pdf.
7    Xinhua. “Full Text: Jointly Build a Community with a Shared Future in Cyberspace” archive.ph, May 23, 2023. https://archive.ph/AqhdW.
8    Thanks to Chris Rohlf for this metaphor.
9    Kathleen Metrick, Jared Semrau, and Shambavi Sadayappan, “Think Fast: Time Between Disclosure, Patch Release and Vulnerability Exploitation—Intelligence for Vulnerability Management, Part Two,” Mandiant, April 13, 2020, https://www.mandiant.com/resources/blog/time-between-disclosure-patch-release-and-vulnerability-exploitation.
10    Lucian Armasu, “Pwn2Own 2018: Focus Changes to Kernel Exploits as Browsers Get Harder to Hack,” Tom’s Hardware, March 16, 2018, https://www.tomshardware.com/news/pwn2own-2018-kernel-exploits-focus,36679.html; Violet Blue, “When China Hoards Its Hackers Everyone Loses,” Engadget, March 16, 2018, https://www.engadget.com/2018-03-16-chinese-hackers-pwn2own-no-go.html.
11    Yingzhi Yang, “China Discourages Its Hackers from Foreign Competitions so They Don’t Help Others,” South China Morning Post, March 21, 2018, https://www.scmp.com/tech/article/2138114/china-discourages-its-cybersecurity-experts-global-hacking-competitions.
12    Karen Chiu, “Chinese Hackers Break into Chrome, Microsoft Edge, and Safari in Competition,” South China Morning Post, November 19, 2019, https://www.scmp.com/abacus/tech/article/3038326/chinese-hackers-break-chrome-microsoft-edge-and-safari-competition.
13    Dakota Cary, “Robot Hacking Games,” Center for Security and Emerging Technology, September 2021, https://cset.georgetown.edu/publication/robot-hacking-games.
14    网络传播杂志, “360: 自觉担当责任维护网络安全,” 中共中央网络安全和信息化委员会办公室, November 6, 2018, https://perma.cc/ENA2-WZ3F
15    Stewart Scott, et al., Dragon Tails: Preserving International Cybersecurity Research, Atlantic Council, 2022, https://www.atlanticcouncil.org/wp-content/uploads/2022/09/AC_DRAGON_TAILS_LAY4_WEB3.pdf.
16    “Microsoft Digital Defense Report 2022,” Microsoft, 2022, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5bUvv.
17    Cary, “China’s New Software Policy Weaponizes Cybersecurity Research”; Williams, “China’s New Data Security Law Will Provide It Early Notice of Exploitable Zero Days.”
18    Priscilla Moriuchi and Dr. Bill Ladd, “China’s Ministry of State Security Likely Influences National Network Vulnerability Publications,” Recorded Future, November 16, 2017, https://www.recordedfuture.com/chinese-mss-vulnerability-influence; Dr. Bill Ladd, “The Dragon Is Winning: U.S. Lags Behind Chinese Vulnerability Reporting,” Recorded Future, October 19, 2017, https://www.recordedfuture.com/chinese-vulnerability-reporting; Priscilla Moriuchi, “China Altered Public Data to Conceal MSS Influence,” Recorded Future, March 9, 2018, https://www.recordedfuture.com/chinese-vulnerability-data-altered
19    “《国家信息安全漏洞共享平台章程》全文 – 安全内参 | 决策者的网络安全知识库.” https://web.archive.org/web/20220714014641/https:/www.secrss.com/articles/7474.
20    “Full Text: Jointly Build a Community with a Shared Future in Cyberspace,” Xinhua, November 7, 2022, https://www.chinadaily.com.cn/a/202211/07/WS63687246a3105ca1f2274748.html.
21    “《国家信息安全漏洞共享平台章程》全文 – 安全内参 | 决策者的网络安全知识库.” https://web.archive.org/web/20220714014641/https:/www.secrss.com/articles/7474.
22    国家计算机网络应急技术处理协调中心. “2020 Annual Report,” page 22, September 27, 2022. https://web.archive.org/web/20220927052510/https:/www.cert.org.cn/publish/main/upload/File/2020%20Annual%20Report.pdf.
23    “关于教育漏洞报告平台” 教育漏洞报告平台 In Archive.vn, 2023. https://archive.vn/f4BHI.
24    “From Coercion to Invasion: The Theory and Execution of China’s Cyber Activity in Cross-Strait Relations,” Recorded Future, November 23, 2022, https://www.recordedfuture.com/from-coercion-to-invasion-the-theory-and-execution-of-china-cyber-activity.
25    Cary, “Academics, AI, and APTs.”
26    “项目大厅 – 漏洞盒子.” https://archive.ph/rCpey.
27    Jamie Tarabay and Sarah Zheng, “Chinese Firm That Accused NSA of Hacking Has Global Ambitions,” Bloomberg, May 31, 2022, https://www.bloomberg.com/news/articles/2022-05-31/chinese-firm-that-accused-nsa-of-hacking-has-global-ambitions#xj4y7vzkg.
28    archive.ph. “奇安信创新服务及研究团队,” May 31, 2023. https://archive.ph/ARSxa.
29    国家计算机网络应急技术处理协调中心. “2020 Annual Report,” page 222, September 27, 2022. https://web.archive.org/web/20220927052510/https:/www.cert.org.cn/publish/main/upload/File/2020%20Annual%20Report.pdf.
30    “国家信息安全漏洞共享平台,” archive.ph. May 28, 2023. https://archive.ph/3xCQ4.
31    工信部联网安. “工业和信息化部国家互联网信息办公室公安部关于印发网络产品安全漏洞管理规定的通知-中共中央网络安全和信息化委员会办公室.” http://www.cac.gov.cn/2021-07/13/c_1627761607640342.htm; “Provisions on the Management of Network Product Security Vulnerabilities,” China Law Translate, July 14, 2021, https://www.chinalawtranslate.com/en/product-security-vulnerabilites.
32    “关于国家计算机网络应急技术 处理协调中心,” September 27, 2022. https://web.archive.org/web/20220927052510/https:/www.cert.org.cn/publish/main/upload/File/2020%20Annual%20Report.pdf.
33    “国家信息安全漏洞共享平台章程” 决策者网络安全知识库. Article 2, Section 8; Article 3, Sections 10, 11, 13 https://web.archive.org/web/20220714014641/https:/www.secrss.com/articles/7474.
34    Nigel Inkster attributes 中国信息安全测评中心 to the Ministry of State Security (Jon R. Lindsay, Tai Ming Cheung, and Derek S. Reveron, China and Cybersecurity: Espionage, Strategy, and Politics in the Digital Domain, Oxford, UK: Oxford University Press, 2015); other analysts have tied its provincial bureaus to APTs (“China’s Cybersecurity Law Gives the Ministry of State Security Unprecedented New Powers Over Foreign Technology,” Recorded Future, August 31, 2017, https://www.recordedfuture.com/china-cybersecurity-law; “奇安信创新服务及研究团队,” May 31, 2023, www.archive.ph/ARSxa).
35    国家计算机网络应急技术处理协调中心. “2020 Annual Report,” page 220-221, September 27, 2022. https://web.archive.org/web/20220927052510/https:/www.cert.org.cn/publish/main/upload/File/2020%20Annual%20Report.pdf.
36    China National Vulnerability Database. “电信行业漏洞.” https://telecom.cnvd.org.cn/; “国家区块链漏洞库” https://bc.cnvd.org.cn/; “移动互联网行业漏洞.” https://mi.cnvd.org.cn/.
37    “Known Exploited Vulnerabilities Catalog,” Cybersecurity and Infrastructure Security Agency, last visited July 19, 2023, https://www.cisa.gov/known-exploited-vulnerabilities-catalog.
38    Moriuchi and Ladd, “China’s Ministry of State Security Likely Influences National Network Vulnerability Publications.”
39    中国信息安全测评中心. “国家信息安全漏洞库(CNNVD)技术支撑单位计划指南,” January 24, 2023. https://web.archive.org/web/20230124204541/https:/www.cnnvd.org.cn/static/download/CNNVD_technical_support_unit_plan_guide.pdf.
40    “国家标准《信息安全技术 网络安全漏洞分类分级规范》”, https://www.tc260.org.cn/file/2018-12-26/0a12e974-9a15-4c64-b62a-5eacfa93c53b.docx.
41    “关于CNNVD新增‘技术支撑单位’的公告 – 安全牛.” 国家信息安全漏洞库, https://archive.ph/O9rkW#selection-363.0-363.20;“国家信息安全漏洞库 – 技术支撑单位.”  国家信息安全漏洞库, https://archive.is/Ql46z.
42    中国政府网 “工业和信息化部网络安全威胁和漏洞信息共享平台正式上线运行_部门政务_中国政府网,” May 10, 2023. https://archive.ph/NS5xf.
43    “中心简介-评测中心.” https://archive.ph/Ckq32; “国家信息安全测评信息安全服务资质证书(安全工程类一级)-评测中心.” https://archive.ph/h2TvM ; “基础能力实验室-评测中心.” https://web.archive.org/web/20230528191032/https:/www.cstc.org.cn/sdsys1/jcnlsys.htm. The website’s links to the “special lab” do not work and there are few mentions of the lab elsewhere on the internet.
45    “Provisions on the Management of Network Product Security Vulnerabilities.” 工业和信息化部 http://www.cac.gov.cn/2021-07/13/c_1627761607640342.htm
46    China’s Cyber Capabilities: Warfare, Espionage, and Implications for the United States, testimony by John Chen, lead analyst, Center for Intelligence Research and Analysis, Exovera, before the U.S.-China Economic and Security Review Commission hearing (2022). https://www.uscc.gov/sites/default/files/2022-02/John_Chen_Testimony.pdf.
47    工业和信息化. “威胁报送.” https://archive.ph/f0mvi.
48    工业和信息化. “威胁报送.” https://archive.ph/f0mvi.
49    工业和信息化. “威胁报送.” https://archive.ph/f0mvi.
50    archive.ph. “中国信息通信研究院通信网络安全管理系统网络安全应急能力子平台原型研究与设计项目中标公告,” February 8, 2023. https://archive.ph/vgFdy.
52    If reported vulnerabilities are not addressed in a timely fashion, the supporting functions of the MIIT Mobile Application Database will work to support the private company. It is unclear what counts as “too long.”
53    Devin Thorne, “China’s Vulnerability Disclosure Regulations Put State Security First,” Australian Strategic Policy Institute, August 31, 2021, https://www.aspistrategist.org.au/chinas-vulnerability-disclosure-regulations-put-state-security-first.
54    In fairness to the MIIT, China’s mobile application stores are notoriously laden with bad software.
55    Perma | 车联网身份认证和安全信任试点工作启动会召开 齐向东详解车联网安全关键因素_财经_中国网
56    “车联网身份认证和安全信任试点” 车联网身份认证和安全信任工作专家委员会, December 6, 2022. https://web.archive.org/web/20221206200600/http:/www.caict.ac.cn/xwdt/ynxw/202109/P020210924326795055693.pdf
57    “车联网身份认证和安全信任试点工作启动会召开 齐向东详解车联网安全关键因素.” 中国网财经. https://perma.cc/57HE-E8ZE.
58    奇安信. “2021国家网安周:刘勇谈‘四轮驱动’构建车联网安全体系-奇安信.”  https://perma.cc/E4NH-BHQ6.
59    “车联网身份认证和安全信任试点工作启动会召开 齐向东详解车联网安全关键因素.” 中国网财经. https://perma.cc/57HE-E8ZE.
“Twitter Archive,” https://twitter.com/Kevin2600/status/1442860693573668866, https://perma.cc/N5FP-HEJ9; “车联网安全之侠盗猎车 : 玩转固定码 (上) – FreeBuf网络安全行业门户.” https://perma.cc/6ZC3-DZ8K; “星舆实验室 — ADAS自动驾驶欺骗- FreeBuf网络安全行业门户.” https://perma.cc/4MU8-W8FE.
60    国家工业信息安全发展研究中心. “通知|国家工业信息安全漏洞库(CICSVD)2022年度成员单位名单公示.” https://archive.ph/Yr8HX#selection-105.14-105.50.
61    Proprietary insight from authors’ work.
62    Microsoft’s MAPP does this. Partners from many countries receive advance warning of significant vulnerabilities to be patched by Microsoft so the patches can be rolled out quickly. “Microsoft Active Protections Program.” https://www.microsoft.com/en-us/msrc/mapp
63    “CVE List Home,” CVE, last visited July 19, 2023, https://cve.mitre.org/cve.
64    “List Of Partners,” CVE, last visited July 19, 2023, https://www.cve.org/PartnerInformation/ListofPartners.
65    “Coordinated Vulnerability Disclosure Process,” Cybersecurity and Infrastructure Security Agency, last visited July 19, 2023, https://www.cisa.gov/coordinated-vulnerability-disclosure-process.
66    Metrick, et al., “Think Fast.”
67    Ibid; “Microsoft Digital Defense Report 2022,” 39.
68    J. D. Work, “China Flaunts Its Offensive Cyber Power,” War on the Rocks, October 22, 2021, https://warontherocks.com/2021/10/china-flaunts-its-offensive-cyber-power.
69    Cary, “Robot Hacking Games.”
70    “一图读懂《网络产品安全漏洞管理规定》.” 工业和信息化部网络安全威胁和漏洞信息共享平台  https://archive.ph/QvFgF.
71    Thanks to Chris Rohlf for this recommendation.
72    国家工业信息安全发展研究中心. “通知|国家工业信息安全漏洞库(CICSVD)2022年度成员单位名单公示.” https://archive.ph/Yr8HX#selection-105.14-105.50.
73    国家工业信息安全发展研究中心. “通知|国家工业信息安全漏洞库(CICSVD)2022年度成员单位名单公示.” https://archive.ph/Yr8HX#selection-105.14-105.50.
74    国家工业信息安全发展研究中心. “关注|‘信创漏洞库’第二批技术支撑单位评审合格单位名单公示.” archive.ph, January 24, 2023. https://archive.ph/51Jn4.
75    车联网产品安全漏洞专业库. “关于新增‘车联网漏洞分析专家工作组’专家成员的公示.” https://archive.ph/ncHPu#selection-749.1-749.26.
76    工业和信息化部软件与集成电路促进中心. “工信部移动互联网APP产品安全漏洞库技术支撑单位新增七家.” https://archive.ph/ixFwo.

The post Sleight of hand: How China weaponizes software vulnerabilities appeared first on Atlantic Council.

Kepe featured in National Defence Magazine on NATO’s Strategic Concept and NATO’s response to cyber threats https://www.atlanticcouncil.org/insight-impact/in-the-news/kepe-featured-in-national-defence-magazine-on-natos-strategic-concept-and-natos-response-to-cyber-threats/ Thu, 31 Aug 2023 16:36:00 +0000 https://www.atlanticcouncil.org/?p=696130 On August 31, Transatlantic Security Initiative nonresident senior fellow Marta Kepe was interviewed by National Defence Magazine on NATO’s Strategic Concept and NATO’s response to cyber threats.

The post Kepe featured in National Defence Magazine on NATO’s Strategic Concept and NATO’s response to cyber threats appeared first on Atlantic Council.

On August 31, Transatlantic Security Initiative nonresident senior fellow Marta Kepe was interviewed by National Defence Magazine on NATO’s Strategic Concept and NATO’s response to cyber threats.

The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

The post Kepe featured in National Defence Magazine on NATO’s Strategic Concept and NATO’s response to cyber threats appeared first on Atlantic Council.

McCarthy tapped to lead the US Head of Delegation and Lead Negotiator https://www.atlanticcouncil.org/insight-impact/in-the-news/mccarthy-tapped-to-lead-the-us-head-of-delegation-and-lead-negotiator/ Mon, 21 Aug 2023 20:19:00 +0000 https://www.atlanticcouncil.org/?p=696310 On August 21, it was announced that Transatlantic Security Initiative nonresident senior fellow Deborah McCarthy was designated as the US Head of Delegation and Lead Negotiator for the sixth negotiating session of the Ad Hoc Committee (AHC) to elaborate a UN cybercrime convention.  

The post McCarthy tapped to lead the US Head of Delegation and Lead Negotiator appeared first on Atlantic Council.

On August 21, it was announced that Transatlantic Security Initiative nonresident senior fellow Deborah McCarthy was designated as the US Head of Delegation and Lead Negotiator for the sixth negotiating session of the Ad Hoc Committee (AHC) to elaborate a UN cybercrime convention.  

The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

The post McCarthy tapped to lead the US Head of Delegation and Lead Negotiator appeared first on Atlantic Council.

The 5×5—Cloud risks and critical infrastructure https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-cloud-risks-and-critical-infrastructure/ Mon, 21 Aug 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=671064 Experts share their perspectives on the challenges facing cloud infrastructure and how policy can encourage better security and risk governance across this critical sector.

The post The 5×5—Cloud risks and critical infrastructure appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

In June 2023, the US Department of State discovered Chinese cyber espionage activity relying on a fundamental vulnerability in Microsoft’s cloud technology that enabled hackers to forge identity authentication tokens. The vulnerability enabled the compromise of sensitive email (and other service) accounts, including that of Secretary of Commerce Gina Raimondo. 

This incident illustrates some of the risks associated with cloud computing’s many benefits. While much of the discussion around cloud computing centers on these benefits, the risks inherent to this infrastructure bear consideration as well. As in other critical infrastructure sectors—such as energy, water, financial services, and the defense industrial base—disruptions to major cloud services could have material effects on economic and national security. The cloud’s centrality to critical infrastructure is the basis of the Atlantic Council’s recent report, “Critical Infrastructure and the Cloud: Policy for Emerging Risk,” which seeks to raise awareness of the seriousness of potential cloud disruptions and increase efforts toward bolstering cloud security and resilience across critical infrastructure.

To examine these risks, we brought together a group to share their perspectives on the challenges facing cloud infrastructure and how policy can encourage better security and risk governance across this critical sector. 

#1 Are the challenges facing cloud infrastructure security well-defined and understood by providers? What’s the biggest question you see as unresolved in cloud security? 

Maia Hamin, associate director, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“The hyperscale Infrastructure-as-a-Service providers—AWS, Microsoft Azure, Google Cloud—understand many questions about the security of the cloud; they have enough reason to. Then again, many hard problems remain hard—the recent Microsoft compromise is a reminder that identity and access management is crucial to the whole premise of cloud security, and something that even well-resourced providers get wrong. The biggest unresolved questions that I see are those of interdependence and systemic risk. Where are there particular widely used technologies inside of a single provider—like identity and access management—where a software vulnerability or error could lead to compromise or outages across users (and availability zones cannot save you)? Where are there widely used technologies across providers—widely deployed superscalar processors like those from Intel, for example—that might be found vulnerable en masse and create impacts across cloud providers? Big cloud service providers are not necessarily well set up to solve some of these risks since they bridge across companies and there are a lot of incentives toward business secrecy.” 

Jim Higgins, chief information security officer, Snap Inc

“I think the challenges are well known to the cloud service providers themselves, but not to the public. We could use a lot more transparency as to what the cloud service providers feel are the large security issues and see if they are aligned with the expertise of their own customers.” 

Chris Hughes, chief information security officer and co-founder, Aquia

“On one hand, I am inclined to say yes, because the three largest Infrastructure-as-a-Service providers by market share—AWS, Microsoft Azure, and Google Cloud—are the only cloud service providers that are operating at such scale and scope. That said, they face specific challenges as providers upon which the entire modern Internet and digital infrastructure has become dependent, ushering in unseen levels of systemic risk across the ecosystem. The biggest question I see as unresolved in cloud security is how cloud service providers and regulatory bodies should work together to address that systemic risk and ensure that critical dependencies do not have devastating downstream impacts on thousands of companies and millions of individuals in nearly every industry vertical, including critical infrastructure and economic and national security. How do we fix transparency gaps that impede our ability to fully understand and address these systemic risks, while not stifling innovative cloud services in the marketplace?” 

Rich Mogull, analyst and chief executive officer, Securosis

“First, we need to accept that there are material differences between cloud providers. At one end are the hyperscale providers—AWS, Microsoft Azure, and Google Cloud. Of those, I think the companies understand the security concerns but do not necessarily prioritize them to the same degree. The recent Microsoft issue is one example. Other providers are not even playing the same game—especially Software-as-a-Service providers. It is the Wild West, and only some providers understand the security challenges and take them seriously. There really are not unresolved questions, but providers must do the work and stay on top of things. Right now, my biggest area of concern is Microsoft’s Entra ID (formerly Azure Active Directory).” 

Marc Rogers, chief technology officer, nbhd.ai:  

“While I believe the Infrastructure-as-a-Service providers have a better handle on their challenges than their customers do, the gaps are large and lead to incidents that blindside defenders. The risk that concerns me most is visibility and transparency, especially for the consumers of Infrastructure-as-a-Service. Attackers are already several steps ahead on understanding chains of trust, cross system exposure, and the building blocks like open-source software.”

#2 If cloud service providers are struggling to engineer critical services to the level of reliability that current threats demand—as demonstrated in the latest Microsoft cloud compromise—what role could policy play to help address this gap?

Hamin: “Understanding what went wrong would be a good start. There are several big, open questions about how a failure like this could be allowed to happen, and few satisfactory answers. A better understanding of real-world cloud compromises would help us understand why these failures occur and help drive solutions for problems ranging from underinvestment to unsafe designs. Cloud service providers should have more of an obligation to work with the government in the wake of a major incident, and government should have more tools (and drive) to translate those insights into public accountings and policy prescriptions.” 

Higgins: “At this point, I feel that it is time to bring in a cloud-focused version of FedRAMP to help move the cloud service providers into a stricter reporting framework.” 

Hughes: “While major cloud service providers may be struggling to engineer critical cloud-native services to the level of reliability that the current threats demand, there is not a viable alternative aside from returning to on-premises legacy infrastructure, which is not an option in the era of digital modernization. Policies and regulations can play a role in governing the cloud as they do for other critical infrastructure sectors on which society relies. As evident in a recent Atlantic Council report, cloud computing is now pervasive in nearly every aspect of society that touches software. Policy can also help, as discussed in the National Cybersecurity Strategy, by bringing some rationalization to bespoke, disparate, and duplicative frameworks and bolstering those that help properly manage risk in the era of cloud computing. Policies should require hyperscale cloud service providers to provide sufficient transparency for security incidents and disruptions to regulators, federal entities, and customers alike. Transparency breeds trust, but right now we exist in an opaque ecosystem of limited insight from cloud service providers.” 

Mogull: “This was a Microsoft issue, and I do not think the other hyperscale providers face the same struggle. That said, I see buying power as more capable of moving the needle than policy could be. The Trustworthy Computing initiative came about because the Defense Department told Microsoft that it would not purchase Microsoft products without massive security improvements. Right now, neither government agencies nor large companies are prioritizing security in their buying decisions, which means that there is not enough pressure on cloud service providers to improve security. Policy absolutely has a place, but I think cloud security could be improved more quickly and effectively if the government prioritized security in provider selection.” 

Rogers: “I see several opportunities for policy to support security without being overly burdensome. The Software Bill of Materials is already in flight and offers a way to shine a light on the ingredients of complex stacks. Clearing up the balance of liability is a motivator that would keep companies, including Infrastructure-as-a-Service providers, honest. A minimum set of tools, resources, and processes would lead to standardization and availability during critical moments. Minimum security features like logging should always be a default, not a profit center.”

#3 What is the difference between a software flaw and an architectural flaw in the cloud? How does policy address one vs. the other?

Hamin: “Software bugs are errors in written code that enable exploitation, such as unsanitized inputs used unsafely, unsafe use of memory, or incorrect permissions-checks. Architectural flaws are deeper flaws emerging from the design of complex software systems, such as inappropriate connections between services that should not be talking to each other, or concentrated reliance on a few brittle dependencies. Policy can mandate procedures (though often incomplete!) for how organizations should write code and train developers to avoid common vulnerable software patterns. But I think policy is just starting to think about architectural risk in software systems and does not have an evolved toolkit for addressing it yet.” 

Higgins: “The question is too general. Both can lead to equally widespread, negative impacts. Most architectures are software these days anyway.” 

Hughes: “Many may argue that these are one and the same, or increasingly similar, in an era in which we have software-defined perimeters, architectures, and computational resources provisioned declaratively through Infrastructure-as-Code languages. That debate aside, a software flaw would typically relate to written software in various programming languages and could escalate into a vulnerability, often tracked in a vulnerability database with a corresponding identifier. An architectural flaw, on the other hand, is not a vulnerability in the software itself but in how a system is configured. We have seen these run rampant in the cloud with customer misconfigurations that lead to incidents, but also in fundamental ways with how the cloud is architected that lead to scenarios such as system outages or even exploitation by malicious actors.” 

Mogull: “Software flaws are basically coding errors and vulnerabilities. Architectural flaws are more design decisions. For example, look at how AWS handles regions (highly segregated) compared to the competition. Policy cannot help here. Policy should demand a secure outcome and not define either software or architectural decisions. If lawmakers focus on the highly variable technical and architectural options that will change from year to year and day to day, they will never be able to keep up. Penalties for preventable security failures will force the right architectural decisions. We know what needs to be done to improve security, but prioritizing those actions is the issue.” 

Rogers: “Software flaws are easier to manage and support through policy by ensuring good practice such as the use of memory-safe languages or the implementation of widely understood Secure Development Guidelines in a well-developed software development life cycle. Architectural flaws are much more complicated. The low-hanging fruit can be addressed in a similar way to software with a mature software development life cycle, good testing practices, and industry guidance, such as the deprecation of known vulnerable configurations or methods. However, the more complex end gets much harder. Issues like logic flaws, interconnection with legacy infrastructure, and unintended contextual risks can be hard to eliminate completely and hard to draft policy for without chilling innovation or even making migrations impossible. My suggestion is to focus on the baseline software development life cycle and require high standards in testing and transparency.” 
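
To make the distinction the panelists draw above more concrete, the snippet below is a minimal, hypothetical sketch of the “software flaw” half of the divide: a classic coding error (unsanitized input concatenated into a query) next to its conventional fix (a parameterized query). The table, function names, and data are invented for illustration and are not drawn from any incident discussed in this series; architectural flaws, by contrast, live in how systems are composed and cannot be captured in a few lines of code.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Software flaw: user input is concatenated directly into the SQL statement,
    # so input such as "x' OR '1'='1" changes the meaning of the query (SQL injection).
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fix: a parameterized query treats the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])
    malicious = "nobody' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # returns every row: the flaw is exploitable
    print(find_user_safe(conn, malicious))    # returns an empty list: the input stays data
```

Policy can reach the first kind of error through secure development mandates and testing requirements; the architectural kind, which concerns how services, identities, and dependencies are wired together, is the harder target the panelists point to.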

#4 Why does transparency in cloud services and infrastructure matter for cloud users? What are some examples of what meaningful transparency looks like?

Hamin: “The shared responsibility model for cloud services has a lot of advantages, including outsourcing complexity to large Infrastructure-as-a-Service providers and letting small organizations take advantage of the benefits afforded by the cloud. But this model also breaks some important elements of how we think about risk management, especially with respect to data. Cloud customers are most often the ones with specific legal and contractual obligations to protect their data or to ensure operational continuity. However, customers often do not have visibility into what is behind the veil that separates their responsibility from that of their cloud service provider to understand the other half of that equation. Policy needs to adapt to make sure there are real mechanisms to propagate requirements for data protection, transparency, and trust for cloud service providers that provide computing infrastructure for healthcare or banking institutions, for example.” 

Higgins: “Transparency builds accountability and trust; it is that simple. Meaningful cloud service provider transparency may include: 1) Software Bills of Materials or some kind of accountability to show what software contains; 2) root access numbers to show how many employees have access to data under normal circumstances; and 3) logs of security incidents involving the cloud over a period of time, indicating response capabilities and whether incidents repeatedly share the same root causes.” 

Hughes: “Transparency in cloud services and infrastructure is paramount for cloud users, especially on the cybersecurity front. Cloud computing is fundamentally built on a shared responsibility model, which has implied assumptions of various responsibilities across the cloud provider and consumer. Without transparency, the assurance around those responsibilities being fulfilled by the provider is inherently called into question by the consumer, which threatens the entire cloud paradigm. Even an implied lack of transparency can rattle trust. Meaningful transparency would entail cloud service providers being forthcoming with the details of incidents, including how they were identified and confirmed, their potential ramifications, and meaningful actions consumers can take to mitigate risk. Being opaque with incident details, or providing them more slowly than other security researchers and vendors, bolsters neither the cloud service provider’s reputation nor the community’s trust.” 

Mogull: “Transparency allows customers to make both informed buying decisions and technical decisions. Meaningful transparency is seen in the AWS incident reports that the company releases after major public outages or issues. Lack of transparency is exemplified by how the company does not always disclose the scope of security incidents.” 

Rogers: “One of the greatest risks around cloud services is the fact that they include a tradeoff. You trade a significant amount of visibility and operational control for ease of implementation, access to mature technology, and reduced cost. That lack of visibility can strip away the ability of anyone but the provider to manage risks, understand the blast radius of an incident, or even know when an incident has occurred. A sensible amount of transparency—Software Bills of Materials, useful logs, transparent joint architecture reviews, and so on—can help mitigate the lack of visibility.”
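
Several panelists cite Software Bills of Materials (SBOMs) as a baseline transparency mechanism. As a rough, hypothetical illustration only, the sketch below assembles a toy component inventory for a fictional service and shows the kind of question it lets a defender answer quickly; the field names are simplified for readability and deliberately do not follow any particular SBOM standard such as SPDX or CycloneDX.

```python
import json

# Hypothetical, simplified inventory for a fictional service; illustrative only.
sbom = {
    "service": "example-billing-api",
    "version": "2.3.1",
    "components": [
        {"name": "openssl", "version": "3.0.9", "license": "Apache-2.0"},
        {"name": "log4j-core", "version": "2.20.0", "license": "Apache-2.0"},
        {"name": "requests", "version": "2.31.0", "license": "Apache-2.0"},
    ],
}

def find_component(inventory: dict, name: str):
    # Answers the incident-response question "do we ship this dependency at all?"
    return [c for c in inventory["components"] if c["name"] == name]

if __name__ == "__main__":
    print(json.dumps(sbom, indent=2))
    print(find_component(sbom, "log4j-core"))
```

The value the panelists describe lies less in any particular format than in being able to answer dependency questions during an incident without guesswork.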

#5 How can policymakers encourage cloud adoption in a way that supports security and does not create new sources of risk?

Hamin: “There is a reason we have critical infrastructure sectors over which the government performs more oversight than it does for other sectors. These are places where the risk of getting it wrong is too high to tolerate, and the major cloud service providers should be considered in that category already. Cloud service providers should be coming to the table and working with policymakers on risk and threat models, architecture reviews, and the like. That said, there is a lot of more mundane, we-know-it-already stuff that we need to get right for secure cloud adoption too. Organizations still fail to configure and use the cloud securely, and credential theft and phishing are still huge threats. These might be cases where government can lead the way in pushing known best practices and ensuring that sector-specific security regulations are up to date with the evolving needs of cloud-based systems.” 

Higgins: “No clue. We play whack-a-mole in the security world, meaning that when we fix one area of security, it causes another vulnerability to arise. Policy should drive awareness of risk rather than trying to reduce actual risk.” 

Hughes: “Policymakers can encourage secure cloud adoption by harmonizing and bolstering applicable frameworks, as well as providing more robust oversight and governance of these cloud service providers that are now dubbed ‘too big to fail’ and ‘critical infrastructure’ by some industry leaders. Encouraging adoption should also involve educating consumers, whose role has proven important in countless cloud security incidents. Policymakers should avoid hyperbole and spreading fear, uncertainty, and doubt related to the cloud, while instead raising valid concerns grounded in data. On-premises infrastructure is not infallible and has been breached or impacted by security incidents historically as well. That said, such breaches to on-premises infrastructure generally impacted a single organization or small group of customers, as opposed to the society-wide impact that cloud risks can bring.” 

Mogull: “Policymakers can encourage secure cloud adoption through transparency requirements on security incidents, mandated vulnerability disclosures, mandated customer notifications for security incidents, and buying pressure to steer agencies towards platforms that demonstrate a stronger security posture. If security issues continue, some providers may need to be classified as systemically vital, much as we do with systemically important financial institutions. That would put cloud service providers under a microscope. I would prefer we not get to that point, but some providers seem to be doing their best to drive that outcome.” 

Rogers: “First and foremost, while the cloud is a fantastic tool, it is not a panacea. Policymakers should use the levers of government to level the playing field, and use purchasing levers to ensure government business goes toward providers that help keep this playing field level, help ensure risk is controlled and, most importantly, empower their customers to handle a wide range of risk scenarios as standard practice.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Cloud risks and critical infrastructure appeared first on Atlantic Council.

Ukraine’s vibrant tech ecosystem is a secret weapon in the war with Russia https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-vibrant-tech-ecosystem-is-a-secret-weapon-in-the-war-with-russia/ Thu, 17 Aug 2023 17:31:59 +0000 https://www.atlanticcouncil.org/?p=673390 Ukraine’s secret weapon in the war against Russia is a vibrant and sophisticated tech ecosystem including around 300,000 IT professionals and hundreds of defense tech startups, writes Mykhailo Fedorov.

The post Ukraine’s vibrant tech ecosystem is a secret weapon in the war with Russia appeared first on Atlantic Council.

Russia’s full-scale invasion of Ukraine has unleashed the most technologically advanced war the world has ever seen. On land and sea, in the air and in cyberspace, both Russia and Ukraine are deploying rapidly advancing technologies on an almost daily basis. While Russia enjoys overwhelming advantages in terms of conventional military might, manpower, and resources, Ukraine can call upon a vibrant and sophisticated tech sector including around 300,000 IT professionals, and also benefits from a digital culture that is deeply rooted throughout Ukrainian society. This tech ecosystem is proving to be Ukraine’s secret weapon in the war against Russia.

Unsurprisingly, Ukraine’s current priority is the development of defense technologies that can help secure victory over Russia. This sector will likely remain at the heart of Ukraine’s tech ecosystem long after Russia is defeated, and has immense potential to shape the future growth of the country’s entire digital economy. With Ukraine currently serving as a testing ground for many of the world’s most advanced defense technologies, the country is already becoming an innovation hub and has every chance of establishing itself as a world leader in the defense tech sector.

Alongside this focus on defense technologies, Ukraine’s IT industry also continues to expand. Indeed, the IT sector is perhaps the only segment of the wartime Ukrainian economy that has remained on a growth trajectory since February 2022, creating new jobs, implementing new projects, and attracting investment. At Ukraine’s Ministry of Digital Transformation, we are seeking to do everything possible to help IT businesses not only survive but thrive, despite the unique challenges created by the Russian invasion. This includes the development of favorable tax and legal conditions via the Diia.City platform, the launch of free developer training opportunities, efforts to advance the reform of IT education, and enhanced backing for Ukrainian startups.

Launched just two weeks before the start of Russia’s full-scale invasion, Diia.City aims to offer some of the most attractive tax conditions in the world for companies operating in the IT and tech sectors. When we developed the architecture for the Diia.City platform, our goal was to demonstrate that lower tax rates can actually stimulate greater tax revenues for the state budget. This has now been confirmed by the most recent annual data from Ukraine’s State Tax Service. In 2022, Diia.City resident companies paid more than UAH 4.1 billion in taxes, which represents a 22.5% increase compared to the previous year. Today, more than 600 companies are residents of Diia.City. The list includes young Ukrainian companies and large international players such as Samsung, Visa, Nokia, Ajax Systems, and Global Logic.

Another current priority is support for Ukrainian startups. Even amid the horrors and upheavals of the Russian invasion, Ukrainians continue to innovate. Indeed, over the past eighteen months of war, there have been numerous examples of Ukrainians creating innovative new products that are helping the country defend itself. In order to encourage this trend, we are working to develop a dynamic venture capital investment ecosystem. Since December 2022, the Ministry of Digital Transformation has begun overseeing the Ukrainian Startup Fund, which has become the country’s largest angel investor. The fund has already supported over 350 startups and has now pivoted toward defense tech projects.

Ukraine’s flagship defense tech platform is the BRAVE1 initiative, a tech cluster for the development of the country’s defense tech industry. BRAVE1 was launched in spring 2023 by Ukraine’s Ministry of Digital Transformation, Ministry of Defense, Ministry of Economy, Ministry of Strategic Industries, the National Security and Defense Council, and the General Staff of the Armed Forces of Ukraine. The main goal of BRAVE1 is to create a fast track for innovation in the defense and security sectors. Any company or defense tech startup can find partners and gain assistance through the cluster. The overall objective is to build a system that will streamline the launch of defense tech projects. By midsummer 2023, BRAVE1 had registered approximately 400 projects, with almost 200 having also undergone military testing. The projects currently under development have been prioritized by Ukraine’s military leadership and include drones, robotic systems, electronic warfare, artificial intelligence tools, cybersecurity, communications, and information security management systems.

Making the most of Ukraine’s tech potential requires international investment. While security concerns inevitably cast a long shadow over the Ukrainian investment climate, there are some tentative signs of progress amid rising awareness of Ukraine’s tech potential. The European Commission’s European Innovation Council announced in May 2023 that it would allocate €20 million toward the development of Ukrainian startups and innovative projects. Meanwhile, the Seeds of Bravery project, in collaboration with the Ukrainian Innovation Development Fund, has been selected as a key partner of the EU, creating opportunities for hundreds of domestic startups to receive financial assistance and support for growth. This will include mentoring programs with top-tier figures from the international tech industry. The Ukrainian authorities are also actively supporting the participation of Ukrainian startups at international trade events such as London Tech Week, Viva Tech, TechCrunch Disrupt, and others. This already helped Ukrainian startups raise more than $10 million in additional funding in 2022.

Consolidating Ukraine’s tech ecosystem will require greater direct international involvement in addition to investment. With this in mind, Ukraine plans to launch an e-residency initiative by the end of 2023. This will allow citizens of other countries to run their tech businesses in Ukraine and benefit from favorable conditions, including attractive tax rates. During the pilot phase of the project, e-residency will be made available to citizens of Slovenia, India, Pakistan, and Thailand. This list will be expanded as the program is rolled out. The initiative aims to offer e-residents a fast, convenient, fully automated experience without the need to interact directly with any state officials. The first stage is expected to involve around 1,000 e-residents, which could generate $1 million for the Ukrainian budget. More importantly, it will introduce Ukraine to a new generation of tech professionals from around the world and elevate the country’s profile as an emerging global tech hub.

The current war has served to underline the strategic importance of a strong tech sector. Luckily, Ukraine has been moving in this direction for some years and has therefore been able to adapt rapidly to wartime conditions. Beyond the existential challenge of defeating Russia and securing Ukrainian statehood, I am now more convinced than ever that the tech sector will be the main engine of Ukraine’s future GDP growth. The Ukrainian economy will become increasingly digital in the years ahead, and the country will bolster its reputation not only as an exporter of IT services but as home to a wide range of globally competitive tech brands. To help make this happen, we will continue to create favorable conditions for international tech companies to scale up their businesses and open offices in Ukraine, as both Palantir and SpaceX have done since the start of Russia’s full-scale invasion.

Over the past eighteen months, Ukraine has demonstrated that it has the capacity to implement new ideas in the tech sector with remarkable creativity and efficiency. Wartime conditions are accelerating evolutionary processes within the country’s tech sector that have been underway for more than a decade; this is propelling Ukraine toward the status of digital superpower. It is now clear that Ukraine has the potential to become one of the world’s top ten innovation-driven economies. This will play a key role in the country’s future prosperity and will also help keep Ukrainians safe.

Mykhailo Fedorov is Ukraine’s Vice Prime Minister for Innovations and Development of Education, Science, and Technologies, and Minister of Digital Transformation.

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

The post Ukraine’s vibrant tech ecosystem is a secret weapon in the war with Russia appeared first on Atlantic Council.

The 5×5—Cyber conflict in international relations: A policymaker’s perspective https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-cyber-conflict-in-international-relations-a-policymakers-perspective/ Thu, 03 Aug 2023 16:30:17 +0000 https://www.atlanticcouncil.org/?p=667402 Current and former policymakers address cyber conflict’s fundamental place in international relations, their recommended readings, and ideas for how policymakers and scholars can more effectively engage one another.

The post The 5×5—Cyber conflict in international relations: A policymaker’s perspective appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

In last month’s edition of the 5×5, we featured a group of leading scholars who shared their views on cyber conflict in international relations. Contributors discussed the important interplay between the scholarly community and the policymaking sphere, as scholarly debate over cyber conflict’s place in international relations has driven seminal government strategies. For instance, key underpinnings of US Cyber Command’s 2018 decision to shift its strategy away from a deterrence-based approach and toward the concepts of Defend Forward and Persistent Engagement, a shift that has since improved effectiveness, can be traced back to a series of scholarly articles embodied in a recent book by Michael Fischerkeller, Emily Goldman (featured below), and Richard Harknett (featured in last month’s 5×5).

This time around, we brought together a group of distinguished individuals with past and present cyber policy experience across a range of government organizations to share their perspectives on the topic. They address cyber conflict’s fundamental place in international relations, some of their recommended readings for aspiring policymakers, disconnects between scholars and policymakers, and ideas for how both communities can more effectively engage one another.

#1 What, in your opinion, is the biggest misconception about cyber conflict’s role in international relations theory?

John Costello, principal, WestExec Advisors; former chief of staff and principal architect, Office of the National Cyber Director; former deputy executive director, Cyberspace Solarium Commission; former deputy assistant secretary of intelligence and security, Department of Commerce

“There are many. The history of cyber as a public policy topic has been (forgivably) dominated by the application of frameworks or theory ultimately ill-suited to the domain. The use of nuclear or conventional deterrence theory and related escalation dynamics are chief among them. It is understandable that familiar concepts would be used and take so long to shake. Cyber is at its heart espionage and sabotage. Perceptions of strength and advantage are hard to define and quantify. Though it can have physical effects, as any covert action can, cyber fundamentally relies on secrecy and an unacknowledged but accepted gray space wherein states tolerate—within limits—each other’s intelligence operations. Deterrence as a strategy, escalation dynamics, prescriptive international norms, and usefulness of cyber capabilities for tactical effect and compellence are all concepts that have been borrowed from prevailing scholarship and adapted into cyber. It is a misread of the core dynamics governing cyber operations, which look and act far more like intelligence operations than anything else.” 

Emily Goldman, strategist, US Cyber Command; former cyber advisor to the director of policy planning, US Department of State: 

The views expressed by Dr. Goldman do not reflect the official policy or position of the Department of Defense or the US government.  

“It is analytically and practically useful to talk of ‘operations and campaigns in and through cyberspace’ rather than ‘cyber conflict.’ The latter term conflates ‘means’ with geopolitical ‘condition’ (e.g., competition, militarized crisis, armed conflict). All conflict today has some cyber element. With this reframing, the biggest misconception in international relations theory is treating operations and campaigns in and through cyberspace as substitutes for conventional and nuclear forces, and therefore misapplying concepts and tools of deterrence, coercion, signaling, escalation management, and offense-defense advantage.” 

Nina Kollars, associate professor, Cyber and Innovation Policy Institute, US Naval War College

“The biggest misconception is about the role of state and military leadership as the primary drivers of effects. Because international relations theory tends to leverage states and militaries as the lead agents in its theories, it struggles to provide a useful understanding of the primary agents in cyberspace, wherein states and militaries are working on the edges.” 

Heli Tiirmaa-Klaar, director, Digital Society Institute, ESMT Berlin; former ambassador at large for cyber diplomacy and director general of the cyber diplomacy department, Estonian Ministry of Foreign Affairs

“Quite often, cyber technology is compared to nuclear technology. But information and communications technology and cyber tools are mostly dual-use: civilian technologies and applications that can be weaponized. This makes analysis and conceptualization of cyber threats more complex, requiring a thorough understanding of the actual tools used when assessing the impact of a given cyber operation.” 

Gavin Wilde, senior fellow, Technology and International Affairs Program, Carnegie Endowment for International Peace; former director for Russia, Baltic, and Caucasus affairs, National Security Council

“[The biggest misconception is] the notion that cyber conflict might be mapped nicely onto predominant international relations theories—particularly in the same way nuclear weapons were in previous eras. Thus far, offensive cyber operations lack the speed, precision, scope, and impact of kinetic weapons, while states hardly maintain a monopoly over cyber capabilities or the means of mitigating their effects. This makes realism and liberalism faulty lenses through which to analyze cyber conflict. Conversely, as cyber capabilities introduce more chaos into geopolitics, constructivism may yet have its moment: ‘anarchy is what we make of it.’”

#2 What would you like to see scholars and students studying cyber conflict better understand about policymaking?

Costello: “Budget, not policy, is the most authentic indicator of a state’s priorities. The significant divide between a state’s apparent policy priorities and its budgetary outlay is a terminal condition for achieving the objectives and aspirations it has set. In the United States, though the constitution and judiciary have assigned the president a preeminent prerogative for national security, his latitude and reach are always constrained by conflicting and disparate congressional interests in the budget process. Legislative inaction has often compelled the president to adapt executive power in its place—initiatives that are essentially unfunded mandates until blessed by congressional appropriators. This has become a significant problem in cyber and technology policy. Often the most effective tools at the United States’ disposal to mitigate risk and create advantages in strategic competition lie with departments or agencies that do not sport the budgetary flexibility and heft of our traditional national security agencies. Policy is shaped by politics, no doubt, but these institutional issues and competing interests can be just as significant in shaping the practical contours of policy and strategy for cyber and technology. Policymakers and scholars would do well to understand them.” 

Goldman: “The range of cyber topics occupying policymakers. Many scholars focus on the independent coercive impact of destructive cyber options. Policymakers are interested in how campaigning in and through cyberspace generates insights, opportunities, and effects—both technical and cognitive—that cumulatively produce strategic impact over time. Policymakers are also interested in how cyber plays out across the geopolitical conditions of competition, militarized crisis, and armed conflict, and particularly the transitions between them.” 

Kollars: “Policymaking at the corporate level is almost never discussed in generalized theories of international relations. But corporate policies, such as ‘How many characters in a tweet?’ ‘Who has a check mark?’ and ‘Can my account be anonymous?’—have fundamental effects on how the Internet is used.” 

Tiirmaa-Klaar: “I would recommend that students always look at cyber elements in any given conflict as part of a larger political-strategic picture. It is unusual for conflict to remain in the cyber domain and for cyber tools to be used without other tools. We have seen in both hybrid and kinetic conflicts how cyber tools were used to facilitate the political goals of warring parties. For example, Russia used cyber operations to create confusion among the population during its 2007 Estonian hybrid operation and as a means of facilitating its battlefield operational goals in Ukraine beginning in 2022.” 

Wilde: “[I would like scholars and students to better understand] that the dominant force working against policymaking on cyber issues is often bureaucratic inertia. The status quo, however tenuous, is often more preferable to key stakeholders than an uncertain shift in funding, authorities, or public visibility. Insofar as policies on cyber issues must address the competing structural incentives of the private sector and civil society, savvy decision makers will also recognize the need to examine those same dynamics within the government itself to be successful.”

#3 What is a scholarly piece of literature on cyber conflict that you recommend aspiring policymakers read closely and why?

Costello: “The work from Richard Harknett and Joshua Rovner, US Cyber Command’s scholars-in-residence over the past few years, is worth reading in depth. Rovner’s work on viewing cyber as an intelligence contest influenced the Cyberspace Solarium Commission’s approach; we looked to strategies and measures adopted by counterintelligence for lessons that could be applied to cyber. This understanding prompted, in part, the shift in principal focus to defense and resilience. Though not stated outright, the Solarium’s heavy focus on cyber defense as the missing element of deterrence was a heavily implied, intentional counter to the prevailing approaches that prioritized cost imposition and offense. Though important, these tools are less impactful in an intelligence contest than credible resilience.” 

Goldman: “Operators on the edge, diplomats, and military leaders internationally tell me that Cyber Persistence Theory is illuminating and persuasive. It resonates with their experience in ways other academic treatises have not. Max Smeets’ volume No Shortcuts focuses on building military cyber capacity, with implications for which actors are likely to wield sophisticated cyber capabilities whether in competition, crisis or conflict. His analysis is important for tempering policymakers’ fears and expectations about exquisite military-grade cyberspace operations.” 

Kollars: “Semi-State Actors in Cyber Security by Florian Egloff really broke ground here.” 

Tiirmaa-Klaar: “The real textbook for cyber conflict policymakers has not been written yet and should be written soon. I recommend books, such as Thomas Rid’s Active Measures, that give broader strategic and intelligence assessments of some key players.” 

Wilde: “I would highly recommend a great 2012 piece by Dr. Myriam Dunn Cavelty at ETH Zurich, entitled ‘The Militarisation of Cyberspace: Why Less May Be Better.’ In light of the Biden administration’s recently released National Cybersecurity Strategy, her piece was rather prescient for the time about the need for ‘governments and military actors [to] acknowledge that their role in cyber security can only be a limited one, even if they consider cyberattacks to be a major national security threat. Cybersecurity is and will remain a shared responsibility between public and private actors.’” 

#4 How has understanding of cyber conflict evolved in the last five years within the cyber policy community and how do you see it evolving in the next five years?  

Costello: “The past five years have seen a slow-growing fundamental rethink of many of the assumptions in the US approach to governing cyberspace, cybersecurity, and cyber conflict. Chief among these is the Department of Defense’s shift towards Defend Forward, underpinned by the Persistent Engagement theory of cyber competition. Accompanying the shift is growing skepticism of cyber operations as a decisive strategic deterrent and of their usefulness alongside or in place of conventional tactical operations in conflict. The type of widespread, disruptive cyberattack against critical infrastructure often predicted in military theory was conspicuously absent in Russia’s invasion of Ukraine in 2022. It is unclear whether Ukrainian and US cybersecurity efforts limited Russian options or whether tactical and strategic considerations caused Russia to withhold these capabilities. For all cyber’s advantages, it is less attractive when missiles are on the table. Time will tell, of course, but it is another data point to consider when trying to understand the practical usefulness of cyber capabilities in conflict. The Defend Forward construct has reshaped the Defense Department’s own conception of its role in fundamental ways, most prominently in extending and expanding its supportive role to international partners, the private sector, and other agencies. It is a de facto acknowledgement that the Defense Department is not always the best tool in gray zone-defined cyber conflict, but it is one with significant resources and capabilities. More broadly, the optimism of a truly borderless, global internet has been completely dashed. The past five years have seen an increasingly unwieldy fracturing of global cyberspace into different internet ecosystems and markets—each with its own priorities, laws, and norms. Disentanglement from China, European regulation, and overt preference for domestic firms have each contributed to this dynamic. This disentanglement and fracturing will likely contribute to new instability—or the likelihood that cyber conflict spills over into the physical world in disruptive ways. I do not see this trend reversing.” 

Goldman: “The biggest shift in the last five years is away from deterrence as the dominant strategic approach and from expectations that cyberspace operations in competition will escalate to militarized crisis and armed conflict. The next five years will be shaped by insights from the Russia-Ukraine conflict (hopefully not overly so) and by the integration of cyberspace campaigns with information operations—in turn shaped by adoption of disruptive technologies like AI.” 

Kollars: “The Russia-Ukraine war will provide fertile ground for scholars and policy makers alike. The sheer volume of the cyber dynamics, associated sensors, and corporate involvement will reshape what we think effective policy is when it comes to cyber conflict and information use.” 

Tiirmaa-Klaar: “First of all, many more people are in the cyber policy community today than there were five years ago. We have also come to understand the complexities of cyber conflict and have learned from actual scenarios. The war in Ukraine has provided a good lesson in understanding how adversaries plan to use cyber elements during conventional conflict. The other lesson is the need to develop cyber capabilities among NATO allies and other democratic states to create more robust cyber defense among likeminded states. In the next five years, I see serious deficiencies in most states’ preparedness to face serious cyber threats. In future conflicts, many states might experience cyber hostilities but will not be able to respond in a timely manner due to a lack of capabilities and coordination. International cyber capacity assistance is still very limited, and there are too few programs to advance the cyber resilience of countries beyond technologically advanced states. Among Western states, I predict steady growth of cyber capabilities and expertise that will help them become more resilient and prepared for future conflicts.” 

Wilde: “The conversation has slowly expanded beyond state-centric, highly sophisticated threat actors and broken conceptual frameworks centered on prevention and deterrence. The democratization of sophisticated cyber capabilities and the acknowledgement that disruptions—intentional or otherwise—are likely unavoidable have made resilience the organizing principle for policy. In this regard, the next five years will hopefully see market and regulatory pressures on industry to take a more ‘secure-by-design’ approach, more funding for non-military cyber capacity at the state and local levels, and more privacy protections for consumers.”

#5 How can scholars and policymakers of cyber conflict better incorporate perspectives from each other’s work?

Costello: “The scholars and public policy hands who are most effective in communicating their message often tailor it in form and substance to meet the needs of their audience—policymakers and their staff. Succinct, incisive, engaging, and accessible are watchwords. Secondly, they answer the ‘what’s next’ element of policymaking in ways that are useful—namely, accounting for and tailoring their policy recommendations to political needs, institutional or budgetary limitations, identifying supporting or opposing constituencies, and giving weight and consideration to feasibility and practical means of achieving progress. In other words, they develop ‘battle ready’ recommendations. It is easier for lawmakers and staff to translate them into action if they answer pressing policy problems, while having already thought through basic questions and vexing particulars.” 

Goldman: “Working side-by-side is an incredibly powerful way to bridge the scholar-policy gap, accelerate mutual learning, and generate convergent insights that drive innovation in policy and scholarship. US Cyber Command’s Scholar in Residence program and its Academic Engagement Network are paying huge dividends and are models for other organizations to adopt.” 

Kollars: “I think we are seeing a gradual maturation of the cyber conflict literature in academia, moving from the hyper-theoretical toward the practical and heavily empirical. In this sense, scholars are moving ever closer to investigating cyber conflict by examining variables over which policymakers can have effects. To accelerate that pace, academics can start thinking about which causal variables are useful to policymakers. Policymakers can help by signaling their span of control, and what lies outside of their control in cyberspace. A theory of multipolarity and cyber conflict is important, but it is harder for policymakers to find levers to pull in such work.” 

Tiirmaa-Klaar: “One field that needs specific scholarly and policy attention is determining how to enforce a framework for responsible state behavior. We have a UN framework, but its implementation has been suboptimal and, therefore, many states doubt whether international law can apply in cyberspace and whether norms can be helpful. I think this is a dangerous path, because we need to strengthen the normative elements in cyberspace and make sure more states adhere to the norms. Otherwise, we will face a future characterized by growing threats and cyber anarchy—this is what we must avoid. If scholars can come up with good recommendations on how to improve this situation, policymakers would gladly welcome these novel ideas.” 

Wilde: “[Scholars and policymakers can better incorporate perspectives from each other’s work] by acknowledging each other’s limitations. As an academic field, cyber conflict remains in relative infancy, highly theoretical with many unknown (and unknowable) aspects. Meanwhile, much of what is known about the practice of cyber conflict by militaries and intelligence services will likely remain classified. Scholars should therefore not profess cyber conflict to be a settled science, nor should policymakers presume to be operating on one.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Cyber conflict in international relations: A policymaker’s perspective appeared first on Atlantic Council.

]]>
Ukraine’s digital revolution is proving vital for the country’s war effort https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-digital-revolution-is-proving-vital-for-the-countrys-war-effort/ Thu, 27 Jul 2023 19:47:05 +0000 https://www.atlanticcouncil.org/?p=668050 Ukraine's remarkable resilience amid the biggest European war since World War II owes much to the country's ongoing digital revolution, writes Ukrainian Minister for Digital Transformation Mykhailo Fedorov.

The post Ukraine’s digital revolution is proving vital for the country’s war effort appeared first on Atlantic Council.

]]>
What is it like living through the nightmare of war in the twenty-first century? For most people, this would conjure up images of devastated cities, collapsing economies, and a desperate rush for survival. However, Ukraine’s experience over the past year-and-a-half is proof that life can go on, even in the most challenging of circumstances. Despite suffering the horrors and trauma of Russia’s ongoing invasion, Ukrainians continue to open new businesses, get married, pay taxes, and apply for financial assistance from the state. This remarkable resilience has been possible in large part thanks to Ukraine’s digital revolution.

Digital tools like Uber and Booking have long since become part of our everyday lives, providing basic services with maximum convenience. Imagine if all your interactions with the state could be as efficient and unintrusive; imagine if you could pay taxes or register a business in a matter of seconds via your smartphone. In Ukraine, this is already reality.

Launched in September 2019, Ukraine’s Diia app is a core element of the country’s digital infrastructure that has dramatically enhanced Ukrainian society’s ability to withstand the Russian invasion. More than 19.2 million Ukrainians currently use Diia, which is installed on approximately 70% of the smartphones in the country. In regions of Ukraine under Russian occupation, it is often the only way for Ukrainians to receive assistance or access services provided by the Ukrainian government.

International audiences have already noticed the effectiveness of Diia. Estonia, which has long been a global leader in the field of digital government, is currently implementing a similar app based on Diia. Countries in Latin America and Africa are next in line to develop their own versions. This recognition for Ukraine’s innovative e-governance app should come as no surprise; after all, it has now proven itself in the toughest of wartime conditions.

Diia is not just a wartime success story, of course. In the almost four years since it was first launched, the app has managed to slowly but surely transform Ukrainian attitudes toward innovation while shaping perceptions regarding the role of digital technologies in daily life. Following the introduction of digital passport and driver’s license functions, Diia became a routine tool for millions of Ukrainians. The realities of the Russian invasion have now further embedded the app into the country’s everyday existence.

This familiarity has helped build growing levels of public trust in the digital state. Indeed, most Ukrainians have overcome any initial fears related to the pace of digitalization in the country and now openly embrace Ukraine’s digital revolution. People are no longer paranoid about the possibility of personal data leakages and have stopped worrying about how elderly relatives will cope with smartphone technologies.

It has also been some time since I last encountered my personal favorite: What will happen if the state has all my personal information? In reality, of course, the state has always had access to the information that is now digitally available via Diia. The only difference is that users can finally access this information themselves in a highly convenient and transparent manner.

Since 2019, Ukrainians have learned that digital tools can offer high levels of security in addition to efficiency. Indeed, security has always been at the heart of Ukraine’s digital transformation. A number of important decisions were made on the eve of Russia’s full-scale invasion that underlined this commitment to prioritizing the safety of users and their data, including steps to move the entire Diia infrastructure to cloud format.

We created our own in-house team of hackers to probe the Ukrainian government’s digital systems for weaknesses, and also worked with partners to engage hackers around the world with the task of penetrating the Diia platform. They found no security vulnerabilities. Security has been further enhanced by the introduction of the Diia.Signature function, which provides an extra layer of protection for particularly sensitive e-services such as changing place of residence or registering a car.
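
To make the idea of that extra layer concrete, the hypothetical Python sketch below shows one common pattern for this kind of step-up protection: routine requests are served on an ordinary authenticated session, while operations marked as sensitive are refused unless they also carry a valid signature confirmation. This is only an illustration of the general pattern; the operation names and checks are invented for the example and do not describe Diia's actual implementation, which is not public.

    from typing import Set

    # Hypothetical list of operations that require the extra signature step.
    SENSITIVE_OPERATIONS: Set[str] = {"change_residence", "register_vehicle"}

    def signature_is_valid(request: dict) -> bool:
        # Placeholder check: a real service would verify a qualified electronic
        # signature against the user's certificate; here we only test presence.
        return bool(request.get("qualified_signature"))

    def handle_request(request: dict) -> str:
        operation = request["operation"]
        if operation in SENSITIVE_OPERATIONS and not signature_is_valid(request):
            return "rejected: qualified e-signature required"
        return "accepted: " + operation

    print(handle_request({"operation": "view_documents"}))
    print(handle_request({"operation": "register_vehicle"}))
    print(handle_request({"operation": "register_vehicle", "qualified_signature": "..."}))

The design point is simply that the signature requirement is enforced on the server side per operation, so a compromised session alone is not enough to complete the most consequential transactions.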

One of the last remaining arguments against digitalization is the claim that an over-reliance on digital technologies will backfire in the event of restricted access to electricity or the internet. Ukrainians have now debunked this myth by maintaining high degrees of digital connectivity throughout the fall and winter months of the past year, despite the blackout conditions created by Russia’s large-scale bombing campaign against Ukraine’s civilian energy infrastructure. Businesses continued to operate and online services remained available as the nation adapted from October 2022 onward to the uncertainty of regular power cuts. Satellite internet services such as Starlink and power generators all played crucial roles in this process, proving that digital solutions do not require perfect conditions in order to improve quality of life.

For the past eighteen months, Ukrainians have demonstrated that the digital state is both safe and convenient. They have done so in incredibly testing conditions amid the largest European war since World War II, which is also widely recognized as the world’s first full-scale cyberwar. This achievement should dispel any lingering doubts that governments across the world will all eventually go digital. As they embrace digitalization, other countries can look to Ukraine as a model and as a source of inspiration.

Mykhailo Fedorov is Ukraine’s Vice Prime Minister for Innovations and Development of Education, Science, and Technologies, and Minister of Digital Transformation.

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

The post Ukraine’s digital revolution is proving vital for the country’s war effort appeared first on Atlantic Council.

]]>
Pakistan needs to press pause on its data overhaul https://www.atlanticcouncil.org/blogs/new-atlanticist/pakistan-needs-to-press-pause-on-its-data-overhaul/ Wed, 26 Jul 2023 22:35:56 +0000 https://www.atlanticcouncil.org/?p=667665 Islamabad appears poised to push through onerous data regulations that will put the country's tech industry under strain—and raise concerns for consumers.

The post Pakistan needs to press pause on its data overhaul appeared first on Atlantic Council.

]]>
Pakistan’s government appears poised to push through onerous data regulations that will put the country’s tech industry under strain—and raise concerns for consumers. On July 26, Pakistani media reported that the country’s cabinet had approved the draft Personal Data Protection Bill 2023, signaling that parliament is likely to pass the legislation before its term ends in August. In addition, the cabinet approved the E-Safety Bill 2023, which is also expected to move through parliament in the coming days. While the data protection legislation has been in the works for several years, the sudden movement of the draft legislation has caught both industry and civil society by surprise, especially since there has been little engagement with stakeholders on the legislation in the past few weeks.

Earlier this year, our team at the Atlantic Council’s Pakistan Initiative discussed these issues with stakeholders in Pakistan’s technology sector, including members of digital rights groups, local technology businesses and industry associations, and global technology companies operating in Pakistan. 

As I wrote in April, a key takeaway from these conversations was that Islamabad should abandon “arbitrary actions and rulemaking” and follow a “more constructive, transparent, and collaborative approach to drafting legislation.” Unfortunately, Pakistan’s government has not followed such a process, with the US Chamber of Commerce saying on July 21 that “industry did not receive an invitation for comments” on the latest draft that the cabinet just approved. This sentiment is shared by digital rights advocates and representatives of domestic technology companies in Pakistan whom I have spoken with in the last day.

Another key recommendation from stakeholders in our April issue brief was that “policy must balance the need to regulate with avoiding the imposition of onerous compliance costs on the technology ecosystem.” This is especially true in the context of domestic technology companies, in particular those looking to scale and export their services, thereby helping earn scarce foreign exchange for Pakistan. The draft legislation ignores this recommendation as well, potentially increasing compliance costs and generating headwinds that will slow down the growth of the country’s burgeoning technology sector.

Some of the specific concerns voiced by stakeholders I have spoken with about the latest draft of the legislation revolve around broad definitions for categories of data, including critical and sensitive personal data. In addition, the scope and applicability of the legislation have raised concerns, and the requirements for cross-border data flows and data localization are expected to create significant challenges for the technology ecosystem. Finally, the composition and powers of the National Commission for Personal Data Protection have also raised eyebrows.

Commenting on Section 32 of the draft legislation, which provides for mandatory access to “sensitive personal data” by the Government of Pakistan, the US Chamber of Commerce wrote that the “requirement is inconsistent with the Government of Pakistan’s goal of protecting the personal data of individuals and guaranteeing their fundamental right to privacy under Article 14 of the Constitution of Pakistan.” In addition, including this section is likely to “lead to the conclusion under global privacy law norms that the [Personal Data Protection Bill] is not adequate and therefore likely to hamper data transfers into Pakistan.”

Data localization requirements are also expected to create challenges for both domestic and international technology companies operating in Pakistan. Jeff Paine, managing director of the Asia Internet Coalition, stated on July 26 that this requirement “will limit Pakistanis’ access to many global digital services.” In addition, Paine stated that the legislation “creates unnecessary complexities that will increase the cost of doing business and dampen foreign investment.”

Stakeholders have long agreed that legislation and regulation governing the technology ecosystem in Pakistan is necessary. The current draft legislation, however, falls significantly short of stakeholders’ expectations.

Pakistan’s technology sector is a bright spot within a rather bleak economic outlook, but passing legislation like the current draft of the bill will only weaken the sector—and, by extension, the broader economy. Lawmakers ought to reconsider their plans to approve this legislation and, instead, hit the pause button. Then, they should seek to address pressing concerns and make necessary amendments to this draft in an inclusive and transparent manner. Such a process will ensure that once elections are held in Pakistan in the coming months, a new government can debate and pass legislation that has the buy-in of key stakeholders, especially those from within the country’s technology ecosystem.


Uzair Younus is the director of the Pakistan Initiative at the Atlantic Council’s South Asia Center.

The post Pakistan needs to press pause on its data overhaul appeared first on Atlantic Council.

]]>
Ukraine’s tech sector is playing vital wartime economic and defense roles https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-tech-sector-is-playing-vital-wartime-economic-and-defense-roles/ Thu, 20 Jul 2023 16:35:49 +0000 https://www.atlanticcouncil.org/?p=665702 The Ukrainian tech industry has been the standout performer of the country’s hard-hit economy following Russia’s full-scale invasion and continues to play vital economic and defense sector roles, writes David Kirichenko.

The post Ukraine’s tech sector is playing vital wartime economic and defense roles appeared first on Atlantic Council.

]]>
The Ukrainian tech industry has been the standout sector of the country’s hard-hit economy during the past year-and-a-half of Russia’s full-scale invasion. It has not only survived but has adapted and grown. Looking ahead, Ukrainian tech businesses will likely continue to play a pivotal role in the country’s defense strategy along with its economic revival.

While Ukraine’s GDP plummeted by 29.1% in 2022, the country’s tech sector still managed to outperform all expectations, generating an impressive $7.34 billion in annual export revenues, which represented 5% year-on-year growth. This positive trend has continued into 2023, with IT sector monthly export volumes up by nearly 10% in March.

This resilience reflects the combination of technical talent, innovative thinking, and tenacity that has driven the remarkable growth of the Ukrainian IT industry for the past several decades. Since the 2000s, the IT sector has been the rising star of the Ukrainian economy, attracting thousands of new recruits each year with high salaries and exciting growth opportunities. With the tech industry also more flexible than most in terms of distance working and responding to the physical challenges of wartime operations, IT companies have been able to make a major contribution on the economic front of Ukraine’s resistance to Russian aggression.

Prior to the onset of Russia’s full-scale invasion in February 2022, the Ukrainian tech sector boasted around 5,000 companies. Ukrainian IT Association data for 2022 indicates that just two percent of these companies ceased operations as a result of the war, while software exports actually grew by 23% during the first six months of the year, underlining the sector’s robustness. Thanks to this resilience, the Ukrainian tech sector has been able to continue business relationships with its overwhelmingly Western clientele, including many leading international brands and corporations. According to a July 2022 New York Times report, Ukrainian IT companies managed to maintain 95% of their contracts despite the difficulties presented by the war.

In a world where digital skills are increasingly defining military outcomes, Ukraine’s IT prowess is also providing significant battlefield advantages. Of the estimated 300,000 tech professionals in the country, around three percent are currently serving in the armed forces, while between 12 and 15 percent are contributing to the country’s cyber defense efforts. Meanwhile, Ukraine’s IT ecosystem, hardened by years of defending against Russian cyber aggression, is now integral to the nation’s defense.

A range of additional measures have been implemented since February 2022 to enhance Ukrainian cyber security and safeguard government data from Russian attacks. Steps have included the adoption of cloud infrastructure to back up government data. Furthermore, specialized teams have been deployed to government data centers with the objective of identifying and mitigating Russian cyber attacks. To ensure effective coordination and information sharing, institutions like the State Service for Special Communications and Information Protection serve as central hubs, providing updates on Russian activities and the latest threats to both civilian and government entities.

Today’s Ukraine is often described as a testing ground for new military technologies, but it is important to stress that Ukrainians are active participants in this process who are in many instances leading the way with new innovations ranging from combat drones to artillery apps. This ethos is exemplified by initiatives such as BRAVE1, which was launched by the Ukrainian authorities in 2023 as a hub for cooperation between state, military, and private sector developers to address defense issues and create cutting-edge military technologies. BRAVE1 has dramatically cut down the amount of time and paperwork required for private sector tech companies to begin working directly with the military; according to Ukraine’s defense minister, this waiting period has been reduced from two years to just one-and-a-half months.

One example of Ukrainian tech innovation for the military is the Geographic Information System for Artillery (GIS Arta) tool developed in Ukraine in the years prior to Russia’s 2022 full-scale invasion. This system, which some have dubbed the “Uber for artillery,” optimizes across variables like target type, position, and range to assign “fire missions” to available artillery units. Battlefield insights of this nature have helped Ukraine to compensate for its significant artillery hardware disadvantage. The effectiveness of tools like GIS Arta has caught the attention of Western military planners, with a senior Pentagon official saying Ukraine’s use of technology in the current war is a “wake-up call.”
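
The exact logic behind GIS Arta has not been published, but the general idea of matching a spotted target to the most suitable available firing unit can be illustrated with a deliberately simplified, hypothetical Python sketch like the one below. The data structures, scoring rule, and figures are invented for illustration and are not a description of the real system.

    import math
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Unit:
        name: str
        x_km: float            # position on a simple flat grid, for illustration only
        y_km: float
        max_range_km: float
        ready_in_min: float    # time until the unit can fire again

    @dataclass
    class Target:
        x_km: float
        y_km: float

    def distance_km(unit: Unit, target: Target) -> float:
        return math.hypot(unit.x_km - target.x_km, unit.y_km - target.y_km)

    def assign_fire_mission(units: List[Unit], target: Target) -> Optional[Unit]:
        """Pick an in-range unit, preferring whichever can engage soonest, then closest."""
        in_range = [u for u in units if distance_km(u, target) <= u.max_range_km]
        if not in_range:
            return None
        return min(in_range, key=lambda u: (u.ready_in_min, distance_km(u, target)))

    units = [
        Unit("battery-A", 0.0, 0.0, max_range_km=25.0, ready_in_min=2.0),
        Unit("battery-B", 10.0, 5.0, max_range_km=40.0, ready_in_min=6.0),
    ]
    target = Target(x_km=18.0, y_km=7.0)
    chosen = assign_fire_mission(units, target)
    print(chosen.name if chosen else "no unit in range")

Whatever the real scoring rule looks like, the reported value of such tools lies less in the assignment step itself than in shrinking the time between a target being spotted and a suitable unit receiving the mission.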

Alongside intensifying cooperation with the state and the military, members of Ukraine’s tech sector are also taking a proactive approach on the digital front of the war with Russia. A decentralized IT army, consisting of over 250,000 IT volunteers at its peak, has been formed to counter Russian digital threats. Moreover, the country’s underground hacktivist groups have shown an impressive level of digital ingenuity. For example, Ukraine’s IT army claims to have targeted critical Russian infrastructure such as railways and the electricity grid.

Ukraine’s tech industry has been a major asset in the fightback against Russia’s invasion, providing a much-needed economic boost while strengthening the country’s cyber defenses and supplying the Ukrainian military with the innovative edge to counter Russia’s overwhelming advantages in manpower and military equipment.

This experience could also be critical to Ukraine’s coming postwar recovery. The Ukrainian tech industry looks set to emerge from the war stronger than ever with a significantly enhanced global reputation. Crucially, the unique experience gained by Ukrainian tech companies in the defense tech sector will likely position Ukraine as a potential industry leader, with countries around the world eager to learn from Ukrainian specialists and access Ukrainian military tech solutions. This could serve as a key driver of economic growth for many years to come, while also improving Ukrainian national security.

David Kirichenko is an editor at Euromaidan Press, an online English language media outlet in Ukraine. He tweets @DVKirichenko.

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

The post Ukraine’s tech sector is playing vital wartime economic and defense roles appeared first on Atlantic Council.

]]>
Jeglinskas featured in delfi.lt on potential cyber ​​threats and NATO https://www.atlanticcouncil.org/insight-impact/in-the-news/jeglinskas-featured-in-delfi-lt-discussing-cyber-threats-and-nato/ Sat, 08 Jul 2023 02:53:00 +0000 https://www.atlanticcouncil.org/?p=702023 On July 7, Transatlantic Security Initiative nonresident senior fellow Giedrimas Jeglinskas was interviewed in delfi.lt on cyber ​​threats and NATO (text in Lithuanian). 

The post Jeglinskas featured in delfi.lt on potential cyber ​​threats and NATO appeared first on Atlantic Council.

]]>
On July 7, Transatlantic Security Initiative nonresident senior fellow Giedrimas Jeglinskas was interviewed in delfi.lt on cyber ​​threats and NATO (text in Lithuanian). 

The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

The post Jeglinskas featured in delfi.lt on potential cyber ​​threats and NATO appeared first on Atlantic Council.

]]>
Global Strategy 2023: Winning the tech race with China https://www.atlanticcouncil.org/content-series/atlantic-council-strategy-paper-series/global-strategy-2023-winning-the-tech-race-with-china/ Tue, 27 Jun 2023 13:00:00 +0000 https://www.atlanticcouncil.org/?p=655540 The United States and the People’s Republic of China (PRC) are engaged in a strategic competition surrounding the development of key technologies. Both countries seek to out-compete the other to achieve first-mover advantage in breakthrough technologies, and to be the best country in terms of the commercial scaling of emerging and existing technologies.

The post Global Strategy 2023: Winning the tech race with China appeared first on Atlantic Council.

]]>
As strategic competition between the United States and China continues across multiple domains, the Scowcroft Center for Strategy and Security, in partnership with the Global China Hub, has spent the past year hosting a series of workshops aimed at developing a coherent strategy for the United States and its allies and partners to compete with China around technology. Based on these workshops and additional research, we developed our strategy for the US to retain its technological advantage over China and compete alongside its allies and partners.

Strategy Paper Editorial board

Executive editors

Frederick Kempe
Alexander V. Mirtchev

Editor-in-chief

Matthew Kroenig

Editorial board members

James L. Jones
Odeh Aburdene
Paula Dobriansky
Stephen J. Hadley
Jane Holl Lute
Ginny Mulberger
Stephanie Murphy
Dan Poneman
Arnold Punaro

Executive summary

The United States and the People’s Republic of China (PRC) are engaged in a strategic competition surrounding the development of key technologies. Both countries seek to out-compete the other to achieve first-mover advantage in breakthrough technologies, and to be the best country in terms of the commercial scaling of emerging and existing technologies.

Until recently, the United States was the undisputed leader in the development of breakthrough technologies, and in the innovation and commercial scaling of emerging and existing technologies, while China was a laggard in both categories. That script has changed dramatically. China is now the greatest single challenger to US preeminence in this space. 

For the United States, three goals are paramount. The first is to preserve the US advantage in technological development and innovation relative to China. The second is to harmonize US strategy and policy with those of US allies and partners, while gaining favor with nonaligned states. The third is to retain international cooperation around trade in technology and in scientific research and exploration.

The strategy outlined in these pages has three major elements: the promotion of technologically based innovation; the protection of strategically valuable science and technology (S&T) knowhow, processes, machines, and technologies; and the coordination of policies with allies and partners. The shorthand for this triad is “promote, protect, and coordinate.”

On the promotion side, if the United States wishes to remain the leading power in scientific research and in translating that research into transformative technologies, then the US government—in partnership with state and local governments, the private sector, and academia—will need to reposition and recalibrate its policies and investments. On the protect side, a coherent strategy requires mechanisms to protect and defend a country’s S&T knowledge and capabilities from malign actors, including trade controls, sanctions, investment screening, and more. Smartly deploying these tools, however, is exceedingly difficult and requires the United States to hone its instruments in a way that yields only intended results. The coordination side focuses on “tech diplomacy,” given the need to ensure US strategy and policy positively influence as many allies, partners, and even nonaligned states as possible, while continuing to engage China on technology-related issues. The difficulty lies in squaring the interests and priorities of the United States with those of its allies and partners, as well as nonaligned states, and even China itself. 

This strategy assumes that China will remain a significant competitor to the United States for years to come. It also assumes that relations between the United States and China will remain strained at best or, at worst, devolve into antagonism or outright hostility. Even if a thaw were to reset bilateral relations entirely, the US interest in maintaining its advantage in technological development would remain. 

Any successful long-term strategy will require that the US government pursue policies that are internally well coordinated, are based on solid empirical evidence, and are flexible and nimble in the short run, while being attentive to longer-run trends and uncertainties. 

There are two major sets of risks accompanying this strategy. Overreach is one because decoupling to preserve geopolitical advantages can be at odds with economic interests. A second involves harms to global governance including failure to continue cooperation surrounding norms and standards to guide S&T research, and failure to continue international science research cooperation focused on solving global-commons challenges such as pandemics and climate change. 

The recommendations that follow from this analysis include the following, all directed at US policymakers.

  1. Restore and sustain public research and development (R&D) funding for scientific and technological advancement.
  2. Improve and sustain STEM (science, technology, engineering, and math) education and skills training across K–12, university, community college, and technical schools.
  3. Craft a more diverse tech sector.
  4. Attract and retain highly skilled talent from abroad.
  5. Support whole-of-government strategy development.
  6. Ensure private-sector firms remain at the cutting edge of global competitiveness. 
  7. Improve S&T intelligence and counterintelligence.
  8. Ensure calibrated development and application of punitive measures. 
  9. Build out and sustain robust multilateral institutions.
  10. Engage with China, as it cannot be avoided.

A 2033 What If…

Imagine that it is the year 2033. Imagine that China has made enormous strides forward in the technology arena at the expense of the United States and its allies and partners. Suppose that this outcome occurred because, between 2023 and 2033, China’s economy not only does not weaken substantially but instead goes from strength to strength, including (importantly) increasing its capabilities in technological development and innovation. Suppose, too, that the US government failed to craft and maintain the kinds of investments and policies that are needed to sustain and enhance its world-leading tech-creation machine—its “innovation ecosystem”—to stay ahead of China. Suppose that the US government also failed to properly calibrate the punitive measures designed to prevent China from acquiring best-in-class technologies from elsewhere in the world—where calibration means the fine-tuning of policies to achieve prescribed objectives without spillover consequence. Finally, suppose that the United States and its allies and partners around the world failed to align with one another in terms of strategies and policies regarding how to engage China and, just as critically, about alignment of their own ends. What might that world look like?

Looking at that world from the year 2033, a first observation is that the era of US scientific and technological (S&T) advantage, which lasted from 1945 to the 2020s, has come to an end. In its place is a world where China’s government labs, universities, and firms are often the first to announce breakthrough scientific developments and the first to turn them into valuable technologies.

For the US government and for allies and partners in the Indo-Pacific region, the strategic consequences are severe, as China has not only closed much of the defense spending gap by 2033 but is able to employ weaponry as advanced as, and in some cases more advanced than, that of the United States and its allies.1 Military planners from Washington to New Delhi watch China’s rising capabilities with much anxiety, given the geostrategic leverage that such changes have given Beijing across the region.

Nor is this problem the only headache for the United States and its coalition of partners in 2033. For a variety of reasons, many of China’s tech firms are outcompeting those elsewhere in the world, including some of the United States’ biggest and most important firms. Increasingly, the world looks as much to Shenzhen as to Silicon Valley for the latest tech-infused products and services.

China’s long-standing ambition to give its tech firms an advantage has paid off. The Chinese state has successfully pursued its strategy of commercial engagement with other countries, one that has been well known for decades and is characterized by direct and indirect financial and technical aid for purchases of Chinese hardware and software. This approach, while imperfect, drove adoption of Chinese technology abroad, with much of that adoption happening in the Global South.2 Across much of Africa, Latin America, South and Southeast Asia, and the Middle East, China has grown into the biggest player in the tech space, with its technologies appealing both to consumers and to many governments looking for financial assistance in upgrading their tech infrastructure. Moreover, China’s tech assistance has aided authoritarian governments seeking the means to control access to information, especially online, and the desire to surveil citizens and suppress dissent.3 China’s efforts have been a major reason why the internet has fractured in many countries around the world. The ideal of the internet as an open platform is largely gone, replaced by a system of filtered access to information—in many instances, access that is controlled by authoritarian and illiberal states.

In 2033, even the biggest US-based tech firms struggle to keep pace with Chinese firms, as do tech firms based in Europe, Asia, and elsewhere. Although still formidable, Western firms find themselves at a disadvantage in both domestic and foreign markets. China’s unfair trading practices have continued to give its firms an edge, even in markets in mature economies and wealthy countries. These practices include massive direct and indirect state subsidies and regulatory support for its firms, suspect acquisition—often outright theft—of intellectual property (IP) from firms abroad, and requirements that foreign firms transfer technology to China in exchange for access to its enormous domestic consumer market, in 2033 the biggest in the world.4 When these practices are added to the real qualitative leaps that China has made in terms of the range and sophistication of its tech-based products and services, foreign firms are often on the back foot even at home. In sector after sector, China is capturing an increasingly large share of global wealth.

Nor is this all. China’s rising influence means that the democratic world has found it impossible to realize its preferences concerning the global governance of technology. This problem extends beyond China’s now significant influence on technical-standards development within the range of international organizations that are responsible for standards.5 The problem is much larger than even that. Since the early 2020s, because of decreasing interest in scientific cooperation, the United States, China, and Europe have been unable to agree on the basic norms and principles that should guide the riskiest forms of advanced tech development. As a result, big gaps have appeared in how the major players approach such development. This patchwork, incomplete governance architecture has meant that countries, firms, and even individual labs have forged ahead without common ethical-normative frameworks to guide research and development. In such fields as artificial intelligence (AI), China has increased its implementation of AI-based applications that have eroded individual rights and privacies—for example, AI-driven facial-recognition technologies used by the state to monitor individual activity—not only within China, but in parts of the world where its technologies have been adopted.6

Nor is even this long list all that is problematic in the year 2033. Scientific cooperation between the United States and China—and, by extension, China and many US allies and partners—has declined precipitously since 2023. Cross-national collaboration among the world’s scientists has always been a proud hallmark of global scientific research, delivering progress on issues ranging from cancer treatments to breakthrough energy research. Collaboration between China on the one hand, and Western states on the other, used to be a pillar of global science. Now, unfortunately, much of that collaboration has disappeared, given the rising suspicions and antagonism and the resulting policies that were implemented to limit and, in some cases, even block scientific exchange.7

From the perspective of developments that led to this point in the year 2033, the United States and its allies and partners failed to pursue a coherent, cooperative, and united strategy vis-à-vis strategic competition with China. Policymakers were unable to articulate, and then implement, policies that were consistent over time and across national context. Various international forums were created for engagement on strategy and policy questions, but they proved of low utility as policy harmonization bodies or tech trade-dispute mechanisms.

Opening session of US-China talks at the Captain Cook Hotel in Anchorage, Alaska, US March 18, 2021. REUTERS/Frederic J. Brown

Strategic context

The above scenario, which sketches a world in 2033 where China has gained the upper hand at the expense of the United States and its allies and partners, is not inevitable. As this strategy paper articulates, there is much that policymakers in the United States and elsewhere can do to ensure that more benign futures, from their perspectives, are possible. However, as this strategy paper also articulates, their success is far from a given.

The United States and the People’s Republic of China (PRC) are engaged in a strategic competition surrounding the development of key technologies, including advanced semiconductors (“chips”), AI, advanced computing (including quantum computing), a range of biotechnologies, and much more. Both countries seek to out-compete the other to achieve first-mover advantage in breakthrough technologies, and to be the best country at the commercial scaling of emerging and existing technologies.

These two capabilities—the first to develop breakthrough technologies and the best at tech-based innovation—overlap in important respects, but they are not identical and should not be regarded as the same thing. The first country to build a quantum computer for practical application (such as advanced decryption) is an example of the former capability; the country that is best at innovating on price, design, application, and functionality of electric vehicles (EVs) is an example of the latter capability. The former will give the inventing country a (temporary) strategic and military advantage; the latter will give the more innovative country a significant economic edge, indirectly contributing to strategic and military advantage. The outcome of this competition will go a long way toward determining which country—China or the United States—has the upper hand in the larger geostrategic competition between them in the coming few decades.

For China, the primary goal is to build an all-encompassing indigenous innovation ecosystem, particularly in sectors that Chinese leadership has deemed critical. Beijing views technology as the main arena of competition and rivalry with the United States, with many high-level policies and strategy documents released under Xi Jinping’s tenure emphasizing technology across all aspects of society. Under Xi’s direction, China has intensified its preexisting efforts to achieve self-sufficiency in key technology sectors, centering on indigenous innovation and leapfrogging the United States. 

On the US side, the Joe Biden administration and Congress have emphasized the need to maintain leadership in innovation and preserve US technological supremacy. Although there are many similarities between the Donald Trump and Biden administrations’ approaches to competition with China, one of the primary differences has been the Biden administration’s focus on bringing allies and partners onboard and trying to make policies as coordinated and multilateral as possible. While a laudable goal, seamless coordination among allies and partners is proving difficult to implement.

Until recently, the United States was the undisputed leader in the development of breakthrough technologies, and in the innovation and commercial scaling of emerging and existing technologies. Until recently, China was a laggard in both categories, falling well behind the United States and most, if not all, of the world’s advanced economies in both the pace of scientific and technological (S&T) development and the ability to innovate around technologically infused products and services.

That script has changed dramatically as a result of China’s rapid ascension up the S&T ladder, starting with Deng Xiaoping’s reforms in the 1970s and 1980s and continuing through Xi Jinping’s tenure.8

Although analysts disagree about how best to measure China’s current S&T capabilities and its progress in innovating around tech-based goods and services, there is no dispute that China is now the greatest single challenger to US preeminence in this space. In some respects, China may already have important advantages over the United States and all other countries—for example, in its ability to apply what has been labeled “process knowledge,” rooted in the country’s vast manufacturing base, to improve upon existing tech products and invent new ones.9

Chinese President Xi Jinping speaks at the military parade marking the 70th founding anniversary of People’s Republic of China, on its National Day in Beijing, China October 1, 2019. REUTERS/Jason Lee

This competition represents a new phase in the two countries’ histories. The fall of the Berlin Wall and the decade that followed saw US leadership seek to include China as a member of the rules-based international order. In a March 2000 speech, President Bill Clinton spoke in favor of China’s entry into the World Trade Organization (WTO), arguing that US support of China’s new permanent normal trade relations (PNTR) status was “clearly in our larger national interest” and would “advance the goal America has worked for in China for the past three decades.”10 China’s leadership returned the favor, with President Jiang Zemin later stating that China “would make good on [China’s] commitments…and further promote [China’s] all-directional openness to the outside world.”11

Despite some US concerns, the period from 2001 through most of the Barack Obama administration saw Sino-American relations at their best.12 The lure of the Chinese market was strong, with bilateral trade in goods exploding from less than $8 billion in 1986 to more than $578 billion in 2016.13 People-to-people exchanges increased dramatically as well, with the number of tourists from China rising from 270,000 in 2005 to 3.17 million in 2017, and the number of F-visas granted to PRC students increasing tenfold, from approximately 26,000 in 2000 to nearly 250,000 in 2014.14 US direct investment in China also grew significantly after 2000, as US companies saw the vast potential of the Chinese market and workforce. Notably, overall US investment in China continued to grow even after the COVID-19 pandemic.15

So what changed? In a 2018 essay titled “The China Reckoning,” China scholars Ely Ratner and Kurt Campbell—now both members of the Biden administration—described how the US plan for China and its role in the international system had not gone as hoped. 

Neither carrots nor sticks have swayed China as predicted. Diplomatic and commercial engagement have not brought political and economic openness. Neither US military power nor regional balancing has stopped Beijing from seeking to displace core components of the US-led system. And the liberal international order has failed to lure or bind China as powerfully as expected. China has instead pursued its own course, belying a range of American expectations in the process.

Campbell and Ratner, “The China Reckoning.”

These sentiments were shared by many others in Washington. Many felt like China was taking advantage of the United States as the Obama administration transitioned to its “pivot to Asia.” For example, in 2014 China sent an uninvited electronic-surveillance ship alongside four invited naval vessels to the US-organized Rim of the Pacific (RIMPAC) military exercises, damaging what had appeared to be improving military-to-military relations.16 On the economic side, despite the two sides signing an agreement in April 2015 not to engage in industrial cyber espionage, it soon became clear that China did not plan to uphold its side of the bargain. In 2017, the US Department of Justice indicted three Chinese nationals for cyber theft from US firms, including Moody’s Analytics, Siemens AG, and Trimble.17

Within China, political developments were also driving changes in the relationship. Xi Jinping assumed power in November 2012, and most expected him to continue on his predecessors’ trajectory. However, in 2015 a slew of Chinese policies caught the eye of outside observers, especially the “Made in China 2025” strategy that caused a massive uproar in Washington and other global capitals, given its explicit focus on indigenization of key sectors, including the tech sector. 

On the US side, when President Trump took office in 2017, the bilateral economic relationship came under further fire, sparked by growing concerns surrounding China’s unfair trade practices, IP theft, and the growing trade deficit between the two countries. For the first time, frustration over these issues brought about strong US policy responses, including tariffs on steel, aluminum, soybeans, and more, a Section 301 investigation of Chinese economic practices by the US trade representative, and unprecedented export controls on the Chinese firms Huawei and ZTE. On the Chinese side, a growing emphasis on self-reliance, in conjunction with narratives surrounding the decline of the West, has dominated the conversation at the highest levels of government. In many instances, some of these statements—like China’s relatively unachievable indigenization goals in the semiconductor supply chain—have pushed the US policy agenda closer toward one centering on zero-sum tech competition.

In 2023, the Biden administration continued some Trump-era policies toward China, often reaching for export controls as a means to prevent US-origin technology from making its way to China. The Biden administration is even considering restricting outbound investment into China, stemming from concerns around everything from pharmaceutical supply chains to military modernization. The bottom line is that US-China competition is intense, and is here to stay for the foreseeable future. 

Goals

There are three underlying goals for policymakers in the United States to consider when developing a comprehensive strategy. 

  1. Preserve the US advantage in technological development and innovation relative to China. Although the United States has historically led the world in the development of cutting-edge technologies, technological expertise, skills, and capabilities have proliferated worldwide and eroded this advantage. Although the United States arguably maintains its first position, it can no longer claim to be the predominant global S&T power across the entire board. As a result, US leadership will have to approach this issue with a clear-eyed understanding of US capabilities and strengths, as well as weaknesses. 

    Further, it is impractical to believe that the United States alone can lead in all critical technology areas. US policymakers must determine (with the help of the broader scientific community) not only which technologies are critical to national security but also how these technologies are directly relevant in a national security context. This point suggests the need for aligning means with ends—what is the US objective in controlling or promoting a specific technology? Absent strong answers to this question, technology controls or promotion efforts will likely yield unintended results, both good and bad. 

    Moreover, the United States’ capacity to transform basic research into applications and commercial products is an invaluable asset that has propelled its innovation ecosystem for decades. Chinese leadership, by contrast, is keenly aware of its own deficiencies in this area.

    First-mover advantage in laboratory scientific research is not the same thing as innovation excellence. A country needs both if it seeks predominance. A country can have outstanding scientific capabilities but poor innovation capacity (or vice versa). Claims that China is surpassing the United States and other advanced countries in critical technology areas are premature, and often fail to consider how metrics to assess innovative capacity interact with one another (highly cited publications, patents, investment trends, market shares, governance, etc.).18 Assessing a country’s ability to preserve or maintain its technological advantage requires a holistic approach that takes all of these factors into account.
  2. Harmonize strategy and policy with allies and partners, while gaining favor with nonaligned states. With respect to strategic competition vis-a-vis China, the interests of the United States are not always identical to those of its allies and partners. Any strategy designed to compete in the tech space with China needs to align with the strategies and interests of US allies and partners. Simultaneously, US strategy should offer benefits to nonaligned states within the context of this strategic competition with China, so as to curry favor with them.

    This goal is especially important, given that the United States relies on and benefits from a network of allies and partners, whereas China aspires to self-sufficiency in S&T development. To preserve the United States’ advantage, US leadership must first recognize that its network is one of the strongest weapons in the US arsenal.

    US allies and partners, of which there are many, want to maintain and strengthen their close diplomatic, security, and economic ties to the United States. The problem is that most also have substantial, often critical, economic relationships with China. Hence, they are loath to jeopardize their relationships with either the United States or China. 

    This strategic dilemma has become a significant one for US allies in both the transpacific and transatlantic arenas. For example, Japan and South Korea, the two most advanced technology-producing countries in East Asia, are on the front lines of this dilemma. Their challenging situation owes to their geographic proximity to China on the one hand—and, hence, proximity to China’s strategic ambitions in the East and South China Seas, as well as Taiwan—and to their close economic ties to both China and the United States on the other.19 Although both have been attempting an ever-finer balancing act between the United States and China for years, the challenge is becoming more difficult.20 In January 2023, Japan reportedly joined the United States and the Netherlands to restrict sales of advanced chipmaking lithography machines to China, despite the policy being against its clear economic interests.21 In April and May 2023, even before China banned sales of chips from Micron Technology, a US firm, the US government was urging the South Korean government to ensure that Micron’s principal rivals, South Korea’s Samsung Electronics and SK Hynix, did not increase their sales in China.22

    For nonaligned states, many of which are in the Global South, their interests are manifold and not easily shoehorned into a US-versus-China bifurcation. Many states in this category have generalized concerns about a world that is dominated by either Washington or Beijing, and, as such, are even more interested in hedging than are the closest US allies and partners. Their governments and business communities seek trade, investment, and access to technologies that can assist with economic development, while their consumers seek affordable and capable tech. Although China has made enormous strides with respect to technological penetration of markets in the Global South, there also is much opportunity for the United States and its allies and partners, especially given widespread popular appetite for Western ideals, messaging, and consumer-facing technologies.23
  3. Retain cooperation around trade and scientific exploration. One of the risks that is inherent in a fraught Sino-American bilateral relationship is that global public-goods provision will be weakened. Within the context of rising tensions over technological development, there are two big concerns: first, that global trade in technologically based goods and services will be harmed, and second, that global scientific cooperation will shrink. 

    An open trading system has been an ideal of the rules-based international order since 1945, built on the premise that fair competition within established trading rules is best for global growth and exchange. The US-led reforms at the end of World War II and in the early postwar period gave the world the Bretton Woods system, which established the International Monetary Fund (IMF), as well as the Marshall Plan and the General Agreement on Tariffs and Trade (GATT). Together, these reforms enabled unprecedented multi-decade growth in global trade.24 China’s accession to the WTO in 2001, which the US government supported, marked a high point, as many read China’s entry as an endorsement of a global trade regime based on liberal principles. However, since then—and for reasons having much to do with disagreements over China’s adherence to WTO trading rules—this global regime has come under significant stress. In 2023, with few signs that the Sino-American trade relationship will improve, there is significant risk of damage to the global trading system writ large.25

    Any damage done to the global trading system also risks harm to trade between the two countries, which is significant given its ongoing scale (in 2022, bilateral trade in goods measured a record $691 billion).26 Tech-based trade and investment remain significant for both countries, as illustrated by the February 2023 announcement of a $3.5-billion partnership between Ford Motor Company and Contemporary Amperex Technology Limited (CATL) to build an EV-battery plant in Michigan using CATL-licensed technology.27 A priority for US policymakers should be to preserve trade competition in tech-infused goods and services, at least for those goods and services that are not subject to national security-based restrictions and where China’s trade practices do not result in unfair advantages for its firms.

    Beyond trade, there are public-goods benefits resulting from bilateral cooperation in the S&T domain. These benefits extend to scientific research that can hasten solutions to global-commons challenges—for example, climate change. China and the United States are the two most active countries in global science, and are each other’s most important scientific-research partner.28 Any harm done to their bilateral relationship in science is likely to decrease the quality of global scientific output. Further, the benefits from cooperation also extend to creation and enforcement of international norms and ethics surrounding tech development in, for example, AI and biotechnology.
A worker conducts quality-check of a solar module product at a factory of a monocrystalline silicon solar equipment manufacturer LONGi Green Technology Co, in Xian, Shaanxi province, China December 10, 2019. REUTERS/Muyu Xu

Back to top

Major elements of the strategy

The strategy outlined in these pages has three major elements: the promotion of technologically based innovation, sometimes labeled “running faster”; the protection of strategically valuable S&T knowhow, processes, machines, and technologies; and the coordination of policies with allies and partners. This triad—promote, protect, and coordinate—is also shorthand for the most basic underlying challenge facing strategists in the US government and in the governments of US allies and partners. In the simplest terms, strategists should aim to satisfy the “right balance between openness and protection,” in the words of the National Academies of Sciences, Engineering, and Medicine.29 This strategic logic holds for both the United States and its allies and partners.

  1. Promote: The United States has been the global leader in science and tech-based innovation since 1945, if not earlier. However, that advantage has eroded, in some areas significantly, in particular since the end of the Cold War. If the United States wishes to remain the leading power in scientific research and in translating that research into transformative technologies (for military and civilian application), then the US government, in partnership with state and local governments, the private sector, and academia, will have to reposition and recalibrate its policies and investments.

    The preeminence of America’s postwar innovation ecosystem resulted from several factors, including prewar strengths across several major industries; massive wartime investments in science, industry, and manufacturing; and even larger investments made by the US government in the decades after the war to boost US scientific and technological capabilities. The 1940s through 1960s were especially important, owing to the whole-of-society effort behind prosecuting World War II and then the Cold War. The US government established many iconic S&T-focused institutions, including the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the National Aeronautics and Space Administration (NASA), and most of the country’s national laboratories (e.g., Sandia National Laboratories and Lawrence Livermore National Laboratory). It also dramatically boosted funding for science education, public-health research, and academic scientific research.30

    This system, and the enormous investments made by the US government to support it, spurred widespread and systematized cooperation among government, academic science, and the private sector. This cooperation led directly to a long list of breakthrough technologies for military and civilian purposes, and to formation of the United States’ world-leading tech hubs, Silicon Valley most prominent among them.31

    The trouble is that after the Cold War ended, “policymakers [in the US government] no longer felt an urgency and presided over the gradual and inexorable shrinking of this once preeminent system,” in particular by allowing federal spending on research and development (R&D) and education to flatline or even atrophy.32 From a peak of around 2.2 percent of gross domestic product (GDP) in the early 1960s, federal R&D spending has declined, reaching a low of 0.66 percent in 2017 before rebounding slightly to 0.76 percent in 2023.33

    Today, US competitors, including China, have figured out the secrets to growing their own innovation ecosystems (including the cultural dimensions that historically have been key to separating the United States from its competition) and are investing the necessary funding to do so. For example, several countries, especially China, have outpaced the United States in R&D spending growth. Between 1995 and 2018, China’s R&D spending grew at an astonishing 15 percent per annum, about double the rate of the next-fastest country, South Korea, and about five times that of the United States. By 2018, China’s total R&D spending (from public and private sources) was in second place behind the United States and had surpassed the total for the entire European Union.34 From the US perspective, other metrics are equally concerning. A 2021 study by Georgetown University’s Center for Security and Emerging Technology (CSET) projected that China will produce nearly twice as many STEM PhDs as the United States by 2025 (if counting only US citizens among US STEM PhD graduates, China’s lead would be roughly threefold). This projection is based, in part, on China’s government doubling its investment in STEM higher education during the 2010s.35

    The United States retains numerous strengths, including the depth and breadth of its scientific establishment, the number and size of its Big Tech firms, a robust startup economy and the venture capital to support it, numerous world-class educational institutions, a dedication to protecting intellectual property, a relatively open migration system for high-skilled workers, a diverse and massive consumer base, and its still-significant R&D investments from public and private sources.36

    In addition, over the past few years there have been encouraging signs of a shift in thinking among policymakers, away from allowing the innovation model that won the Cold War to erode further and toward increased bipartisan recognition that the federal government has a critical role to play in updating that system. As was the case with the Soviet Union, this newfound interest in strengthening the US innovation ecosystem owes much to a recognition that China is a serious strategic competitor to the United States in the technology arena.37 Several landmark pieces of legislation enacted during the Biden administration, including the CHIPS and Science Act, the Inflation Reduction Act (IRA), and the Infrastructure Investment and Jobs Act (IIJA), increased federal spending on S&T, STEM education and skills training, and various forms of infrastructure (digital and physical). These investments are concrete evidence of the degree to which the administration and much of Congress recognize the stiff challenge from China.
  2. Protect: A coherent strategy requires mechanisms to protect and defend a country’s S&T knowledge and capabilities from malign actors. Policy documents and statements from US officials over the past decade have called out the many ways in which the Chinese state orchestrates technology transfer through licit and illicit means, ranging from talent-recruitment programs and strategic mergers and acquisitions (M&A) to outright industrial espionage via cyber intrusion and other tactics.38

    On the protect side, tools include trade controls, sanctions, investment screening, and more. On the export-control side, both the Trump and Biden administrations have relied on dual-use export-control authorities to both restrict China’s access to priority technologies and prevent specific Chinese actors (those deemed problematic by the US government) from accessing US-origin technology and components.39 Investment screening has also been a popular tool; in 2018, Congress passed the bipartisan Foreign Investment Risk Review Modernization Act (FIRRMA) that strengthened and modernized the Committee on Foreign Investment in the United States (CFIUS)—an interagency body led by the Treasury Department that reviews inbound foreign investment for national security risks.40 Under the Biden administration, a new emphasis on the national security concerns associated with US outbound investment into China has arisen, with an executive order focused on screening outbound tech investments in the works for almost a year.41 On sanctions, although the United States has so far been wary of deploying them against China, the Biden administration has, in conjunction with thirty-eight other countries, imposed a harsh sanctions regime on Russia and Belarus following Russia’s unprovoked invasion of Ukraine.42

    Trade controls can be effective tools, but they need to be approached with a clear alignment between means and ends. For decades, an array of export controls and other regulations has worked to prevent rivals from accessing key technologies. However, historical experience (such as that of the US satellite industry) shows that, absent such alignment, trade controls can have massive implications for the competitiveness of US industries and, by extension, US national security.43

    Before deploying these tools, it is critical for policymakers to first identify what China is doing—both within and outside its borders—in its attempts to acquire foreign technology, an evaluation that should allow the United States to hone more targeted controls that can yield intended results. Trade controls that are too broad and ambiguous tend to backfire, as they create massive uncertainties that lead to overcompliance on the part of industry, in turn causing unintended downside consequences for economic competitiveness.

    Understanding China’s strategy for purposes of creating effective trade controls is not as difficult as it once appeared. For instance, a 2022 report from CSET compiled and reviewed thirty-five articles on China’s technological import dependencies.44 This series of open-source articles, published in Chinese in 2018, provides specific and concrete examples of Chinese S&T vulnerabilities that can be used by policymakers to assess where and how to apply trade controls. Other similar resources exist. Although the Chinese government appears to be systematically tracking and removing these materials as they receive attention, US government analysts and scholars can preserve the original sources and continue making use of them.
  3. Coordinate: The final strategy pillar is outward facing, focused on building and sustaining relationships with other countries in and around the tech strategy and policy space. This pillar might be labeled “tech diplomacy,” given the need to ensure US strategy and policy positively influences as many allies, partners, and even nonaligned states as possible, while continuing to engage China on technology-related issues. As with the other two pillars, this pillar is simple to state as a priority, but difficult to realize in practice.

    In a May 2022 speech, US Secretary of State Antony Blinken said that the administration’s shorthand formula is to “invest, align, [and] compete” vis-a-vis China.45 Here, he meant “invest” to refer to large public investments in US competitiveness, “align” to closer coordination with allies and partners on tech-related strategy and policy, and “compete” largely to geostrategic competition with China over Taiwan, the East and South China Seas, and other areas.

    Blinken’s remarks underscore the Biden administration’s priority for allies and partners to view the United States as a trusted interlocutor. When it comes to technology policy on China, the trouble lies in the execution—in particular, overcoming the tensions inherent within the “invest, align, compete” formula. After Blinken’s speech, for example, the IRA became law, which triggered a firestorm of protest among the United States’ closest transpacific and transatlantic allies. Viewing the IRA’s ample support for domestic production and manufacturing of electric vehicles and renewable-energy technologies—designed to boost the US economy and tackle climate change while taking on China’s advantages in these areas—as protectionist, the European Union (EU) went so far as to formulate a Green Deal Industrial Plan, widely seen as an industrial policy response to the IRA.46 Much of the row over the IRA resulted from the perception—real or not—that the United States had failed to properly consider allies’ and partners’ interests while formulating the legislation. In the words of one observer, “amid the difficult negotiations at home on the CHIPS Act and the IRA, allies and partners were not consulted, resulting in largely unintended negative consequences for these countries.”47

    Long-term investment by US policymakers in multilateral institutions focused on technology will be a critical aspect of any potential victory. The Biden administration is already making strides on this front through several multilateral arrangements, including the resurrection of the Quadrilateral Security Dialogue (the Quad) and the establishment of the US-EU Trade and Technology Council (TTC) and the AUKUS trilateral pact. All three of these arrangements have dedicated time and resources to specific technological issues in both the military/geopolitical and economic spheres, and all three could prove highly consequential for technology competition.

    However, history has shown that these types of arrangements are only effective as long as high-level political leadership remains involved and dedicated to the cause. Cabinet officials and other high-level leaders from all participating countries—especially the United States—will have to demonstrate continued interest in and commitment to these arrangements if they want them to produce more than a handful of documents with broad strategic visions.

Back to top

Assumptions

The strategy outlined in these pages rests on two plausible assumptions. First, this strategy assumes that China will not follow the Soviet Union into decline, collapse, and disintegration anytime soon, which, in turn, means that China should remain a significant competitor to the United States for a long time to come.

China’s leadership has studied the collapse of the Soviet Union closely and learned from it, placing enormous weight on delivering economic performance through its brand of state capitalism while avoiding the kind of reforms that Mikhail Gorbachev instituted during the 1980s, which included freer information flows, freer political discourse, and ideological diversity within the party and state—all of which Chinese leadership believes to have been key to the Soviet Union’s undoing.48 China also does not have analogous centrifugal forces that threaten an internal breakup along geographic lines as did the Soviet Union, which had been constructed from the outset as a federation of republics built upon the contours of the tsarist empire. (The Soviet Union, after all, was a union of Soviet Socialist republics scattered across much of Europe and Asia).49

These factors weigh against an assessment that China will soon collapse. Nicholas Burns, the US ambassador to China, has said recently that China is “infinitely stronger” than the Soviet Union ever was, “based on the extraordinary strength of the Chinese economy” including “its science and technology research base [and] innovative capacity.” He concluded that the Chinese challenge to the United States and its allies and partners “is more complex and more deeply rooted [than was the Soviet Union] and a greater test for us going forward.”50

A more realistic long-term scenario is one in which the United States and its allies and partners would need to manage a China that will either become stronger or plateau, rather than one that will experience a steep decline. Both variants of this scenario are worrisome, and both underscore the need to hew to the strategy outlined in this paper. A stronger China brings with it obvious challenges. A plateaued China is a more vexing case, owing to the very real possibility that Chinese leadership might conclude that, as economic stagnation portends a future decline and fall, the case for military action (e.g., against Taiwan) is more, rather than less, pressing. The strategist Hal Brands, for example, has suggested that a China that has plateaued will become more dangerous than it is now, requiring a strategy that is militarily firm, economically wise (including maintenance of the West’s advantages in the tech-innovation space), and diplomatically flexible.51

Second, the strategy outlined here assumes that relations between the United States and China will remain strained at best or, at worst, devolve into antagonism or outright hostility. In 2023, the assumption of ongoing strained relations appears wholly rational, based on a straightforward interpretation of all available diplomatic evidence.

How this strategy should shift if the United States and China were to have a rapprochement would depend greatly on the durability and contours of that shift. Even if a thaw were to reset bilateral relations to where they were at the beginning of the century (an unlikely prospect), the US interest in maintaining a first-mover advantage in technological development would remain. As reviewed in this paper, there was a long period during which the United States and China traded technologically based goods and services in a more open-ended trading regime than is currently the case. During that period, the United States operated on two presumptions: that China’s S&T capabilities were nowhere near as developed as its own, and that the US system could stay ahead owing to its many strengths compared with China’s.

The trouble with returning to this former state is that both presumptions no longer hold. China has become a near-peer competitor in science and technological development, and its innovative capabilities are considerable.

If China and the United States were to thaw their relationship, the policy question would concern the degree to which the United States would reduce its “protect” measures—the import and export restrictions, sanctions, and other policies designed to keep strategic technologies and knowhow from China, while protecting its own assets from espionage, sabotage, and other potential harms.

Back to top

Guidelines for implementation

As emphasized throughout this paper, any successful long-term strategy will require that the US government pursue policies that are internally well coordinated, are based on solid empirical evidence, and are flexible and nimble in the short run, while being attentive to longer-run trends and uncertainties. The government will need to improve its capabilities in three areas.

  1. Improved intelligence and counterintelligence: The US government will need to reassess, improve, and extend its intelligence and counterintelligence capabilities related to tech development. The Intelligence Community will need to be able to conduct ongoing, comprehensive assessments of the tech trends and uncertainties relevant to the United States’ strategic competition with China. To gauge the full range of relevant and timely information about China’s tech capabilities, the Intelligence Community will need to augment its traditional reliance on classified materials by stressing unclassified open-source material. Classified sources, which the Intelligence Community has always prioritized, do not provide a full picture of what is happening in China. Patent filings, venture-capital investment levels and patterns, scientific and technical literature, and other open sources can be rich veins of material for analysts looking to assess where China is making progress, or seeking to make progress, in particular S&T areas. The US government’s prioritization of classified material contrasts with the Chinese government’s approach. For decades, China has employed “massive, multi-layered state support” for the “monitoring and [exploitation] of open-source foreign S&T.”52 There is recognition that the US government needs to upgrade its capabilities in this respect. In 2020, the House Permanent Select Committee on Intelligence observed that “open-source intelligence (OSINT) will become increasingly indispensable to the formulation of analytic products” about China.53

    An intelligence pillar will need a properly calibrated counterintelligence element to identify where China might be utilizing its means and assets—including legal, illegal, and extralegal ones—to obtain intellectual property in the United States and elsewhere (China has a history of utilizing multiple means, including espionage, to gain IP relevant to its S&T development).54 Here, “properly calibrated” refers to how counterintelligence programs must ensure that innocent individuals, including Chinese nationals who are studying or researching in the United States, are not brought under undue or illegitimate scrutiny. At the same time, these programs must be able to identify, monitor, and then handle as appropriate those individuals who might be engaging in industrial espionage or other covert activities. The Trump administration’s China Initiative was criticized both for its name (which implied that Chinese nationals and anyone of East Asian descent were suspect) and for the perception of overzealous enforcement (the program resulted in several high-profile cases ending in dismissal or exoneration for the accused). In 2022, the Biden administration shuttered this initiative and replaced it with “a broader strategy aimed at countering espionage, cyberattacks and other threats posed by a range of countries.”55
  2. Improved foresight: Strategic-foresight capabilities assist governments in understanding and navigating complex and fast-moving external environments. Foresight offices in government and the private sector systematically examine long-term trends and uncertainties and assess how these will shape alternative futures. These processes often challenge deeply held assumptions about where the world is headed, and can reveal where existing strategies perform well or poorly.

    This logic extends to the tech space, where the US government should develop a robust foresight apparatus to inform tech-focused strategies and policies at the highest levels. The purpose of this capability would be to enhance and deepen understanding of where technological development might take the United States and the world. Such a foresight capability within the US government would integrate tech-intelligence assessments, per above, into comprehensive foresight-based scenarios about how the world might unfold in the future. The US government has impressive foresight capabilities already, most famously those provided by the National Intelligence Council (NIC). However, for a variety of reasons, including distance from the center of executive power, neither the NIC nor other foresight offices within the US government currently perform the foresight function described here. The US government should institutionalize a foresight function within or closely adjacent to the White House—for example, within the National Security Council or as a presidentially appointed advisory board. Doing so would give foresight the credibility and mandate to engage the most critical stakeholders from across the entire government and from outside of it, a model followed by leading public foresight offices around the world.56 This recommendation is consistent with numerous others put forward by experts over the past decade, which stress how the US government needs to give foresight more capabilities while bringing it closer to the office of the president.57
  3. Improved S&T strategy and policy coordination: One of the major challenges facing the US government concerns internal coordination around S&T strategy and policy. As technology is a broad and multidimensional category, the government’s activities are equally broad, covered by numerous statutes, executive orders, and administrative decisions. One of many results is a multiplicity of departments and agencies responsible for administering the many different pieces of the tech equation, from investment to development to monitoring, regulation, and enforcement. In just the area of critical technology oversight and control, for example, numerous departments including Commerce, State, Defense, Treasury, Homeland Security, and Justice, plus agencies from the Intelligence Community, all have responsibilities under various programs.58

    Moreover, the US government’s approach to tech oversight tends to focus narrowly on control of specific technologies, which leads to an underappreciation of the broader contexts in which technologies are used. A report issued in 2022 by the National Academies of Sciences, Engineering, and Medicine argued that the US government’s historic approach to tech-related risks is done through assessing individual critical technologies, defining the risks associated with each, and then attempting to restrict who can access each type of technology. Given that technologies now are “ubiquitous, shared, and multipurpose,” the National Academies asserted, a smarter approach would be to focus on the motives of bad-faith actors to use technologies and then define the accompanying risks.59 This approach “requires expertise that goes beyond the nature of the technology to encompass the plans, actions, capabilities, and intentions of US adversaries and other bad actors, thus involving experts from the intelligence, law enforcement, and national defense communities in addition to agency experts in the technology.”60

Back to top

US Secretary of State Antony Blinken meets with Chinese President Xi Jinping in the Great Hall of the People in Beijing, China, June 19, 2023. REUTERS

Major risks

There are two major sets of risks accompanying this strategy, both of which involve the potential damage that might result from failure to keep the strategic competition within acceptable boundaries. 

  1. Decoupling run amok: Overreach is one of the biggest risks associated with this strategy. Geopolitical and economic goals often conflict, and it can be difficult to determine where to draw the line. Reconciling this dilemma will be the hardest part of a coherent and effective competition strategy.

    Technology decoupling to preserve geopolitical advantages can be at odds with economic interests, as the United States is currently experiencing in the context of semiconductors. The October 7, 2022, export controls were deemed necessary for geopolitical reasons; the White House’s official rationale for the policy centered on the use of semiconductors for military modernization and human rights violations. However, limiting the ability of US companies like Nvidia, Applied Materials, KLA, and Lam Research to export their products and services to China, in addition to imposing complex compliance burdens on these firms, has the potential to undermine their ability to compete in the global semiconductor industry.

    In addition, the continued deployment of decoupling tactics like export controls can put allies and partners in a position where they feel forced to choose sides between the United States and China. On the October 7 export controls, it took months to convince the Netherlands and Japan—two key producer nations in the semiconductor supply chain whose participation is critical to the success of the controls—to get on board with US policy.61 Even now, although media reporting indicates that an agreement has been reached, no details have been made public, likely due to concerns surrounding Chinese retaliation.

    These issues are not exclusive to trade controls or protect measures. On the promote side, the IRA has also put South Korea in a difficult position with respect to EVs and related components. When the IRA was first announced, many on the South Korean side argued that its EV provisions violated trade rules, and at one point in late 2022 the South Korean government considered filing a complaint with the WTO over the issue.62 Although tensions seem to have cooled between Washington and Seoul—and the Netherlands and Japan have officially, albeit privately, agreed to join the United States on semiconductor controls—these two instances should serve as lessons for US policymakers in how to approach technology policy going forward. Policies that push allies and partners too hard to decouple from the Chinese market are likely to be met with resistance, as many (if not all) US allies have deeply woven ties with Chinese industry and often lack the domestic capabilities and resources that insulate the United States from potential harm. China is acutely aware of this, and will likely continue to exploit this narrative to convince US allies not to join US decoupling efforts. China has historically leveraged economic punishment against countries for a variety of reasons, and US policymakers should incorporate this reality into their planning to ensure that allies are not left exposed.

    Recently, government officials within the Group of Seven (G7) have been using the term “de-risking” instead of “decoupling.” The term was first used by a major public official in March 2023, when Ursula von der Leyen, president of the European Commission, called for an “open and frank” discussion with China on contentious issues.63 It appeared again in the G7 communique of May 2023, which stated that economic security should be “based on diversifying and deepening partnerships and de-risking, not de-coupling.”64 This rhetorical shift represents a recognition that full economic decoupling from China is unwise, and perhaps impossible. It is also a tacit admission that decoupling sends the wrong signals not only to China, but to the private sector in the West as well.

    In the authors’ opinion, de-risking is superior to decoupling as a rhetorical device—but changes in phrasing do not solve the underlying problem for policymakers in the United States, Europe, East Asia, and beyond. That problem is to define and then implement a coherent strategy, coordinated across national capitals, that enables them to stay a step ahead of China in developing cutting-edge technology while preventing an economically disastrous trade war with China.
  2. Harm to global governance: Another major set of risks involves the harms to global governance should the strategic competition between the United States and China continue on its current trajectory. Although the strategy outlined in these pages emphasizes, under the coordination pillar, maintenance of global governance architecture—the norms, institutions, pathways, laws, good-faith behavior, and so on that guide technology development—there is no guarantee that China and the United States, along with other important state and nonstate actors, will be able to do so given conflicting pressures to reduce or eliminate cooperative behavior. 

    Tragic outcomes of this strategic competition, therefore, would be: failure to continue cooperation regarding development of norms and standards that should guide S&T research; and failure to continue S&T research cooperation focused on solving global-commons challenges such as pandemics and climate change. 

    Any reduction in cooperation among the United States, China, and other leading S&T-research countries will harm the ability to establish norms and standards surrounding tech development in sensitive areas—for instance, in AI or biotechnology. As recent global conversations about the risks associated with rapid AI development show, effective governance of these powerful emerging technologies is no idle issue.65

    Even under the best of circumstances, however, global governance of such technologies is exceedingly difficult. For example, Gigi Kwik Gronvall, an immunologist and professor at Johns Hopkins University, has written that biotechnology development is “inherently international and cannot be controlled by any international command and control system” and that, therefore, “building a web of governance, with multiple institutions and organizations shaping the rules of the road, is the only possibility for [effective] governance.”66 By this, she meant that—although a single system of rules for governing biotechnology development is impossible to create given the speed of biotech research and the multiplicity of biotech research actors involved (private and public-sector labs, etc.) around the world—it is possible to support a “web” of governance institutions, such as the WHO, that set norms and rules. Although this system is imperfect, as she admits, it is much better than the alternative, which is to have no governance web at all. The risk of a weak or nonexistent web becomes much more real if the United States, China, and other S&T leaders fail to cooperate in strengthening it.

Back to top

Conclusions and recommendations

The arguments advanced in this paper provide an overview of the range and diversity of policy questions that must be taken into consideration when formulating strategies to compete with China in science and technology. This final section offers a set of recommendations that follow from this analysis.

  1. Restore and sustain public R&D funding for scientific and technological advancement. As noted in this paper, public investment in R&D—most critically, federal-government investment in R&D—has been allowed to atrophy since the end of the Cold War. Although private-sector investment was then, and is now, a critical component of the nation’s R&D spending, public funding is also imperative for pure scientific research (as opposed to applied research) and for funneling R&D toward ends that are in the public interest (defense, public health, etc.). Although the CHIPS and Science Act and the IRA both pledge massive increases in federal R&D investment, there is no guarantee that increased funding will be sustained over time. Less than a year after the CHIPS Act was signed into law, funding levels proposed in Congress and by the White House have fallen well short of the amounts specified in the act.67
  2. Improve and sustain STEM education and skills training across K–12, universities, community colleges, and technical schools. It is widely recognized that the United States has fallen behind peer nations in STEM education and training at all levels, from K–12 through graduate training.68 Although the Biden administration’s signature pieces of legislation, including the CHIPS Act, address this problem through increased funding for STEM education and worker-training programs, the challenge for policymakers will be to sustain interest in, and funding for, such programs well into the future, analogous to the federal R&D spending challenge. Other related problems include the high cost of higher education, driven in part by declining state funding, which pushes students into long-term indebtedness, and the need to boost participation in (and reduce stigma around) STEM-related training at community colleges and technical schools.69 Germany’s well-established, well-funded, and highly respected technical apprenticeship programs offer one model.70
  3. Craft a more diverse tech sector. A closely related challenge is to ensure that the tech sector in the United States reflects the country’s diversity, defined in terms of gender, ethnicity, class, and geography. This is a long-term challenge that has multiple roots and many different pathways to success, including public investment in education, training, and apprenticeship programs, among other things.71 Among the most challenging problems (with potentially the most beneficial solutions) are those rooted in economic geography—specifically regional imbalances in the knowledge economy, where places like Silicon Valley and Boston steam ahead and many other places fall behind. As in other areas, recent legislation, including the IRA, CHIPS Act, and IIJA, has called for billions in funding to spread the knowledge economy to a greater number of “tech hubs” around the country. As with other pieces of the investment equation, however, there is no guarantee that those billions will actually be allocated under current legislation.72
  4. Attract and retain high-skilled talent from abroad. One of the United States’ enduring strengths is its ability to attract and retain the world’s best talent, which has been of enormous benefit to its tech sector. A December 2022 survey conducted by the National Bureau of Economic Research (NBER), for example, found that between 1990 and 2016, about 16 percent of all inventors in the United States were immigrants, who, in turn, were responsible for 23 percent of all patents filed during the same period.73 Although the United States is still the top destination for high-skilled migrants, other countries have become more attractive in recent years, owing to foreign countries’ tech-savvy immigration policies and problems related to the US H-1B visa system.74
  5. Support whole-of-government strategy development. This paper stresses the need to improve strategic decision-making regarding technology through improving (or relocating) interagency processes and foresight and intelligence capabilities. One recommendation is to follow the suggestion by the National Academies of Sciences, Engineering, and Medicine, and bring a whole-of-government strategic perspective together under the guidance of the White House.75 Such a capacity would bring under its purview, or draw upon, a tech-focused foresight function, as well as an improved tech-focused intelligence apparatus (see below). The CHIPS Act contains provisions that call for development of quadrennial S&T assessments followed by technology strategy formulation, both to be conducted by the White House’s Office of Science and Technology Policy (OSTP).76 A bill that was introduced in June 2022 by Senators Michael Bennet, Ben Sasse, and Mark Warner (and reintroduced in June 2023) would, if passed, create an Office of Global Competition Analysis, the purpose of which would be to “fuse information across the federal government, including classified sources, to help us better understand U.S. competitiveness in technologies critical to our national security and economic prosperity and inform responses that will boost U.S. leadership.”77
  6. Ensure private sector firms remain at the cutting edge of global competitiveness. Policymakers will need to strengthen the enabling environment to allow US tech firms to meet and exceed business competition from around the world. Doing so will require constant monitoring of best-practice policy development elsewhere, based on the presumption that other countries are tweaking their own policies to outcompete the United States. Policymakers will need to properly recalibrate, as appropriate and informed by best practices, an array of policy instruments including labor market and immigration policies, types and level of infrastructural investments, competition policies, forms of direct and indirect support, and more. An Office of Global Competition Analysis, as referred to above, might be an appropriate mechanism to conduct the horizon scanning tasks necessary to support this recommendation.
  7. Improve S&T intelligence and counterintelligence. Consistent with the observations about shortcomings in the US Intelligence Community regarding S&T collection, analysis, and dissemination, some analysts have floated creation of an S&T intelligence capability outside the Intelligence Community itself. This capability would be independent of other agencies and departments within the government and would focus on collection and analysis of S&T intelligence for stakeholders within and outside of the US government, as appropriate.78
  8. Ensure calibrated development and application of punitive measures. As this paper has stressed at multiple points, although the US government has powerful protect measures at its disposal, implementing those measures often comes with a price, including friction with allies and partners. The US government should create an office within the Bureau of Industry and Security (BIS) at the Commerce Department to assess the economic impact (intended and unintended) of export-control policies on global supply chains, including allied and partner economies, before the controls are implemented.79 This office would be similar in intent to the Sanctions Economic Analysis Unit, recently established at the US Treasury to “research the collateral damage of sanctions before they’re imposed, and after they’ve been put in place to see if they should be adjusted.”80
  9. Build out and sustain robust multilateral institutions. This paper has stressed that any effort by the United States to succeed in its tech-focused competition with China will require that it successfully engage allies and partners in multilateral settings such as the EU-TTC, Quad, and others. As with so many other recommendations on this list, success will be determined by the degree to which senior policymakers can stay focused over the long run (i.e., across administrations) on this priority and in these multilateral forums. In addition, US policymakers might consider updating multilateral forums based on new realities. For example, some analysts have called for the creation of a new multilateral export-control regime that would have the world’s “techno-democracies…identify together the commodities, software, technologies, end uses, and end users that warrant control to address shared national security, economic security, and human rights issues.”81
  10. Engagement with China cannot be avoided. The downturn in bilateral relations between the United States and China should not obscure the need to continue engaging China on S&T as appropriate, and as opportunities arise. There are zero-sum tradeoffs involved in the strategic competition with China over technology. At the same time, there are also positive-sum elements within that competition that need to be preserved or even strengthened. As the Ford-CATL Michigan battery-plant example underscores, trade in nonstrategic technologies (EVs, batteries, etc.) benefits both countries, assuming trade occurs on a level playing field. The same is true of science cooperation, where the risk is that global scientific research on climate change and disease prevention will shrink if Sino-American scientific exchange falls dramatically. Policymakers in the United States will need to accept some amount of S&T collaboration risk with China. They will need to decide what is (and is not) of highest risk and communicate that effectively to US allies and partners around the world, the scientific community, and the general public.

Back to top

The authors would like to thank Noah Stein for his research assistance with this report.

Explore the programs

The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

The Global China Hub researches and devises allied solutions to the global challenges posed by China’s rise, leveraging and amplifying the Atlantic Council’s work on China across its fifteen other programs and centers.

1    Although China likely will not close the spending gap with the United States by the mid-2030s, current spending trajectories strongly suggest that China will have narrowed the gap considerably. See the US-China bilateral comparison in: “Asia Power Index 2023,” Lowy Institute, last visited June 13, 2023, https://power.lowyinstitute.org; “China v America: How Xi Jinping Plans to Narrow the Military Gap,” Economist, May 8, 2023, https://www.economist.com/china/2023/05/08/china-v-america-how-xi-jinping-plans-to-narrow-the-military-gap.
2    See, e.g., the arguments presented by: Bryce Barros, Nathan Kohlenberg, and Etienne Soula, “China and the Digital Information Stack in the Global South,” German Marshall Fund, June 15, 2022, https://securingdemocracy.gmfus.org/china-digital-stack/.
3    For a brief overview of China’s efforts in this regard, see: Bulelani Jili, China’s Surveillance Ecosystem and the Global Spread of Its Tools, Atlantic Council, October 17, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/chinese-surveillance-ecosystem-and-the-global-spread-of-its-tools/.
4    For background to these practices, see: Karen M. Sutter, “‘Made in China 2025’ Industrial Policies: Issues for Congress,” Congressional Research Service, March 10, 2023, https://sgp.fas.org/crs/row/IF10964.pdf; Gerard DiPippo, Ilaria Mazzocco, and Scott Kennedy, “Red Ink: Estimating Chinese Industrial Policy Spending in Comparative Perspective,” Center for Strategic and International Studies, May 23, 2022, https://www.csis.org/analysis/red-ink-estimating-chinese-industrial-policy-spending-comparative-perspective; “America Is Struggling to Counter China’s Intellectual Property Theft,” Financial Times, April 18, 2022, https://www.ft.com/content/1d13ab71-bffd-4d63-a0bf-9e9bdfc33c39; “USTR Releases Annual Report on China’s WTO Compliance,” Office of the United States Trade Representative, February 16, 2022, press release, 3, https://ustr.gov/about-us/policy-offices/press-office/press-releases/2022/february/ustr-releases-annual-report-chinas-wto-compliance.
5     On China and technical standards, see: Matt Sheehan, Marjory Blumenthal, and Michael R. Nelson, “Three Takeaways from China’s New Standards Strategy,” Carnegie Endowment for International Peace, October 28, 2021, https://carnegieendowment.org/2021/10/28/three-takeaways-from-china-s-new-standards-strategy-pub-85678.
6    China’s current (2023) AI regulations are generally seen as more developed than those in either Europe or the United States. However, analysts argue that the individual rights and corporate responsibilities to protect them, as outlined in China’s regulations, will be selectively enforced, if at all, by the state. See: Ryan Heath, “China Races Ahead of U.S. on AI Regulation,” Axios, May 8, 2023, https://www.axios.com/2023/05/08/china-ai-regulation-race.
7    The scientific community has warned that this scenario is a real risk, owing to heightened Sino-American tension. James Mitchell Crow, “US–China partnerships bring strength in numbers to big science projects,” Nature, March 9, 2022, https://www.nature.com/articles/d41586-022-00570-0.
8    Deng Xiaoping’s reforms included pursuit of “Four Modernizations” in agriculture, industry, science and technology, and national defense. In the S&T field, his reforms included massive educational and worker-upskilling programs, large investments in scientific research centers, comprehensive programs to send Chinese STEM (science, technology, engineering, and math) students abroad for advanced education and training, experimentation with foreign technologies in manufacturing and other production processes, and upgrading of China’s military to include a focus on development of dual-use technologies. Bernard Z. Keo, “Crossing the River by Feeling the Stones: Deng Xiaoping in the Making of Modern China,” Education About Asia 25, 2 (2020), 36, https://www.asianstudies.org/publications/eaa/archives/crossing-the-river-by-feeling-the-stones-deng-xiaoping-in-the-making-of-modern-china/.
9    Dan Wang, “China’s Hidden Tech Revolution: How Beijing Threatens U.S. Dominance,” Foreign Affairs, March/April 2023, https://www.foreignaffairs.com/china/chinas-hidden-tech-revolution-how-beijing-threatens-us-dominance-dan-wang.
10    “Full Text of Clinton’s Speech on China Trade Bill,” Federal News Service, March 9, 2000, https://www.iatp.org/sites/default/files/Full_Text_of_Clintons_Speech_on_China_Trade_Bi.htm.
11    “Speech by President Jiang Zemin at George Bush Presidential Library,” Ministry of Foreign Affairs of the PRC, October 24, 2002, https://perma.cc/7NYS-4REZ; G. John Ikenberry, “The Rise of China and the Future of the West: Can the Liberal System Survive?” Foreign Affairs 87, 1 (2008), https://www.jstor.org/stable/20020265.
12    Elizabeth Economy, “Changing Course on China,” Current History 102, 665, China and East Asia (2003), https://www.jstor.org/stable/45317282; Thomas W. Lippman, “Bush Makes Clinton’s China Policy an Issue,” Washington Post, August 20, 1999, https://www.washingtonpost.com/wp-srv/politics/campaigns/wh2000/stories/chiwan082099.htm.
13     Kurt M. Campbell and Ely Ratner, “The China Reckoning: How Beijing Defied American Expectations,” Foreign Affairs, February 18, 2018, https://www.foreignaffairs.com/articles/china/2018-02-13/china-reckoning.
14     “Number of Tourist Arrivals in the United States from China from 2005 to 2022 with Forecasts until 2025,” Statista, April 11, 2023, https://www.statista.com/statistics/214813/number-of-visitors-to-the-us-from-china/; and “Visa Statistics,” U.S. Department of State, https://travel.state.gov/content/travel/en/legal/visa-law0/visa-statistics.html.
15    “Direct Investment Position of the United States in China from 2000 to 2021,” Statista, January 26, 2023, https://www.statista.com/statistics/188629/united-states-direct-investments-in-china-since-2000/.
16     Robbie Gramer, “Washington’s China Hawks Take Flight,” Foreign Policy, February 15, 2023, https://foreignpolicy.com/2023/02/15/china-us-relations-hawks-engagement-cold-war-taiwan/; Sam LaGrone, “China Sends Uninvited Spy Ship to RIMPAC,” USNI News, July 18, 2014, https://news.usni.org/2014/07/18/china-sends-uninvited-spy-ship-rimpac.
17    “Findings of the Investigations into China’s Acts, Policies, and Practices Related to Technology Transfer, Intellectual Property, and Innovation Under Section 301 of the Trade Act of 1974,” Office of the United States Trade Representative, March 22, 2018, https://ustr.gov/sites/default/files/Section%20301%20FINAL.PDF. When asked in November 2018 if China was violating the 2015 cyber-espionage agreement, senior National Security Agency cybersecurity official Rob Joyce said, “it’s clear that they [China] are well beyond the bounds today of the agreement that was forced between our countries.” See: “U.S. Accuses China of Violating Bilateral Anti-Hacking Deal,” Reuters, November 8, 2018, https://www.reuters.com/article/us-usa-china-cyber/u-s-accuses-china-of-violating-bilateral-anti-hacking-deal-idUSKCN1NE02E.
18    Jacob Feldgoise, et. al, “Studying Tech Competition through Research Output: Some CSET Best Practices,” Center for Security and Emerging Technology, April 2023, https://cset.georgetown.edu/article/studying-tech-competition-through-research-output-some-cset-best-practices.
19    The World Intellectual Property Organization’s annual “Global Innovation Index,” considered the gold standard rankings assessment of the world’s tech-producing economies, ranks South Korea sixth and Japan thirteenth in the 2022 edition. “Global Innovation Index 2022. What Is the Future of Innovation-Driven Growth?” World Intellectual Property Organization, 2022, https://www.globalinnovationindex.org/analysis-indicator.
20    For a general review of the Japanese case, see: Mireya Solis, “Economic Security: Boon or Bane for the US-Japan Alliance?,” Sasakawa Peace Foundation USA, November 5–6, 2022, https://spfusa.org/publications/economic-security-boon-or-bane-for-the-us-japan-alliance/#_ftn19. For the South Korean case, see: Seong-Ho Sheen and Mireya Solis, “How South Korea Sees Technology Competition with China and Export Controls,” Brookings, May 17, 2023, https://www.brookings.edu/blog/order-from-chaos/2023/05/17/how-south-korea-sees-technology-competition-with-china-and-export-controls/.
21    Jeremy Mark and Dexter Tiff Roberts, United States–China Semiconductor Standoff: A Supply Chain under StressAtlantic Council, February 23, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/united-states-china-semiconductor-standoff-a-supply-chain-under-stress/.
22    Yang Jie and Megumi Fujikawa, “Tokyo Meeting Highlights Democracies’ Push to Secure Chip Supplies,” Wall Street Journal, May 18, 2023, https://www.wsj.com/articles/tokyo-meeting-highlights-democracies-push-to-secure-chip-supplies-54e1173d?mod=article_inline; “US Urges South Korea not to Fill Chip Shortfalls in China if Micron Banned, Financial Times Reports,” Reuters, April 23, 2023, https://www.reuters.com/technology/us-urges-south-korea-not-fill-china-shortfalls-if-beijing-bans-micron-chips-ft-2023-04-23/.
23    See, e.g., the arguments in: Matias Spektor, “In Defense of the Fence Sitters. What the West Gets Wrong about Hedging,” Foreign Affairs, May/June 2023, https://www.foreignaffairs.com/world/global-south-defense-fence-sitters.
24    On the expansion of trade under Bretton Woods during the first postwar decades, see: Tamim Bayoumi, “The Postwar Economic Achievement,” Finance & Development, June 1995, https://www.elibrary.imf.org/view/journals/022/0032/002/article-A013-en.xml
25    For a review of the history of the bilateral trade relationship, see: Anshu Siripurapu and Noah Berman, “Backgrounder: The Contentious U.S.-China Trade Relationship,” Council on Foreign Relations, December 5, 2022, https://www.cfr.org/backgrounder/contentious-us-china-trade-relationship.
26    Eric Martin and Ana Monteiro, “US-China Goods Trade Hits Record Even as Political Split Widens,” Bloomberg, February 7, 2023, https://www.bloomberg.com/news/articles/2023-02-07/us-china-trade-climbs-to-record-in-2022-despite-efforts-to-split?sref=a9fBmPFG#xj4y7vzkg
27    Neal E. Boudette and Keith Bradsher, “Ford Will Build a U.S. Battery Factory with Technology from China,” New York Times, February 13, 2023, https://www.nytimes.com/2023/02/13/business/energy-environment/ford-catl-electric-vehicle-battery.html.
28    “Tracking the Collaborative Networks of Five Leading Science Nations,” Nature 603, S10–S11 (2022), https://www.nature.com/articles/d41586-022-00571-z.
29     “Protecting U.S. Technological Advantage,” National Academies of Sciences, Engineering, and Medicine, 2022, 12, https://doi.org/10.17226/26647.
30     Robert W. Seidel, “Science Policy and the Role of the National Laboratories,” Los Alamos Science 21 (1993), 218–226, https://sgp.fas.org/othergov/doe/lanl/pubs/00285712.pdf.
31     The federal government’s hand in creating Silicon Valley is well known. For a short summary, see: W. Patrick McCray, “Silicon Valley: A Region High on Historical Amnesia,” Los Angeles Review of Books, September 19, 2019, https://lareviewofbooks.org/article/silicon-valley-a-region-high-on-historical-amnesia/. A forceful defense of the federal government’s role in creating and sustaining Silicon Valley is: Jacob S. Hacker and Paul Pierson, “Why Technological Innovation Relies on Government Support,” Atlantic, March 28, 2016, https://www.theatlantic.com/politics/archive/2016/03/andy-grove-government-technology/475626/.
32     Robert D. Atkinson, “Understanding the U.S. National Innovation System, 2020,” International Technology & Innovation Foundation, November 2020, 1, https://www2.itif.org/2020-us-innovation-system.pdf.
33     “National Innovation Policies: What Countries Do Best and How They Can Improve,” International Technology & Innovation Foundation, June 13, 2019, 82, https://itif.org/publications/2019/06/13/national-innovation-policies-what-countries-do-best-and-how-they-can-improve/; “Historical Trends in Federal R&D, Federal R&D as a Percent of GDP, 1976-2023,” American Association for the Advancement of Science, last visited June 13, 2023, https://www.aaas.org/programs/r-d-budget-and-policy/historical-trends-federal-rd.
34     Matt Hourihan, “A Snapshot of U.S. R&D Competitiveness: 2020 Update,” American Association for the Advancement of Science, October 22, 2020, https://www.aaas.org/news/snapshot-us-rd-competitiveness-2020-update.
35    Remco Zwetsloot, et al., “China is Fast Outpacing U.S. STEM PhD Growth,” Center for Security and Emerging Technology, August 2021, 2–4, https://cset.georgetown.edu/publication/china-is-fast-outpacing-u-s-stem-phd-growth/.
36    As reviewed in: Robert D. Atkinson, “Understanding the U.S. National Innovation System, 2020,” International Technology & Innovation Foundation, November 2020, https://www2.itif.org/2020-us-innovation-system.pdf.
37    See, e.g., the arguments laid out by Frank Lucas, chairman of the House Science, Space, and Technology Committee, in: Frank Lucas, “A Next-Generation Strategy for American Science,” Issues in Science and Technology 39, 3, Spring 2023, https://issues.org/strategy-american-science-lucas/.
38     “Findings of the Investigations into China’s Acts, Policies, and Practices Related to Technology Transfer, Intellectual Property, and Innovation Under Section 301 of the Trade Act of 1974”; “Threats to the U.S. Research Enterprise: China’s Talent Recruitment Plans,” Permanent Subcommittee on Investigations, Committee on Homeland Security and Governmental Affairs, US Senate, November 2019, https://www.hsgac.senate.gov/wp-content/uploads/imo/media/doc/2019-11-18%20PSI%20Staff%20Report%20-%20China’s%20Talent%20Recruitment%20Plans%20Updated2.pdf; Michael Brown and Pavneet Singh, “China’s Technology Transfer Strategy: How Chinese Investments in Emerging Technology Enable A Strategic Competitor to Access the Crown Jewels of U.S. Innovation,” Defense Innovation Unit Experimental (DIUx), January 2018, https://www.documentcloud.org/documents/4549143-DIUx-Study-on-China-s-Technology-Transfer.
39     Steven F. Hill, et. al, “Trump Administration Significantly Enhances Export Control Supply Chain Restrictions on Huawei,” K&L Gates, September 2020, https://www.klgates.com/Trump-Administration-Significantly-Enhances-Export-Control-Supply-Chain-Restrictions-on-Huawei-9-2-2020; and “Implementation of Additional Export Controls: Certain Advanced Computing and Semiconductor Manufacturing Items; Supercomputer and Semiconductor End Use; Entity List Modification,” Bureau of Industry and Security, US Department of Commerce, October 14, 2022, https://www.federalregister.gov/documents/2022/10/13/2022-21658/implementation-of-additional-export-controls-certain-advanced-computing-and-semiconductor.
40    “The Committee on Foreign Investment in the United States,” US Department of the Treasury, last visited June 13, 2023, https://home.treasury.gov/policy-issues/international/the-committee-on-foreign-investment-in-the-united-states-cfius.
41    Hans Nichols and Dave Lawler, “Biden’s Next Move to Box China out on Sensitive Tech,” Axios, May 25, 2023, https://www.axios.com/2023/05/25/china-investments-ai-semiconductor-biden-order.
42    “With Over 300 Sanctions, U.S. Targets Russia’s Circumvention and Evasion, Military-Industrial Supply Chains, and Future Energy Revenues,” US Department of the Treasury, press release, May 19, 2023, https://home.treasury.gov/news/press-releases/jy1494.
43     Tim Hwang and Emily S. Weinstein, “Decoupling in Strategic Technologies: From Satellites to Artificial Intelligence,” Center for Security and Emerging Technology, July 2022, https://cset.georgetown.edu/publication/decoupling-in-strategic-technologies/.
44     The articles were published in China’s state-run newspaper, Science and Technology Daily. Ben Murphy, “Chokepoints: China’s Self-Identified Strategic Technology Import Dependencies,” Center for Security and Emerging Technology, May 2022, https://cset.georgetown.edu/publication/chokepoints/.
45     Antony J. Blinken, “The Administration’s Approach to the People’s Republic of China,” US Department of State, May 26, 2022, https://www.state.gov/the-administrations-approach-to-the-peoples-republic-of-china/.
46     “Media Reaction: US Inflation Reduction Act and the Global ‘Clean-Energy Arms Race,’” Carbon Brief, February 3, 2023, https://www.carbonbrief.org/media-reaction-us-inflation-reduction-act-and-the-global-clean-energy-arms-race/; Théophile Pouget-Abadie, Francis Shin, and Jonah Allen, Clean Industrial Policies: A Space for EU-US Collaboration, Atlantic Council, March 10, 2023, https://www.atlanticcouncil.org/blogs/energysource/clean-industrial-policies-a-space-for-eu-us-collaboration/.
47     Shannon Tiezzi, “Are US Allies Falling out of ‘Alignment’ on China?” Diplomat, December 19, 2022, https://thediplomat.com/2022/12/are-us-allies-falling-out-of-alignment-on-china/.
48     “The Fall of Empires Preys on Xi Jinping’s Mind,” Economist, May 11, 2023, https://www.economist.com/briefing/2023/05/11/the-fall-of-empires-preys-on-xi-jinpings-mind; Kunal Sharma, “What China Learned from the Collapse of the USSR,” Diplomat, December 6, 2021, https://thediplomat.com/2021/12/what-china-learned-from-the-collapse-of-the-ussr/; Simone McCarthy, “Why Gorbachev’s Legacy Haunts China’s Ruling Communist Party,” CNN, August 31, 2022, https://www.cnn.com/2022/08/31/china/china-reaction-mikhail-gorbachev-intl-hnk/index.html.
49     For a review of the complex history of the construction and deconstruction of the Soviet Union, see: Serhii Plokhy, “The Empire Returns: Russia, Ukraine and the Long Shadow of the Soviet Union,”Financial Times, January 28, 2022, https://www.ft.com/content/0cbbd590-8e48-4687-a302-e74b6f0c905d.
50     Phelim Kine, “China ‘Is Infinitely Stronger than the Soviet Union Ever Was,’” Politico, April 28, 2023, https://www.politico.com/newsletters/global-insider/2023/04/28/china-is-infinitely-stronger-than-the-soviet-union-ever-was-00094266.
51     Hal Brands, “The Dangers of China’s Decline,” Foreign Policy, April 14, 2022, https://foreignpolicy.com/2022/04/14/china-decline-dangers/.
52     Tarun Chhabra, et al., “Open-Source Intelligence for S&T Analysis,” Center for Security and Emerging Technology (CSET), Georgetown University Walsh School of Foreign Service, September 2020, https://cset.georgetown.edu/publication/open-source-intelligence-for-st-analysis/.
53     A summary of and link to the committee’s redacted report is in: Tia Sewell, “U.S. Intelligence Community Ill-Prepared to Respond to China, Bipartisan House Report Finds,” Lawfare, September 30, 2020, https://www.lawfareblog.com/us-intelligence-community-ill-prepared-respond-china-bipartisan-house-report-finds.
54     William Hannas and Huey-Meei Chang, “China’s Access to Foreign AI Technology,” Center for Security and Emerging Technology (CSET), Georgetown University Walsh School of Foreign Service, September 2019, https://cset.georgetown.edu/publication/chinas-access-to-foreign-ai-technology/.
55     Ellen Nakashima, “Justice Department Shutters China Initiative, Launches Broader Strategy to Counter Nation-State Threats,” Washington Post, February 23, 2022, https://www.washingtonpost.com/national-security/2022/02/23/china-initivative-redo/.
56     Tuomo Kuosa, “Strategic Foresight in Government: The Cases of Finland, Singapore, and the European Union,” S. Rajaratnam School of International Studies, Nanyang Technological University, 43, https://www.files.ethz.ch/isn/145831/Monograph19.pdf.
57     For a review, including a summary of such recommendations, see: J. Peter Scoblic, “Strategic Foresight in U.S. Agencies. An Analysis of Long-term Anticipatory Thinking in the Federal Government,” New America, December 15, 2021, https://www.newamerica.org/international-security/reports/strategic-foresight-in-us-agencies/.
58     See, for example: Marie A. Mak, “Critical Technologies: Agency Initiatives Address Some Weaknesses, but Additional Interagency Collaboration Is Needed,” General Accounting Office, February 2015, https://www.gao.gov/assets/gao-15-288.pdf.
59     “Protecting U.S. Technological Advantage,” 97.
60     Ibid.
61    Toby Sterling, Karen Freifeld, and Alexandra Alper, “Dutch to Restrict Semiconductor Tech Exports to China, Joining US Effort,”Reuters, March 8, 2023, https://www.reuters.com/technology/dutch-responds-us-china-policy-with-plan-curb-semiconductor-tech-exports-2023-03-08/.
62    Troy Stangarone, “Inflation Reduction Act Roils South Korea-US Relations,” Diplomat, September 20, 2022, https://thediplomat.com/2022/09/inflation-reduction-act-roils-south-korea-us-relations/; “S. Korea in Preparation for Legal Disputes with U.S. over IRA,” Yonhap News Agency, November 3, 2022, https://en.yna.co.kr/view/AEN20221103004500320.
63    “Speech by President von der Leyen on EU-China Relations to the Mercator Institute for China Studies and the European Policy Centre,” European Commission, March 30, 2023, https://ec.europa.eu/commission/presscorner/detail/en/speech_23_2063.
64    “G7 Hiroshima Leaders’ Communiqué,” White House, May 20, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.
65    See, e.g.: Kevin Roose, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” New York Times, May 30, 2023, https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html.
66    Gigi Kwik Gronvall, “Managing the Risks of Biotechnology Innovation,” Council on Foreign Relations, January 30, 2023, 7, https://www.cfr.org/report/managing-risks-biotechnology-innovation.
67     Madeleine Ngo, “CHIPS Act Funding for Science and Research Falls Short,” New York Times, May 30, 2023, https://www.nytimes.com/2023/05/30/us/politics/chips-act-science-funding.html; Matt Hourihan, Mark Muro, and Melissa Roberts Chapman, “The Bold Vision of the CHIPS and Science Act Isn’t Getting the Funding It Needs,” Brookings, May 17, 2023, https://www.brookings.edu/blog/the-avenue/2023/05/17/the-bold-vision-of-the-chips-and-science-act-isnt-getting-the-funding-it-needs/.
68    See, e.g.: Gabrielle Athanasia and Jillian Cota, “The U.S. Should Strengthen STEM Education to Remain Globally Competitive,” Center for Strategic and International Studies, April 1, 2022, https://www.csis.org/blogs/perspectives-innovation/us-should-strengthen-stem-education-remain-globally-competitive.
69     On per-student university funding at state level, see: Mary Ellen Flannery, “State Funding for Higher Education Still Lagging,” NEA Today, October 25, 2022, https://www.nea.org/advocating-for-change/new-from-nea/state-funding-higher-education-still-lagging
70    Matt Fieldman, “5 Things We Learned in Germany,” NIST Manufacturing Innovation Blog, December 14, 2022, https://www.nist.gov/blogs/manufacturing-innovation-blog/5-things-we-learned-germany.
71    For a review, see: Peter Engelke and Robert A. Manning, Keeping America’s Innovative EdgeAtlantic Council, April 2017, https://www.atlanticcouncil.org/in-depth-research-reports/report/keeping-america-s-innovative-edge-2/.
72    To date, Congress has allocated only 5 percent of the funds called for in the piece of the CHIPS Act that funds the tech hubs. Madeleine Ngo, “CHIPS Act Funding for Science and Research Falls Short,” New York Times, May 30, 2023, https://www.nytimes.com/2023/05/30/us/politics/chips-act-science-funding.html; Mark Muro, et al., “Breaking Down an $80 Billion Surge in Place-Based Industrial Policy,” Brookings, December 15, 2022, https://www.brookings.edu/blog/the-avenue/2022/12/15/breaking-down-an-80-billion-surge-in-place-based-industrial-policy/.
73    Shai Bernstein, et al., “The Contribution of High-Skilled Immigrants to Innovation in the United States,” National Bureau of Economic Research, December 2022, 3, https://www.nber.org/papers/w30797.
74    Miranda Dixon-Luinenburg, “America Has an Innovation Problem. The H-1B Visa Backlog Is Making It Worse,” Vox, July 13, 2022, https://www.vox.com/future-perfect/23177446/immigrants-tech-companies-united-states-innovation-h1b-visas-immigration.
75    “Protecting U.S. Technological Advantage,” 98–99.
76    Matt Hourihan, “CHIPS And Science Highlights: National Strategy,” Federation of American Scientists, August 9, 2022, https://fas.org/publication/chips-national-strategy/.
77     “Press Release: Bennet, Sasse, Warner Unveil Legislation to Strengthen U.S. Technology Competitiveness,” Office of Michael Bennet, June 9, 2022, https://www.bennet.senate.gov/public/index.cfm/2022/6/bennet-sasse-warner-unveil-legislation-to-strengthen-u-s-technology-competitiveness.
78     Tarun Chhabra, et al., “Open-Source Intelligence for S&T Analysis,” Center for Security and Emerging Technology (CSET),Georgetown University Walsh School of Foreign Service, September 2020, https://cset.georgetown.edu/publication/open-source-intelligence-for-st-analysis/.
79     Emily Weinstein, “The Role of Taiwan in the U.S. Semiconductor Supply Chain Strategy,” National Bureau of Asian Research, January 21, 2023, https://www.nbr.org/publication/the-role-of-taiwan-in-the-u-s-semiconductor-supply-chain-strategy/.
80    Daniel Flatley, “US Treasury Hires Economists to Study Consequences of Sanctions,” Bloomberg, May 17, 2023, https://www.bloomberg.com/news/articles/2023-05-18/us-treasury-hires-economists-to-study-consequences-of-sanctions?sref=a9fBmPFG.
81    Kevin Wolf and Emily S. Weinstein, “COCOM’s daughter?” World ECR, May 13, 2022, 25, https://cset.georgetown.edu/wp-content/uploads/WorldECR-109-pp24-28-Article1-Wolf-Weinstein.pdf.

The post Global Strategy 2023: Winning the tech race with China appeared first on Atlantic Council.

]]>
The 5×5—Cyber conflict in international relations: A scholar’s perspective https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-cyber-conflict-in-international-relations-a-scholars-perspective/ Tue, 20 Jun 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=654086 Leading scholars provide insights on cyber conflict’s role in international relations, how the topic can best be taught to students, and how scholars and policymakers can better incorporate each other’s perspectives.

The post The 5×5—Cyber conflict in international relations: A scholar’s perspective appeared first on Atlantic Council.

]]>
This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Over the past decade, scholarly debate over cyber conflict’s place in international relations has evolved significantly. The idea that cyber tools would fundamentally change the nature of war and warfare has largely given way to the view that cyber conflict is merely a different way of doing the same old things, one primarily suited to an intelligence contest. Other, less settled questions range from whether cyber operations are useful tools of signaling to whether these operations lead to escalation. These unsettled questions remain active in the scholarly literature and, critically, inform policymaking approaches.

We brought together a group of leading scholars to provide insights on cyber conflict’s role in international relations, how the topic can best be taught to students, and how scholars and policymakers can better incorporate each other’s perspectives.

#1 What, in your opinion, is the biggest misconception about cyber conflict’s role in international relations theory?

Andrew Dwyer, lecturer in information security, Department of Information Security, Royal Holloway, University of London; steering committee lead, Offensive Cyber Working Group

“[The biggest misconception is] that cyberspace is malleable and controllable. The environment is often presented tangentially and, when it is, it is often about how people use the terrains of computation. I think that a lack of attention on how the environment ‘shapes’ people is one of the greatest missing parts of international relations thought. Simply, the environment and terrain have much more impact than is typically accounted for.” 

Melissa Griffith, lecturer in technology and national security, Johns Hopkins University School of Advanced International Studies (SAIS) and the Alperovitch Institute for Cybersecurity Studies; non-resident research fellow, University of California, Berkeley’s Center for Long-Term Cybersecurity (CLTC)

“Much of the scholarship focused on the intersection between cyber conflict and international relations theory has concentrated on capturing the nature of the evolving cyber threat. This has led, in turn, to ongoing and vibrant debates over whether (a) deterrence strategies are feasible or valuable, (b) cyberspace favors the offense or defense, (c) cyber operations are useful tools for coercion, (d) cyberspace is escalatory, or (e) strategic competition in cyberspace is best understood as an intelligence contest, for example. While these are important areas of focus, they have previously overshadowed other lines of inquiry, such as why we see variation in how states respond in practice, a question that requires leveraging international relations theories beyond those focused on grappling with what best captures the dynamics of this new threat space as a whole.”

Richard Harknett, professor & director, School of Public and International Affairs (SPIA); chair, Center for Cyber Strategy and Policy (CCSP), University of Cincinnati

“[The biggest misconception is] that the most salient impact of cyber operations should be in conflict; that is, the equivalent of armed attack and warfighting. Much of cybersecurity studies itself has focused on the construct of cyber war, and thus international relations theory has primarily treated ‘cyber’ as another form of war, when the majority of state cyber activity is actually a strategic attempt to gain relative power via an alternative to war. I argue from a realist-structuralist perspective that the most fascinating theoretical question is the interplay between states’ struggle for autonomy and the organizing principle of interconnectedness that defines the cyber strategic environment.”

Jenny Jun, research fellow, CyberAI Project, Center for Security and Emerging Technology; nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; Ph.D. candidate, Department of Political Science, Columbia University

“It is much more useful to think that conflict and competition have cyber dimensions to them, rather than to think that cyber conflict occurs in isolation.” 

Jon Lindsay, associate professor, School of Cybersecurity and Privacy, Sam Nunn School of International Affairs, Georgia Institute of Technology

“The biggest misconception remains that cyber operations are a revolution in military affairs akin to the invention of nuclear weapons. Cyber ‘conflict’ is better understood as the digital dimension of intelligence competition, between both state and nonstate competitors, which is an increasingly important and still understudied dimension of international relations.” 

Michael Poznansky, associate professor, Strategic & Operational Research Department, US Naval War College; core faculty member, Cyber & Innovation Policy Institute

Disclaimer: The opinions expressed below are the author’s alone and do not represent those of the U.S. Naval War College, the Department of Navy, the Department of Defense, or any government entity. 

“One potential misconception is that we need entirely new theoretical frameworks to understand cyber conflict. Are there distinctive attributes of cyberspace that should give us pause before unthinkingly applying existing international relations theories to it? You bet. But the real task—which many have been doing and are continuing to do—is to figure out where we can apply existing theories, perhaps with certain modifications, and where novel frameworks are genuinely needed.”

#2 What would you like to see change about how cyber conflict is widely taught?

Dwyer: “As much as there is frequent discussion about ‘interdisciplinarity’ in the study of cyber conflict, all too often we teach through and in silos. That is, we teach ‘from’ an angle, whether that be international relations, computer science, psychology, and so on. I think this does a disservice to the study of cyber conflict. I am not calling for a wholly radical empiricism here, but for one that is less grounded in theory as a starting place for exploration.”

Griffith: “Notably, in this field, perhaps far more so than others, there is no uniform or widely pursued approach across classrooms, as pointed out by Herr, Laudrian, and Smeets in ‘Mapping the Known Unknowns of Cybersecurity Education‘. That said, students entering this field armed with social science and policy leaning coursework should be comfortable engaging with technical and private sector reporting alongside academic, government, and legal documents. At risk of straying beyond the focus of this topic (i.e., theory), I favor introducing students first to core technical foundations and operational realities—what is cyberspace and how has it evolved; how, when, and which groups hack; and how, where, and when does defense play out—before turning to policy, strategy, or theoretical debates. In my experience, this approach allows subsequent discussions of systemic, international, national, and subnational questions to be firmly grounded in the realities of the space.” 

Harknett: “Cybersecurity is not a technical problem, but a political, economic, organizational, and behavioral challenge in a technically fluid environment. Thus, how cyber insecurity can be reduced and state competition in and through cyberspace can be stabilized should be taught from multiple perspectives across the computing and social sciences and humanities. Basically, a more multidisciplinary integrated, rather than segmented, approach to courses and curriculum.” 

Jun: “There should be a greater effort to integrate literature on cyber conflict as part of bigger international relations themes such as coercion, signaling, trade, etc., and move away from viewing dynamics in cyberspace as monolithic. In many international relations syllabi, cyber conflict often appears at the very end (if at all) in about week thirteen as a standalone module. Often, the discussion question then becomes, “To what extent is cyber different from all of the traditional stuff we learned so far?” This not only leads to overgeneralizations about cyberspace and cyber conflict, but also nudges students into viewing cyber as something separate and distinct from other major themes and dynamics in international relations.” 

Lindsay: “Two things that would improve cybersecurity education would be 1) to situate it in the history of intelligence and covert action, and 2) to give more attention to the political economy of cyberspace, which fundamentally shapes the dynamics of cyber conflict.” 

Poznansky: “My hunch is that cyber conflict is often included in many international relations courses as part of a module on emerging technologies alongside space, autonomous systems, quantum, and so forth. Because cyberspace has relevance for almost all aspects of modern statecraft—warfighting, coercion, commerce, diplomacy—a better approach may be to consciously integrate it into modules on all these broader topics. Stand-alone courses also have high upside by allowing for a deep dive, but infusing cyber throughout discussions of major international relations concepts would offer a better foundation.”

#3 What is a piece of literature on cyber conflict theory that you recommend aspiring policymakers read closely and why?

Dwyer: “I think one of the best and underacknowledged written pieces is by JD Work, ‘Balancing on the Rail – considering responsibility and restraint in the July 2021 Iran Railways incident.’ In this piece, Work examines an incident on Iranian railways in July 2021. The explication of responsibility and restraint in offensive cyber operations is a must-read for anyone interested in the area.” 

Griffith: “Whether or not readers agree with them, Michael Fischerkeller, Emily Goldman, and Richard Harknett’s Cyber Persistence Theory (2022) sets the stage for a productive and ongoing theoretical debate over the structural conditions animating cyberspace. Though an exercise in theory development rather than policy prescription, the book is not merely of interest to academics. Echoes of the underlying logic can be found animating US Cyber Command’s Persistent Engagement and the UK National Cyber Force’s recently released ‘Responsible Cyber Power in Practice,’ for example.”

Harknett: “As Alexander George correctly wrote, to bridge the gap between policy and theory it is the theoretician who must cross over the bridge to meet policymakers on their own turf. Two recent books that do a good job at this are Max Smeets’ No Shortcuts: Why States Struggle to Develop a Military Cyber-Force and a just-released edited volume from Smeets and Robert Chesney, Deter, Disrupt, or Deceive, which examines the debate between those who posit cyberspace as strategic competition and those who view it as an intelligence contest and thus apply research from intelligence studies. Misconceiving this fundamental categorization would have a profound impact on policy development, and thus grappling with the difference between the two perspectives is important.”

Jun: “Aspiring policymakers should be familiar with the arguments made in Cyber Persistence Theory by Goldman, Fischerkeller, and Harknett, as well as the back-and-forth debate leading up to the publication of this book in various journals and opinion pieces. Ideas laid out in this book embody much of the thinking behind the 2018 US government pivot towards Persistent Engagement and Defend Forward away from a strategy based on deterrence by punishment. Reading the book as well as the debate around it will allow an aspiring policymaker to trace how certain characterizations of cyberspace and its functions will lead to corresponding theoretical predictions, and how such assessments are translated into strategy documents by various agencies.” 

Lindsay: “I highly recommend the new volume by Robert Chesney and Max Smeets exploring the debate over cyber as an intelligence contest or something else. I also recommend that international relations scholars become more familiar with the Workshop on the Economics of Information Security community, which produces fascinating papers every year.” 

Poznansky: “I am going to cheat and highlight two. First is an article by Jordan Branch looking at how the military’s use of familiar metaphors to understand and describe cyberspace affected investments and policy decisions. Branch shows that the comparisons we invoke to understand new phenomena have real-world impacts. Second is a new book by Erica Lonergan and Shawn Lonergan on the dynamics of escalation in cyberspace. It tackles one of the most pressing issues in cyber conflict in a way that appeals to scholars and practitioners alike.”


#4 How has the theory of cyber conflict evolved in the last five years and where do you see the field evolving in the next five years?

Dwyer: “Undoubtedly, the greatest transformation has been the demise of ‘cyber war’ and ‘cyber weapons’ in both theory and practice. This has steadily been replaced (albeit over much more than the past five years) by cyber conflict as an ‘intelligence contest.’ In many ways, this is a welcome development. For the next five years, one might ask what then is distinct about cyber conflict; is it simply a transplant of conventional intelligence-related activity with new tools? I would wager not, and I hope that the cyber conflict studies community examines the role that technology plays that does not simply reduce computation to a tool with none of its own agency.” 

Griffith: “Two significant shifts stand out. One of the biggest was the pivot away from the early focus on war toward a recognition of the diversity of activity that occurs in the absence of and below the threshold of war. In the process, the theories and disciplines cyber conflict scholars brought to bear expanded beyond security studies approaches, which had largely dominated the field, to increasingly include intelligence studies, history, economics, law, etc. In the next five years, I hope to see that aperture continue to widen as we continue to move beyond those early ‘cyber war’ framings to an array of questions stemming from a diversity of disciplines and examining a greater diversity of countries.” 

Harknett: “Along with the work above, Ben Buchanan’s The Hacker and the State and Daniel Moore’s Offensive Cyber Operations have begun to examine the operational space as it is, rather than how people thought it would be. I think there is a significant pivot away from the cyber war construct occurring. Of course, my own bias is that Cyber Persistence Theory, as presented by myself, Emily Goldman, and Michael Fischerkeller, offers a foundational piece of theory that explains a lot of the shifting in state strategy and behavior. I think the utility of the constructs of initiative persistence, campaigning, and strategic competition will garner debate and may be affirmed or challenged as further research with this focus develops.”

Jun: “In the past five years, there has been a shift away from efforts to study cyber deterrence to focus on the dynamics of cyber incidents and/or campaigns below the threshold of armed conflict that occur on a regular basis. The field is also becoming more methodologically diverse. In the next five years, the field is likely to focus on getting at the nuances of cyber activity occurring below the threshold of armed conflict. The scholarly community may seek to answer questions such as: when a state takes certain offensive or defensive actions in cyberspace, what do these actions signal, and how are they interpreted on the receiving side? How do we measure or evaluate the effectiveness of cyber campaigns? As other states acquire cyber capabilities and respond to cyber threats, what accounts for how their cyber strategies evolve?” 

Lindsay: “In the last five years, the field has taken a decidedly empirical turn. Cyber is no longer an emerging technology. It has emerged. We have decades of data to explore. This empirical turn complements the theoretical emphasis on intelligence that I mentioned above.” 

Poznansky: “There has been an explosion of work over the last few years devoted to better understanding what exactly cyberspace represents. Is it yet another arena of warfare with some new bells and whistles or is it more akin to an intelligence contest? How we understand the nature of cyberspace has major implications for how we theorize cyber conflict and, equally important, what sorts of policy implications we arrive at. There is much more to be done here.”

#5 How can scholars and policymakers of cyber conflict better incorporate perspectives from each other’s work?

Dwyer: “This is by far the hardest question; however, it is about understanding the needs and goals of both academics and policymakers. This simply requires 1) a firm commitment and foundation from policymakers to fund critical social science and humanities work that can sustain positive engagement and trust building; 2) recognition and support for academics in the translation of their work and impact in ways that are visible to their institutions; and 3) for academics not to enter a room with preconceived notions of the solutions to policymakers’ problems.”

Griffith: “There are a variety of models at our disposal, but one approach of note is on full display in Robert Chesney and Max Smeets’ recent edited volume, Deter, Disrupt, or Deceive, which explicitly puts authors who disagree—and who spearhead emerging schools of thought—in direct conversation with each other. This volume represents the culmination of roughly four years of formal and informal debate and has actively sought to continue the conversation through an ongoing, global series of workshops in the wake of its publication. Another model can be found in the field-building work of the Cyber Conflict Studies Association in the United States and the European Cyber Conflict Research Initiative in Europe.”

Harknett: “Again, the bridge between policy and theory has never been easy to traverse, but one essential element is adopting an agreed-upon lexicon. There is, currently, an interesting phenomenon in which the UK National Cyber Force’s Responsible Cyber Power in Practice document and the US Defense Department’s approaches of cyber persistent engagement and defend forward, as well as the broader 2023 US National Cybersecurity Strategy, align with the logic of initiative persistence and the structural reasoning of cyber persistence theory, with growing focus on continuous campaigns and seizing the initiative, rather than legacy constructs such as deterrence threats. Although full lexicon consensus has yet to solidify, it will be interesting to observe whether it occurs over time.”

Jun: “[Scholars and policymakers of cyber conflict can better incorporate perspectives from each other’s work with] more frequent conversations that raise good new policy-relevant research questions, efforts to ground theoretical and empirical research in what is actually going on, and efforts to turn conclusions from scholarly analysis into actionable policy agendas.” 

Lindsay: “This question is tricky because there are several different groups on either side of the gap, and it is important for all of them to talk. On the policy side, there are government policymakers and intelligence professionals, but also the hugely important commercial sector. And on the academic side you have international relations scholars, computer scientists, and many other social scientists and engineers working in related areas. Cybersecurity is a pretty wicked interdisciplinary problem.” 

Poznansky: “For scholars, it is valuable to remain open to the possibility that many of the things we often bracket, in part because they can be hard to measure—bureaucratic politics, organizational culture, leadership, and so forth—matter. These factors probably explain more about cyber conflict than we care to admit. For practitioners, remaining open-minded to debates that might sound purely academic in nature at first blush but in fact have immense practical relevance is also valuable. Whether cyberspace is mainly an arena for intelligence competition or warfighting—a debate, as mentioned, that is happening right now—matters for the prospect of developing norms, the utility of coercion, the dynamics of escalation, and more.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Cyber conflict in international relations: A scholar’s perspective appeared first on Atlantic Council.

]]>
Activists and experts assemble in Costa Rica to protect human rights in the digital age https://www.atlanticcouncil.org/content-series/360os/activists-and-experts-assemble-in-costa-rica-to-protect-human-rights-in-the-digital-age/ Wed, 07 Jun 2023 20:21:18 +0000 https://www.atlanticcouncil.org/?p=652275 Our Digital Forensic Research Lab is convening top tech thinkers and human-rights defenders at RightsCon to collaborate on an agenda for advancing rights globally.

The post Activists and experts assemble in Costa Rica to protect human rights in the digital age appeared first on Atlantic Council.

]]>
Will the world’s human-rights defenders be able to match the pace of quickly moving technological challenges arising from artificial intelligence, information wars, and more?

Rights activists, tech leaders, and other stakeholders are meeting at RightsCon Costa Rica on June 5-8 to collectively set an agenda for advancing human rights in this digital age.

Our experts at the Digital Forensic Research Lab are coordinating part of that effort, with a slate of RightsCon events as part of their 360/Open Summit: Around the World global programming. Below are highlights from the events at RightsCon, which cover digital frameworks in Africa, disinformation in Ukraine, online harassment of women globally, and more.


The latest from San José

  • Rethinking transparency reporting
  • Human rights must be central in the African Union’s Digital Transformation Strategy
  • Day two wraps with a warning about dangerous threats, from militant accelerationism to violence toward women
  • What’s behind today’s militant accelerationism?
  • The digital ecosystem’s impact on women’s political participation
  • Day one wraps with recommendations for Africa’s digital transformation, Venezuela’s digital connectivity, and an inclusionary web
  • What does a trustworthy web look like?
  • Mapping—and addressing—Venezuela’s information desert
  • Where open-source intelligence meets human-rights advocacy


Rethinking transparency reporting

On Day 3 of RightsCon Costa Rica, Rose Jackson, director of the DFRLab’s Democracy & Tech Initiative, joined panelists Frederike Kaltheuner, director for technology and human rights at Human Rights Watch, and David Greene, civil liberties director at the Electronic Frontier Foundation, for a panel on rethinking transparency reporting. The discussion was led and moderated by Gemma Shields, Online Safety Policy Lead at the United Kingdom’s Office of Communications (Ofcom).

Shields opened the session by describing the online safety bill currently making its way through the UK parliament and the role of Ofcom in its implementation. The bill will give new powers to Ofcom to test mandatory platform transparency reporting requirements. Through these efforts, Ofcom hopes that “good, effective meaningful transparency reporting might encourage proactive action from the platforms,” Shields explained.

The panelists discussed what will be central to the implementation of the online safety bill, including what effective transparency reporting looks like. Kaltheuner emphasized the complexity of defining meaningful transparency when the use cases vary across end users, regulators, civil society, journalists, and academics. Greene underscored the importance of centering user needs in the conversation and the need to tailor reporting mandates to specific platforms.

Jackson noted that it is a strategic imperative for the UK government to consult experts from the global majority and consider how regulations and norms could be potentially used for harm by non-democratic actors. As Jackson put it, “what happens in the most unprotected spaces is the beta test for what will show up in your backyard.” She also highlighted the importance of global civil society engaging with the UK Online Safety Bill and European transparency regulations, such as the Digital Services Act, because these policies are first movers in codifying more regulation, and future policies will refer back to these efforts.

Human rights must be central in the African Union’s Digital Transformation Strategy

The DFRLab gathered stakeholders from the policy-making, democracy, rights, and tech communities across the African continent to discuss the African Union’s Digital Transformation Strategy. Participants compared notes and identified opportunities for increasing the strategy’s human-rights focus as it approaches its mid-mandate review. Participants also agreed that trusted conveners, such as watchdog agencies within national governments, can play a critical facilitating role in ensuring effective communication between experts, users, and civil society on one hand and policymakers and elected officials on the other. Discussion of particular concerns with the Strategy or recommendations to increasingly center human rights in it will be continued in future gatherings.

Day two wraps with a warning about dangerous threats, from militant accelerationism to violence toward women

The DFRLab kicked off day two at RightsCon with a conversation on how Russian information operations, deployed ahead of the full-scale invasion of Ukraine, were used to build false justifications for the war, deny responsibility for the war of aggression, and mask Russia’s military build-up. The panel also highlighted two DFRLab reports, released in February 2023, that examine Russia’s justifications for the war and Russia’s attempts to undermine Ukraine’s resistance and support from the international community.

Read more: “Mapping the last decade of Russia’s disinformation and influence campaign in Ukraine” (transcript, Atlantic Council, June 8, 2023). Since its full-scale invasion of Ukraine, Russia has continued its information operations, targeting more than just Ukraine, say speakers at a RightsCon event hosted by the Digital Forensic Research Lab.

While at RightsCon, the DFRLab participated in a discussion on militant accelerationism, its impact on minority communities, and how bad actors can be held accountable. The event, hosted by the United Kingdom’s Office of Communications and Slovakia’s Council of Media Services, featured panelists who discussed the ways in which policy can hold all voices, including those of the powerful, accountable. During the panel, DFRLab Research Fellow Meghan Conroy discussed how such violent narratives have become increasingly commonplace in some American ideologies and how extremist individuals and groups sympathetic to these narratives have been mobilized.

To close out the day, the DFRLab and the National Democratic Institute co-hosted a panel featuring global experts from civil society, government, and industry on how the threat of violence and harassment online has impacted women’s ability to participate in politics. As the panelists noted, abuse suffered online is meant strictly to intimidate and silence those who want to get involved; it is therefore all the more important that these women, along with those already established in political life, stand up and speak out, serving as role models and protecting diversity and equity in politics, tech, and beyond.

What’s behind today’s militant accelerationism?

By Meghan Conroy

While at RightsCon, I—a DFRLab research fellow and co-founder of the Accelerationism Research Consortium—joined an event hosted by the UK Office of Communications and Slovakia’s Council of Media Services on militant accelerationism.

My co-panelists and I provided an overview of militant accelerationism and an explanation of the marginalized groups that have been targets of militant accelerationist violence. I discussed accelerationist narratives that have not only permeated mainstream discourse but have also mobilized extremists to violence. Hannah Rose, research fellow and PhD candidate at King’s College London’s International Centre for the Study of Radicalization, zeroed in on the role of conspiracy theories in enabling the propagation of these extreme worldviews.

Stanislav Matějka, head of the Analytical Department at the Slovakian Council of Media Services, delved into the October 2022 attack in Bratislava. He flagged the role of larger, more mainstream platforms as well as filesharing services in enabling the spread of harmful content preceding the attack. Murtaza Shaikh, principal at the UK Office of Communications for illegal harms and hate and terrorism, highlighted the office’s work on the May 2022 attack in Buffalo, New York. He noted that these attacks result, in part, from majority populations framing themselves as under threat from minority populations, and then taking up arms against those minority populations.

Attendees then broke into groups to discuss regulatory solutions and highlight obstacles that may stand in the way of those solutions’ implementation or effectiveness. Key takeaways included the following:

  • Powerful voices need to be held to account. Politicians, influencers, and large platforms have played an outsized role in enabling the mainstreaming and broad reach of these worldviews.
  • Bad actors will accuse platforms and regulators of censorship, regardless of the extent to which content is moderated. As aforementioned, they’ll often position themselves as victims of oppression, and doing so in the context of content moderation policies is no different—even if the accusations are not rooted in reality.
  • Regulators must capitalize on existing expertise. A host of experts who monitor these actors, groups, and narratives across platforms, as well as their offline activities, can help regulators and platforms craft creative, adaptive, and effective policies to tackle the nebulous set of problems linked to militant accelerationism.

This conversation spurred some initial ideas that are geared toward generating more substantial discussion. Familiarizing newcomers with understudied and misunderstood concepts, like militant accelerationism, is of the utmost importance to permit more effective combatting of online harms and their offline manifestations—especially those that have proven deadly.

Meghan Conroy is a US research fellow with the Atlantic Council’s Digital Forensic Research Lab.

The digital ecosystem’s impact on women’s political participation

By Abigail Wollam

The DFRLab and the National Democratic Institute (NDI) co-hosted a panel that brought together four global experts from civil society, government, and industry to discuss a shared and prevalent issue: The threat of digital violence and harassment that women face online, and the impact that it has on women’s participation in political life.

The panel was facilitated by Moira Whelan, director for democracy and technology at NDI; she opened the conversation by highlighting how critical these conversations are, outlining the threat to democracy posed by digital violence. She noted that as online harassment towards women becomes more prevalent, women are self-censoring and removing themselves from online spaces. “Targeted misogynistic abuse is designed to silence voices,” added panelist Julie Inman Grant, the eSafety commissioner of Australia.

Both Neema Lugangira (chairperson for the African Parliamentary Network on Internet Governance and member of parliament in Tanzania) and Tracy Chou (founder and chief executive officer of Block Party) spoke about their experiences with online harassment and how those experiences spurred their actions in the space. Lugangira found, through her experience as a female politician in Tanzania, that the more outspoken or visible a woman is, the more abuse she gets. She observed that women might be less inspired to participate in political life because they see the abuse other women face—and the lack of defense or support these women get from other people. “I decided that since we’re a group that nobody speaks for… I’m going to speak for women in politics,” said Lugangira.

Chou said that she faced online harassment when she became an activist for diversity, equity, and inclusion in the tech community. She wanted to address the problem that she was facing herself and founded Block Party, a company that builds tools to combat online harassment.  

Despite these challenges, the panelists discussed potential solutions and ways forward. Australia is leading by example with its eSafety commissioner and Online Safety Act, which provide Australians with an avenue through which to report online abuses and receive assistance. Fernanda Martins, director of InternetLab, discussed the need to change how marginalized communities that face gendered abuse are seen and talked about; instead of talking about the community as a problem, it’s important to see them as part of the solution and bring them into the discussions.

Abigail Wollam is an assistant director at the Atlantic Council’s DFRLab.

Read more: “The international community must protect women politicians from abuse online. Here’s how.” (transcript, Atlantic Council, June 8, 2023). At RightsCon, human-rights advocates and tech leaders who have faced harassment online detail their experiences—and ways the international community can support women moving forward.

Day one wraps with recommendations for Africa’s digital transformation, Venezuela’s digital connectivity, and an inclusionary web

This year at RightsCon Costa Rica, the DFRLab previewed its forthcoming Task Force for a Trustworthy Future Web report and gathered human-rights defenders and tech leaders to talk about digital frameworks in Africa, disinformation in Latin America and Ukraine, the impact online harassment has on women in political life, and what’s to come with the European Union’s Digital Services Act.

Read more: “The European Commission’s Rita Wezenbeek on what comes next in implementing the Digital Services Act and Digital Markets Act” (transcript, Atlantic Council, June 8, 2023). At a DFRLab RightsCon event, Wezenbeek spoke about the need to get everyone involved in the implementation of the DSA and DMA.

The programming kicked off on June 5 with the Digital Sherlocks training program in San José, which marked the first time the session was conducted in both English and Spanish. The workshop aimed to provide human-rights defenders with the tools and skills they need to build movements that are resilient to disinformation.  

On June 6, the programming opened with a meeting on centering human rights in the African Union’s Digital Transformation Strategy. The DFRLab gathered stakeholders from democracy, rights, and tech communities across the African continent to discuss the African Union’s Digital Transformation Strategy. Participants compared notes and identified opportunities for impact as the strategy approaches its mid-mandate review. 

Next, the DFRLab, Venezuela Inteligente, and Access Now hosted a session on strengthening Venezuela’s digital information ecosystem, a coalition-building meeting with twenty organizations. The discussion drew from a DFRLab analysis of Venezuela’s needs and capabilities related to the country’s media ecosystems and digital security, literacy, and connectivity. The speakers emphasized ways to serve vulnerable groups.

Following these discussions, the DFRLab participated in a dialogue previewing findings from the Task Force for a Trustworthy Future Web. The DFRLab’s Task Force is convening a broad cross-section of industry, civil-society, and government leaders to set a clear and action-oriented agenda for future online ecosystems. As the Task Force wrapped up its report, members discussed one of the group’s major findings: the importance of inclusionary design in product, policy, and regulatory development. To close out the first day of DFRLab programming at RightsCon Costa Rica, the task force told the audience that it will launch its report in the coming weeks.

What does a trustworthy web look like?

By Jacqueline Malaret and Abigail Wollam

The DFRLab’s Task Force for a Trustworthy Future Web is charting a clear and action-oriented roadmap for future online ecosystems to protect users’ rights, support innovation, and center trust and safety principles. As the Task Force wrapped up its report, members joined Task Force Director Kat Duffy to discuss one of the Task Force’s major findings—the importance of inclusionary design in product, policy, and regulatory development—on the first day of RightsCon Costa Rica.

In just eight weeks, Elon Musk took over Twitter, the cryptocurrency market crashed, ChatGPT launched, and major steps were made in the development of augmented reality and virtual reality, fundamentally shifting the landscape of how we engage with technology. Framing the panel, Duffy highlighted that not only has technology changed at a breakneck pace, but the development and professionalization of the trust and safety industry have also unfolded rapidly in tandem, bringing risks, harms, and opportunities to make the digital world safer for all.


Task Force for a Trustworthy Future Web

The Task Force for a Trustworthy Future Web will chart a clear and action-oriented roadmap for future online ecosystems to protect users’ rights, support innovation, and center trust and safety principles.

The three panelists—Agustina del Campo, director of the Center for Studies on Freedom of Expression; Nighat Dad, executive director of the Digital Rights Foundation; and Victoire Rio, a digital-rights advocate—agreed that the biggest risk, which could yield the greatest harm, is shaping industry practices through a Western-centric lens, without allowing space for the global majority. Excluding populations from the conversation around tech only solidifies the mistakes of the past and risks creating a knowledge gap. Additionally, the conversation touched on the risk of losing sight of the role of government, entrenching self-regulation as an industry norm, and absolving both companies and the state of responsibility for harms that can occur because of the adoption of these technologies.

Where there is risk, there is also an opportunity to build safer and rights-respecting technologies. Panelists said that they found promise in the professionalization and organization of industry, which can create a space for dialogue and for civil society to engage and innovate in the field. They are also encouraged that more and more industry engagements are taking place within the structures of international law and universal human rights. The speakers also pointed to new opportunities to shape regulation in a way that coalesces action around systemic and forward-looking solutions.

But how can industry, philanthropy, and civil society maximize these opportunities? There is an inherent need to support civil society that is already deeply engaged in this work and to help develop this field, particularly in the global majority. There is also a need to pursue research that can shift the narrative to incentivize investment in trust and safety teams and articulate a clear case for the existence of this work.

Jacqueline Malaret is an assistant director at the Atlantic Council’s DFRLab

Abigail Wollam is an assistant director at the Atlantic Council’s DFRLab

Mapping—and addressing—Venezuela’s information desert

By Iria Puyosa and Daniel Suárez Pérez

On June 6, the DFRLab, Venezuela Inteligente, and Access Now (which runs RightsCon) hosted a coalition-building meeting with twenty organizations that are currently working on strengthening Venezuela’s digital information ecosystem. The discussion was built on an analysis, conducted by the DFRLab, of the country’s media ecosystems and digital security, literacy, and connectivity; the speakers focused on ways to serve vulnerable groups such as grassroots activists, human-rights defenders, border populations, and populations in regions afflicted by irregular armed groups. 

Participants discussed the idea of developing a pilot project in an information desert that combines four dimensions—connectivity, relevant information, security, and literacy. They agreed that projects should combine technical solutions to increase access to connectivity and generate relevant information for communities, with a human-rights focus. In addition, projects should include a digital- and media-literacy component and continuous support for digital security.

Iria Puyosa is a senior research fellow at the Atlantic Council’s DFRLab

Daniel Suárez Pérez is a research associate for Latin America at the Atlantic Council’s DFRLab

Where open-source intelligence meets human-rights advocacy

By Ana Arriagada

On June 5, the DFRLab hosted a Digital Sherlocks workshop on strengthening human-rights advocacy through open-source intelligence (OSINT) and countering disinformation.

I co-led the workshop with DFRLab Associate Researchers Jean le Roux, Daniel Suárez Pérez, and Esteban Ponce de León.

In the session, attendees discussed the worrying rise of antidemocratic governments in Latin America—such as in Nicaragua and Guatemala—that are using open-source tools for digital surveillance and are criminalizing the work of journalists and human-rights defenders. Faced with these challenges, civil-society organizations must acquire and use investigative skills to produce well-documented reports and investigations.

During the workshop, DFRLab researchers shared their experiences investigating paid campaigns that spread disinformation or promote violence or online harassment. They recounted having used an array of tools to analyze the origin and behavior of these paid advertisements. 

DFRLab researchers also discussed tools that helped them detect suspicious activity on platforms such as YouTube, where, for example, some gamer channels spread videos related to disinformation campaigns or political violence. The workshop attendees also discussed how policy changes at Twitter have made the platform increasingly challenging to investigate, but they added that open-source researchers are still investigating, thanks to the help of available tools and the researchers’ creative methodologies. 

The workshop also showcased the DFRLab’s work with the Action Coalition on Meaningful Transparency (ACT). Attendees received a preview of ACT’s upcoming portal launch, for which the DFRLab has been offering guidance. The new resource will offer access to a repository of transparency reporting, policy documents, and analysis from companies, governments, and civil society. It will also include a registry of relevant actors and initiatives, and it will allow users to establish links between entries to see the connections between organizations, the initiatives they are involved in, and the reports they have published. 

The workshop ended with the DFRLab explaining that social network analysis—the study of social relationships and structures using graph theory—is important because it makes it possible to investigate suspicious activity or unnatural behavior exhibited by users on social media platforms.
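
To make that idea concrete, the sketch below shows one way such an analysis is often done in practice. It is not DFRLab code: it assumes the Python networkx library, and the account names and interactions are entirely hypothetical. The script simply surfaces the most-amplified accounts and the tightly knit clusters that an analyst might then examine by hand.

# A minimal, illustrative sketch (not DFRLab code) of social network analysis
# on a hypothetical sample of retweet-style interactions, using networkx.
import networkx as nx
from collections import Counter
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical interactions: (account_that_amplifies, account_amplified).
interactions = [
    ("acct_a", "suspect_hub"), ("acct_b", "suspect_hub"), ("acct_c", "suspect_hub"),
    ("acct_d", "suspect_hub"), ("acct_a", "news_outlet"), ("acct_e", "news_outlet"),
    ("acct_b", "acct_c"), ("acct_c", "acct_b"), ("acct_d", "acct_a"),
]

# Directed graph: an edge X -> Y means X amplified (for example, retweeted) Y.
G = nx.DiGraph()
G.add_edges_from(interactions)

# Accounts that receive a disproportionate share of amplification.
in_centrality = nx.in_degree_centrality(G)
print("Most-amplified accounts:", Counter(in_centrality).most_common(3))

# Tightly knit clusters of accounts that mostly interact with one another,
# one possible signal of coordinated behavior worth a closer manual look.
communities = greedy_modularity_communities(G.to_undirected())
for i, community in enumerate(communities):
    print(f"Cluster {i}: {sorted(community)}")

In a real investigation, the edge list would come from collected platform data, and metrics like these would be a starting point for manual review rather than a verdict on any account.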

Ana Arriagada is an assistant director for Latin America at the Atlantic Council’s DFRLab

Will the debt ceiling deal mean less for homeland security? https://www.atlanticcouncil.org/blogs/new-atlanticist/will-the-debt-ceiling-deal-mean-less-for-homeland-security/ Wed, 31 May 2023 19:00:12 +0000 https://www.atlanticcouncil.org/?p=650792 Congress needs to ensure that the Department of Homeland Security has the resources it needs to defend the nation against nonmilitary threats.

What the new budget deal to raise the federal debt ceiling means for homeland security is only slowly coming into focus. Very few of the initial statements out of the White House or House Republican leadership about the Fiscal Responsibility Act of 2023 mention what the new budget cap means for the Department of Homeland Security (DHS) or for homeland security more broadly. A close look, however, leaves reason for concern. DHS will be competing for fewer civilian budget dollars against the full range of the nation’s domestic needs and priorities. This puts the United States’ defenses at risk in areas where the threats are increasing, as in cybersecurity, border and immigration security, and domestic counterterrorism. 

US President Joe Biden and House Speaker Kevin McCarthy deserve praise for avoiding a catastrophic default on the United States’ fiscal obligations that otherwise would have disrupted debt payments, Social Security payments to seniors, and the federal payroll that includes everyone who keeps the United States safe. Most commentators on the budget part of the deal have focused on the contrast between “defense spending,” where the agreement largely endorses the Biden administration’s requested increase for the Department of Defense, and domestic programs, which are slated for a cut from the previous year’s levels. However, it is important to remember that DHS leads the defense of the United States against nonmilitary threats. DHS is responsible for border, aviation, and maritime security, as well as cybersecurity. It also helps protect critical infrastructure, oversees immigration, builds resilience, restores communities after disasters, and combats crimes of exploitation. As the third-largest cabinet department in the federal government, DHS has a budget that is intrinsically linked to the security of the United States. However, DHS’s budget for fiscal year (FY) 2024 is not getting the same treatment as the budget for the Department of Defense (DOD).

When security is “nonsecurity”

The Fiscal Responsibility Act of 2023 classifies most of DHS’s budget as “nonsecurity.” This is paradoxical but true. Barring future changes to the deal, which are always possible, DHS will be in a zero-sum competition in the FY 2024 budget negotiations against other civilian programs such as nutrition programs for children, domestic law enforcement, housing programs, community grants programs, and national parks. Whereas the federal government should be spending more on cybersecurity, border and immigration security, and community programs to prevent violent extremism and domestic terrorism, the Fiscal Responsibility Act of 2023 will make this harder because the overall pot of money for nondefense programs for FY 2024 will be less than in FY 2023. This appears to be the case even though more spending on cybersecurity and border security has strong bipartisan support.

The Fiscal Responsibility Act of 2023 follows the legislative language of the Budget Control Act of 2011 (the first of several debt ceiling deals in the Obama administration), which divided so-called “discretionary” federal spending into two different two-way splits. First, there is the “security category” and the “nonsecurity category.” The security category includes most of the budgets of the departments of Defense, Homeland Security, and Veterans Affairs. It also includes the National Nuclear Security Administration, the intelligence community management account, and the so-called “150 account” for international programs such as military aid, development assistance, and overseas diplomatic operations. The nonsecurity category is essentially everything else, such as the departments of Justice, Health and Human Services, Commerce, Housing and Urban Development, and Interior. 

Central to the 2011 budget deal was that it did not apply to nondiscretionary programs such as Social Security and fee-based programs such as citizenship and visa applications, which are not considered “discretionary” spending. Emergency spending, narrowly defined, was exempt from the budget caps, as was most of the war against al-Qaeda, which was categorized as “Overseas Contingency Operations” and exempt from the budget caps that began in 2011.


The second split in budget law, which originated in a budget deal in December 2013, is between the “revised security category” and the “revised nonsecurity category.” The revised security category includes only budget account 050, roughly 96 percent of which is the Department of Defense (budget code 051). About 3 percent is for nuclear programs run by the Department of Energy (code 053), and about 1 percent is for national defense-related programs at DHS, the Federal Bureau of Investigation (mainly counterintelligence programs), and parts of the Central Intelligence Agency.

The main DHS programs funded under this revised security category (budget code 054) are extremely limited: emergency management functions of the Federal Emergency Management Agency on things like emergency communications systems and alternate sites the federal government could use in case of emergency or an extreme event such as a nuclear attack, as well as some functions of the Cybersecurity and Infrastructure Security Agency.

Thus, since 2013, most of the budgets of DHS, the Department of Veterans Affairs, and foreign military assistance have been in the “security category” but have also paradoxically been in the “revised nonsecurity category.”

In the May 2023 budget deal, the $886.3 billion spending cap agreed to by the White House and the House Republican leadership for FY 2024 is only for the “revised security category.” Most of DHS, the Department of Veterans Affairs, and military assistance are lumped in with the $703.6 billion cap for “revised nonsecurity” civilian parts of the federal government. Of that $703.6 billion, $121 billion is earmarked for veterans’ programs. After several other adjustments and offsets, as the White House calculates it, this leaves $637 billion for all other “revised nonsecurity” programs. This is a nominal cut of one billion dollars from what those departments got in the FY 2023 budget passed in December 2022. Because inflation in the past year was 4.9 percent, the effective budget cut to “revised nonsecurity programs” would be greater than one billion dollars. The House Republicans calculate an even greater cut, to $583 billion, by not including the adjustments and offsets.

Flash back to 2011 and forward to 2024

In 2011, the debate between the Obama administration and the Republicans in Congress could be simplified into the idea that Democrats wanted more spending on social programs in the “nonsecurity category,” while Republicans wanted more money spent on “security,” principally defense spending but also including homeland security.

The debate in 2023 does not break down so neatly. There is increasing, bipartisan agreement that the United States needs to be spending more on border and immigration security, and that waiting until the start of FY 2024 to address this shortfall is not going to enable the administration’s strategy to succeed. There is also bipartisan agreement that the federal government as a whole should spend more on cybersecurity. And as the Bipartisan Safer Communities Act showed, mental health and community grants to address the causes of school shootings have bipartisan support. There is also bipartisan support for military assistance to help Ukraine defend itself from Russian aggression and to help Taiwan build up its defenses to deter a possible Chinese invasion. These programs are all funded mostly or wholly from “revised nonsecurity” programs. It is not clear how these programs will fare in the budget environment created by the Fiscal Responsibility Act of 2023.


Other departments and agencies can reallocate funds when priorities change, but not DHS. After DOD successfully led international efforts to take away the Islamic State of Iraq and al-Sham’s territory in Iraq and Syria, the military was able to pivot to Asia, redeploying drones and personnel out of the Middle East to defend the Indo-Pacific. However, for DHS, as the 2023 Quadrennial Homeland Security Review made clear, threats seldom go away, even when the homeland faces new threats. Commercial aviation and borders still need to be protected, even while cyber threats mount and increased quantities of fentanyl come through ports of entry.

As valid as these concerns are, they are no reason to torpedo the Fiscal Responsibility Act of 2023. To the contrary, failure to pass the bill would gravely jeopardize national and homeland security, not to mention the economic security of the United States.

Nor do these concerns mean that other departments and agencies do not have their own justifications for increased resources in FY 2024. But the Fiscal Responsibility Act of 2023 is not going to make it easier for homeland security. Congress needs to recognize this as it works toward the final budget for FY 2024, and, perhaps more urgently, when it considers whether to pass an emergency supplemental appropriations bill for border and immigration security. Congress needs to ensure, as it provided for military security in the “security category” of the Fiscal Responsibility Act, that DHS has the resources it needs to defend the nation against nonmilitary threats.


Thomas S. Warrick is the director of the Future of DHS project at the Scowcroft Center for Strategy and Security’s Forward Defense program and a nonresident senior fellow with the Scowcroft Middle East Security Initiative at the Atlantic Council. He is a former DHS deputy assistant secretary for counterterrorism policy.

Ukraine’s Diia platform sets the global gold standard for e-government https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-diia-platform-sets-the-global-gold-standard-for-e-government/ Wed, 31 May 2023 01:30:31 +0000 https://www.atlanticcouncil.org/?p=650569 Ukraine's Diia app is widely seen as the world's first next-generation e-government platform, and is credited with implementing what many see as a more human-centric government service model, writes Anatoly Motkin.

Several thousand people gathered at the Warner Theater in Washington DC on May 23 for a special event dedicated to Ukraine’s award-winning e-governance platform Diia. “Ukrainians are not only fighting. For four years behind the scenes, they have been creating the future of democracy,” USAID Administrator Samantha Power commented at the event.

According to Power, users of Diia can digitally access the kinds of state services that US citizens can only dream of, including crossing the border using a smartphone application as a legal ID, obtaining a building permit, and starting a new business. The platform also reduces the potential for corruption by removing redundant bureaucracy, and helps the Ukrainian government respond to crises such as the Covid pandemic and the Russian invasion.

Since February 2022, the Diia platform has played a particularly important part in Ukraine’s response to Russia’s full-scale invasion. According to Ukraine’s Minister of Digital Transformation Mykhailo Fedorov, in the first days of the invasion the platform made it possible to provide evacuation documents along with the ability to report property damage. Other features have since been added. The e-enemy function allows any resident of Ukraine to report the location and movement of Russian troops. Radio and TV functions help to inform people who find themselves cut off from traditional media in areas where broadcasting infrastructure has been damaged or destroyed.

Today, the Diia ecosystem offers the world’s first digital passport and access to 14 other digital documents along with 25 public services. It is used by more than half the Ukrainian adult population. In addition to consumer-oriented functions, the system collects information for the national statistical office and serves as a digital platform for officials. Diia is widely seen as the world’s first next-generation e-government platform, and is credited with implementing what many see as a more human-centric government service model.


In today’s increasingly digital environment, governments may find that they have many siloed systems in place, with each system based on its own separate data, infrastructure, and even principles. As a result, people typically suffer from additional bureaucracy and need to deal repeatedly with different official organizations. Most e-government initiatives are characterized by the same problems worldwide, such as technical disparity among state systems, inadequate data security and data protection, a lack of interoperability, and inefficient interaction between different elements. Ukraine is pioneering efforts to identify more human-centric solutions to these common problems.

One of the main challenges on the path to building sustainable e-government is to combine user friendliness with a high level of cyber security. If we look at the corresponding indices such as the Online Services Index and Baseline Cyber Security Index, we see that only a handful of European countries have so far managed to achieve the right balance: Estonia, Denmark, France, Spain, and Lithuania. Beyond Europe, only Singapore and Malaysia currently meet the necessary standards.

Ukraine has a strong record in terms of security. Since the onset of the Russian invasion, the Diia system has repeatedly been attacked by Russian cyber forces and has been able to successfully resist these attacks. This is an indication that the Ukrainian platform has the necessary reserve of cyber security along with a robust and secure digital public infrastructure.

The success of the IT industry in Ukraine over the past decade has already changed international perceptions of the country. Instead of being primarily seen as an exporter of metals and agricultural products, Ukraine is now increasingly viewed as a trusted provider of tech solutions. The Ministry of Digital Transformation is now working to make Diia the global role model for human-centric GovTech. According to Samantha Power, the Ukrainian authorities are interested in sharing their experience with the international community so that others can build digital infrastructure for their citizens based on the same human-centric principles.

USAID has announced a special program to support countries that, inspired by Diia, will develop their own e-government systems based on it. This initiative will be launched initially in Colombia, Kosovo, and Zambia. Ukraine’s Diia system could soon be serving as a model throughout the transitional world.

As they develop their own e-government systems based on Ukraine’s experience and innovations, participating governments should be able to significantly reduce corruption tied to bureaucratic obstacles. By deploying local versions of Diia, transitional countries will also develop a large number of their own high-level IT specialists with expertise in e-government. This is an important initiative that other global development agencies may also see value in supporting.

Anatoly Motkin is president of the StrategEast Center for a New Economy, a non-profit organization with offices in the United States, Ukraine, Georgia, Kazakhstan, and Kyrgyzstan.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


The 5×5—Cross-community perspectives on cyber threat intelligence and policy https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-cross-community-perspectives-on-cyber-threat-intelligence-and-policy/ Tue, 30 May 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=649392 Individuals with experience from the worlds of cyber threat intelligence and cyber policy share their insights and career advice.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

A core objective of the Atlantic Council’s Cyber Statecraft Initiative is to shape policy in order to better secure users of technology by bringing together stakeholders from across disciplines. Cybersecurity is strengthened by ongoing collaboration and dialogue between policymakers and practitioners, including cyber threat intelligence analysts. Translating the skills, products, and values of these communities for one another can be challenging, but doing so pays off: it helps drive intelligence requirements and keeps policymakers abreast of the latest developments and realities regarding threats. For younger professionals, jumping from one community to another can appear to be a daunting challenge.

We brought together five individuals with experience from both the worlds of cyber threat intelligence and cyber policy to share their experiences, perspectives on the dynamics between the two communities, and advice to those interested in transitioning back and forth.

#1 What’s one bad piece of advice you hear for threat intelligence professionals interested in making a transition to working in cyber policy?

Winnona DeSombre Bernsen, nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“I have not heard bad pieces of advice specifically geared toward threat intelligence professionals, but I was told by someone once that if I wanted to break into policy, I could not focus on cyber. This is mostly untrue: the number of cyber policy jobs in both the public and the private sectors is growing rapidly, because so many policy problems touch cybersecurity. Defense acquisition? Water safety? Civil Rights? China policy? All of these issues (and many more!) touch upon cybersecurity in some way. However, cyber cannot be your only focus! As most threat intelligence professionals know, cybersecurity does not operate in a vacuum. A company’s security protocols are only as good as the least aware employee, and a nation-state’s targets in cyberspace usually are chosen to further geopolitical goals. Understanding the issues that are adjacent to cyber in a way that creates sound policy is important when making the transition.”

Sherry Huang, program fellow, Cyber Initiative and Special Projects, William and Flora Hewlett Foundation

“I would not count this as advice, but the emphasis on getting cybersecurity certifications that is persistent in the cyber threat intelligence community is not directly helpful to working in the cyber policy space. Having technical knowledge and skills is always a plus, but in my view, having the ability to translate between policymakers and technical experts is even more valuable in the cyber policy space, and there is not a certification for that.” 

Katie Nickels, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; director of intelligence, Red Canary

“I think there is a misconception that to work in cyber policy, you need to have spent time on Capitol Hill or at a think tank. I have found that to be untrue, and I think that misconception might make cybersecurity practitioners hesitant to weigh in on policy matters. The way I think of it is that cyber policy is the convergence of two fields: cybersecurity and policymaking. Whichever field is your primary one, you will have to learn about the other. Practitioners can absolutely learn about policy.” 

Christopher Porter, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“When intelligence professionals think about policy work, they often experience a feeling of personal control—‘now I get to make the decisions!’ So there is a temptation to start applying your own pet theories or desired policy outcomes and start working on persuasion. That is part of it, but in reality policymaking looks a lot like intelligence work in one key aspect—it is still a team sport. You have to have buy-in from a lot of stakeholders, many of whom will have different perspectives or intellectual approaches to the same problem. Even if you share the same goal, they may have very different tools. So just as intelligence is a team sport, policymaking is too. That is a reality that is not reflected in a lot of academic preparation, which emphasizes theoretical rather than practical policymaking.” 

Robert Sheldon, director of public policy & strategy, CrowdStrike

“I sometimes hear people treating technical career paths and policy career paths as binary–and I do not think that is the direction that we are headed as a community. People currently working in technical cybersecurity disciplines, including threat intelligence, should consider gaining exposure to policy work without fully transitioning and leaving their technical pursuits behind. This is a straightforward way to make ongoing, relevant contributions to a crowded cyber policy discourse.”

#2 What about working in threat intelligence best prepared you for a career in cyber policy, or vice versa?

DeSombre Bernsen: “Threat intelligence gave me two key skills: the first is the ability to analyze a large-scale problem. Just like threat intelligence analysts, cyber policymakers must look through large systems to find chokepoints and potential vulnerabilities, while also making sure that the analytic judgments one makes about the system are sound. This skill enables one to craft recommendations that best fit the problem. The second skill is the ability to tailor briefings to different principal decisionmakers. Threat intelligence is consumed by network defenders and C-suite executives alike, so understanding at what level you are briefing is key. A chief information security officer does not care about implementing YARA rules, just like a network defender does not want their time wasted with a recommendation on altering their company-wide phishing policies. Being able to figure out what the principal cares about, and to tailor recommendations to the audience best able to act on them, is applicable to the cyber policy field as well. When briefing a company or government agency, knowing their risk tolerance and organization mission, for example, helps tailor the briefing to help them understand what they can do about the problem.”

Huang: “Being a cyber threat intelligence analyst gave me exposure to a wide variety of issues that are top of mind for government and corporate clients. In a week, I could be writing about nation-state information operations, briefing clients on cybersecurity trends in a certain industry, and sorting through data dumps on dark web marketplaces. Knowing a bit about numerous cyber topics made it easier for me to identify interest areas that I wanted to pursue in the cyber policy space and, more importantly, allows me to easily understand and interact with experts on different cyber policy issue areas, which is helpful in my current role.”

Nickels: “The ability to communicate complex information in an accessible way is a skill I learned from my threat intelligence career that has translated well to policy work. Threat intelligence is all about informing decisions, so there are many overlaps with writing to inform policy.” 

Porter: “In Silicon Valley, it is typical to have a position like ‘chief solutions architect.’ I have spent most of my career in intelligence being the ‘chief problems architect.’ It is the nature of the job to look for threats, problems, and shortcomings. Policymakers have the inverse task—to imagine a better future and build it, even if that is not the path we are on currently. But still, I think policymakers need to keep in mind how their plans might fail or lead to unintended consequences. When it comes to cybersecurity, new policies almost never eliminate a threat, they only change its shape. Much like the end to Ghostbusters, you get to choose the kind of problem you are going to face, but not whether or not you face one. Anyone with a background in intelligence will be ready for that step, where you have to imagine second- and third-order implications beyond the first-order effect you are seeking to have.” 

Sheldon: “Working as an analyst early in my career taught me a lot about analytical methods and rigor, evidence quality, and constructing arguments. Each of these competencies apply directly to policy work.”

#3 What realities of working in the threat intelligence world do you believe are overlooked by the cyber policy community?

DeSombre Bernsen: “The cyber policy community has not yet realized that threat intelligence researchers and parts of the security community themselves—similarly to high-level cyber policy decisionmakers—are targets of cyberespionage and digital transnational repression. North Korea, Russia, China, and Iran have all targeted researchers and members of civil society in cyberspace. Famously, North Korea would infect Western vulnerability researchers, likely to steal capabilities. In addition, threat intelligence researchers lack the government protections many policymakers have. Researchers who publicly lambast US adversaries can be targeted and threatened online by state-backed trolls. Protections for these individuals are few and far between—CISA just this year rolled out a program for protecting civil society members targeted by transnational repression, so I hope it gets expanded soon.”

Huang: “Most of the time, threat intelligence analysts (at least in the private sector) do not hear from clients after a report has gone out and do not have visibility into whether their analysis and recommendations are helpful or have real-world impact. Feedback, whether positive or constructive, can help analysts fine-tune their craft and improve future analysis.” 

Nickels: “I think the cyber policy community largely considers threat intelligence to be information shared about breaches, often in the form of indicators like IP addresses. While that can be one aspect of it, they may not recognize that threat intelligence analysts consider much more than that. Broadly, threat intelligence is about using an understanding of how cyber threats work to make decisions. Under that broad definition, cyber policymakers have a significant need for threat intelligence—if policymakers do not know how the threats operate, they cannot determine how to create policies to help organizations better protect against them.”
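
For readers less familiar with the indicator-centric view Nickels describes, shared technical threat intelligence often travels as small structured objects rather than prose. The snippet below is an illustration only: it builds a STIX 2.1-style indicator in plain Python, and the identifier, timestamps, and IP address are placeholders rather than real data.

# Illustrative only: a STIX 2.1-style indicator object of the kind often shared
# between organizations. All specific values below are placeholders.
import json

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "created": "2023-05-30T00:00:00.000Z",
    "modified": "2023-05-30T00:00:00.000Z",
    "name": "Suspected command-and-control server",
    "pattern": "[ipv4-addr:value = '198.51.100.23']",  # address from a documentation range
    "pattern_type": "stix",
    "valid_from": "2023-05-30T00:00:00.000Z",
}

print(json.dumps(indicator, indent=2))

As Nickels notes, objects like this capture only one narrow slice of threat intelligence; the behavioral and contextual analysis that informs policy decisions does not reduce to a list of addresses.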

Porter: “There are aspects of the work—such as attribution—that are more reliable and not as difficult as imagined. Conversely, there are critical functions, like putting together good trends data or linking together multiple different pieces of evidence, that can be very difficult and time-intensive but seem simple to those outside the profession. So there is always a little bit of education that needs to take place before getting into a substantive back-and-forth, where the cyber intelligence community needs to explain a little bit about how they are doing their work, and the strengths and limitations of that so that everyone has the same assumptions and understands one another’s perspective.” 

Sheldon: “The policy community sometimes lacks understanding of the sources and methods that threat intelligence practitioners leverage in their analysis. This informs the overall quality of their work, the skill needed to produce it, timeliness, extensibility, the possibility for sharing, and so on. All of these are good reasons for the two communities to talk more about how they do their work.”


#4 What is the biggest change in writing for a threat intelligence audience vs. policymakers? 

DeSombre Bernsen: “The scope is much broader. Threats to a corporate system are confined largely to the corporate system itself, but the world of geopolitics has far more players and many more first- and second-order effects of the policies you recommend.”

Huang: “Not having to be as diligent about confidence levels! Jokes aside, it is similar in that being precise in wording and being brief and to the point are appreciated by both audiences. However, I do find that a policy audience often cares more about the forward-looking aspect and the ‘so what?’” 

Nickels: “The biggest difference is that when writing for policymakers, you are expected to express your opinion! As part of traditional intelligence doctrine, threat intelligence analysts avoid injecting personal opinions into their assessments and try to minimize the effects of their cognitive biases. Intelligence analysts might write about potential outcomes of a decision, but should not weigh in on which decision should be made. However, policymakers want to hear what you recommend. It can feel freeing to be able to share opinions, and it remains valuable to try to hedge against cognitive biases because it allows for sounder policy recommendations.” 

Porter: “Threat intelligence professionals are going to be very interested in how the work gets done, as the culture—to some degree—borrows from academic work, in terms of rewarding reproducibility of results and sharing of information. But, strictly speaking, policymakers do not care about that. Their job is to link the findings in those reports to the broader strategic context. One really only needs to show enough of how the intelligence work was done to give the policymaker confidence and help them use the intelligence appropriately without understating or overstating the case. The result is that for policy audiences you end up starting from the end of the story—instead of a blog post or white paper building up to a firm conclusion, you talk about the conclusion and, depending on the level of technical understanding and skepticism on the part of the policymaker, may or may not get into the story of how things were pieced together at all.”

Sheldon: “Good writing in both disciplines has much in common. Each should be concise, include assertions and evidence, provide context, and make unknowns clear. But there are perhaps fewer ‘product types’ relevant to core threat intelligence consumers and, in some settings, analysts can assume some fundamental knowledge base among their audience.” 

#5 Where is one opportunity to work on policy while still in industry that most people miss?

DeSombre Bernsen: “You absolutely can work on policy issues while working in threat intelligence! I cannot just choose one, but I highly recommend searching for non-resident fellowship programs in think tanks (ECCRI, Atlantic Council, etc.), speaking at conferences on threat trends and their policy implications, and doing more policy through corporate threat wargaming internally.”

Huang: “Volunteering at conferences that involve the cyber policy community, such as Policy@DEF CON and IGF-USA. These are great opportunities to support policy-focused discussions and to have deeper interactions with peers in the cyber policy space.” 

Nickels: “In the United States, one commonly missed opportunity is to reach out to elected representatives with opinions on cybersecurity legislation. Cybersecurity practitioners can also be on the lookout for opportunities to provide comments that help shape proposed regulations affecting the industry. For example, the Commerce Department invited public comments to proposed changes to the Wassenaar Arrangement around export controls of security software, and cybersecurity practitioners weighed in on how they felt the changes would influence tool development.” 

Porter: “That will vary greatly from company to company; almost universally though, you will have the opportunity to help your colleagues and future generations by providing mentorship and career development opportunities. Personnel is policy, so in addition to thinking about particular policies you might want to shape, think also about how you can shape the overall policymaking process by helping others make the most of their talents. It will take years, but, in the long run, those are the kinds of changes that are most lasting.” 

Sheldon: “Regardless of your current role, you can read almost everything relevant to the policy discourse. National strategies, executive orders, bills, commission and think tank reports, and so on are all publicly available. Unfortunately, many in the policy community are only skimming, but reading these sources deeply and internalizing them is a great basis to distinguish yourself in a policy discussion. Also, there are more opportunities than ever to read and respond to Requests for Comment from the National Institute of Standards and Technology and other government agencies, and these frequently include very technical questions.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Iran is using its cyber capabilities to kidnap its foes in the real world https://www.atlanticcouncil.org/blogs/iransource/iran-cyber-warfare-kidnappings/ Wed, 24 May 2023 16:28:19 +0000 https://www.atlanticcouncil.org/?p=649191 This new form of transnational repression by Iran has alarmed security professionals and governments worldwide. 

In November 2020, as results for the closely watched and hotly contested United States presidential and congressional elections began to emerge, hackers gained access to at least one website announcing results. They were thwarted, but it took the resources of the US military and the Department of Homeland Security to block what could have turned into another attempt to spread doubts and confusion about a vote that would eventually threaten to undermine US democracy some weeks later. 

The culprit in the attack, according to US officials and tech professionals cited by The Washington Post, was a hacking group operating out of or at the direction of Iran—an increasingly powerful state actor in the world of cyber warfare. 

The Islamic Republic has been steadily improving and sharpening its cyber warfare, cyber espionage, and electronic sabotage abilities, staging complex operations that, while not always successful, show what experts in the field describe as devious inventiveness. 

In addition to its nuclear ambitions, its refining of missile technologies, and its cultivation of armed, ideologically motivated proxy paramilitary groups, Iran’s electronic warfare and intelligence operations are emerging as yet another worry about the country’s international posture.

The cyber realm fits snugly into Iran’s security arsenal. It is characterized by the asymmetry, clandestinity, and plausible deniability that complement the proxy and shadow operations that have been the Islamic Republic’s favored tools for decades.

Iran’s most aggressive cyber realm actions are also powered by a sense of righteous grievance and resentment, emotional and ideological motivations that have long energized the clerical establishment. After all, it was US and Israeli spy agencies that, according to many experts, launched the era of cyber warfare by deploying the Stuxnet virus against the country’s controversial nuclear program in 2010, damaging hundreds of its centrifuges. Tehran is proud that its growing army of techies is catching up and, in some ways, surpassing the West at its own games. 

Iran’s cyber efforts have been steadily broadening. They range from attempting to hack into defense, civil society, and private systems abroad to harassment campaigns against opponents in the diaspora. Experts closely watching Iran’s Internet and electronic warfare activities have detected an escalation of its abilities and ambitions in recent months. In early May, Microsoft issued a warning about Iran’s increasingly aggressive and sophisticated tactics. 

“Iranian cyber actors have been at the forefront of cyber-enabled influence operations, in which they combine offensive cyber operations with multi-pronged influence operations to fuel geopolitical change in alignment with the regime’s objectives,” said the report by Microsoft’s Clint Watts, a former FBI cybersecurity expert. 

In particular, Iran appears to be building complex tactics that merge cyber and real world operations to lure people into kidnappings. This new form of transnational repression has alarmed security professionals and governments worldwide. 

“We’re seeing an evolution over time of this actor evolving and using their techniques in ever more complex ways,” Sherrod DeGrippo, a former head of threat research and detection at the cyber security firm Proofpoint, told me in January. “Iran is seen in the big four of the main actors. It is really stepping onto the stage and evolving what it’s doing.”

One particularly nefarious tactic that they are using is creating fake personas in the form of researchers who approach targets and try to glean information or lure them out into the open for suspected kidnapping attempts. Through my research in Turkey, I learned that it is quite possible Iranian intelligence operatives have infiltrated the Turkish mobile phone networks and are using the data to track dissidents in the country. In one instance, a vocal dissident journalist received a message identifying a cafe near her home that she walked past every day. She was so terrified that she refused to leave her home for months and wound up obtaining asylum in a Western country.

In another instance, a dissident living in Turkey received messages with photographs of recent tourist sites he had visited on a trip to Istanbul. The speculation is that Iran had managed to purchase or surreptitiously access tracking data from these dissidents’ phones and used it to intimidate them.

According to a December 2022 report by Proofpoint, Iran’s cyber activities have gone beyond anonymous hacks and phishing campaigns to include made-up personas meant to lure people out into the open and, in at least one alleged case, a kidnapping attempt. Sometimes alleged Iranian operatives use US or Western phone numbers to register WhatsApp accounts, which can obscure their identities.

Last year, Israel’s domestic security service Shin Bet uncovered an alleged plot to use false identities with robust and complex legends to lure businessmen and scholars abroad in what security officials suspect were Iranian kidnapping plots. In one case, an operative pretending to be a prominent Swiss political scientist invited Israelis to a conference abroad. A number of Israelis were on the verge of traveling before the plot was exposed. 

Experts are also noticing that Iran is getting better and better at creating virtual honey traps. “They’re evolving their ability to create personas,” said DeGrippo, who has since moved to Microsoft. “They’ve used these personas that are mildly attractive. They like to use women’s names, as they have learned that they get a bit more interaction and success when they use female personas.”

The US and other Western countries are well aware of the threat posed by Iranian cyber operations and have taken steps to counter them. But Iran’s state-sponsored program continues to evolve. Tehran likely believes these cyber capabilities give it a way to extract information without the messiness of a hostage crisis, the headlines of a boat seizure, the riskiness of a human intelligence operation, or the potential retribution of a missile strike.

In January, the London cyber security firm Secureworks published a report on the emergence of a new, likely Iranian hacking collective called Abraham’s Ax, which aimed to use leaks and hacks to prevent the expansion of the Abraham Accords normalizing ties between Israel and some Arab states. The collective leaked material allegedly stolen from the Saudi Ministry of the Interior and a recording said to be an intercepted phone conversation between Saudi ministers.

“There are clear political motivations behind this group with information operations designed to destabilize delicate Israeli-Saudi Arabian relations,” Rafe Pilling, a researcher at Secureworks, was quoted as saying.

Less than two months later, in March, Saudi Arabia signed a deal to resume ties with Iran rather than commence them with Israel, as many in Washington and Jerusalem were expecting.  

While Prime Minister Benjamin Netanyahu’s hardline government and his rightwing policies likely played a major role in Saudi Arabia’s decision to hold off on joining the Abraham Accords, Riyadh’s hope that it could rein in Iran’s diverse array of threats—including its increasing cyber warfare capabilities—also likely factored into its decision to pen the China-brokered deal with Tehran.

Iran invests in its cyber warfare program because it works.

Borzou Daragahi is an international correspondent for The Independent. He has covered the Middle East and North Africa since 2002. He is also a nonresident fellow with the Atlantic Council’s Middle East Security Initiative. Follow him on Twitter: @borzou.

Regional cyber powers are banking on a wired future. Expanding the Abraham Accords to cybersecurity will help. https://www.atlanticcouncil.org/blogs/menasource/cybersecurity-iran-abraham-accords-israel/ Fri, 19 May 2023 19:44:07 +0000 https://www.atlanticcouncil.org/?p=647942 The Abraham Accord countries face threats from hostile actors, and defending their technology and their peoples is a challenge.


The Abraham Accords is one of the major diplomatic achievements of the last five years. This historic agreement normalized relations between Israel and the Arab countries of Bahrain, Morocco, Sudan, and the United Arab Emirates (UAE), in partnership with the United States. Following the initial burst of activity late in the Donald Trump administration, the accords’ first expansion under the Joe Biden administration was announced in Tel Aviv on January 31, when Bahrain, Israel, the UAE, and the United States said they would widen the scope of the accords to include cybersecurity.

The January announcement by US Department of Homeland Security Under Secretary for Strategy, Policy, and Plans Robert Silvers was, like the accords themselves, a surprise that seems perfectly logical in hindsight. Israel and the Arab countries who participated in the announcement are among the Middle East and North Africa (MENA) region’s most dynamic economies, with substantial public and private investments in high tech being an important factor in each country. These countries face threats from hostile actors, and defending their technology and their peoples is a challenge. A challenge shared can lead to a challenge overcome.

Cyberattacks from nation-states and cybercriminals affect everyone

Each of the countries involved, with the possible exception of Morocco, has recent historical reason to be concerned about protecting its people and its industrial base—cyber and non-cyber—against cyberattacks. The greatest threats come from the Islamic Republic of Iran and cybercriminals—and the two overlap like Venn diagram circles.

Iran displays a well-documented, peculiar sense of symmetry in how it conducts cyberattacks. Iran has an especially aggressive state-run offensive cyber capability for a country its size. Most of Iran’s nearby peers in population (e.g., Turkey, Congo, Thailand, and Tanzania) or GDP per capita (e.g., Bosnia, Namibia, Paraguay, and Ecuador) do not mount offensive cyberattacks or information operations against other countries on the scale that Tehran does. Iran and Israel have been engaged in “gray zone” cyberattacks against each other for more than a decade, and Iran has carried out various kinds of cyber operations against Israel, Saudi Arabia, Bahrain, most of the Arab countries of the Gulf, and the United States.

Cybercrime is another threat that has increased in recent years. The United States has convened two international conferences on ransomware, with the most recent being held in October-November 2022. The UAE and Saudi Arabia were the main targets in the Gulf for ransomware attacks, according to media reports, but other Gulf Arab countries are also at risk.

Complicating the picture is the fact that Iran often uses private contractors to carry out cyber operations—sometimes those entities carry out cyberattacks for profit as well. This complicates attribution and gives Tehran a patina of plausible deniability.

These factors make deterring cyberattacks especially difficult in the Middle East. The United States has sometimes retaliated against Iranian cyberattacks by carrying out operations against the perpetrators. However, the logic of deterrence requires an ability to impose costs that surpass the adversary’s perceived gains from the conduct in question. Iran has shown limited susceptibility thus far to being deterred by the US or others’ cyber operations. This makes cyber defense even more important.

Setting aside old rivalries to work together on cybersecurity is now in everyone’s interest

Iranian cyber behavior, the rising threat of cybercrime, and the inability so far to deter these behaviors have made it imperative that Israel, the Gulf countries, and the United States work more closely on civilian cyber defense.

Network imperatives make it important that this collaboration be both at network speeds and peer-to-peer. Cybersecurity needs to move quickly to be effective at addressing threats, which means that governments facing common threats should work together. The architecture of pre-Internet times allowed for hub-and-spoke information sharing in a situation where several governments were regional rivals but all had a common ally they could trust (usually an ally that was geographically distant).

As a result, the United States could simultaneously be an ally of Israel and most Arab countries in the Middle East, and each of the countries would be willing to share information with the United States, even if they wouldn’t do so with each other (France and the United Kingdom have played similar roles with different sets of countries). Each country could trust the United States to protect its sources and methods while working for the common good, which, in earlier days, was focused on keeping the Soviet Union at bay.

For a time, this approach worked in cybersecurity. But this is no longer the case. Al-Qaeda and the Islamic State of Iraq and al-Sham (ISIS) were social-media savvy but lacked the resources and deep bench of a nation-state, allowing the United States and MENA governments to limit terrorists’ efforts to raise funds and recruit new fighters.

Today, Iran, even under sanctions, has far more resources than al-Qaeda ever did to use cyber tools to target Israel and the Gulf Arab states. While there are signs that a lack of funds holds back some of Iran’s cyber operations, cyberattacks are still remarkably cost-effective. Cybercrime generates enough revenue for organized gangs to run their own 24/7 ransomware help desks; “ransomware-as-a-service” is now an established business model.

The countries in the MENA region still face a number of challenges in the cyber domain. The use of Chinese technology by some countries raises fears of possible network penetration. Each country needs to work out how privacy norms and expectations should govern electronic surveillance tools, because the abuse of those tools has become an international concern. US concerns over “spyware” have already led to an executive order against the use of commercial tools that pose a risk to national security or have been misused to enable human rights abuses around the world.

A number of countries in MENA—Israel, Bahrain, and the UAE included—are increasingly becoming regional cyber powers and are banking on a wired future. Many governments in the region are trying to stimulate local investment in the digital sector, and protecting small but growing companies from cyber threats is becoming a significant business, with market research experts estimating a doubling of dollar volume in five years. The UAE’s new National Security Strategy aims to train more than forty thousand cybersecurity professionals and encourages Emirati students to pursue a career in this field.

To the private sector, an agreement among Abraham Accords members is more than just a sign of possible government-to-government cooperation. The agreement gives a valuable green light for direct business-to-business exchanges that could benefit the economy of the region. It may also heighten the value of joining the accords for other nations facing cyber threats, such as Saudi Arabia.

Given the importance of a closer cybersecurity partnership among Israel, key Gulf Arab states, and the United States, broadening the Abraham Accords to include cybersecurity is an eminently sensible approach. Like other parts of the accords, expanding them to include cybersecurity will have a lasting impact if cooperation leads to real benefits in security and commerce, making the Middle East more secure and prosperous than ever before.

Thomas S. Warrick is the director of the Future of DHS project at the Scowcroft Center for Strategy and Security’s Forward Defense practice, and a senior fellow with the Scowcroft Middle East Security Initiative at the Atlantic Council.

The post Regional cyber powers are banking on a wired future. Expanding the Abraham Accords to cybersecurity will help. appeared first on Atlantic Council.

The 5×5—Cryptocurrency hacking’s geopolitical and cyber implications https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-cryptocurrency-hackings-geopolitical-and-cyber-implications/ Wed, 03 May 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=641955 Experts explore the cybersecurity implications of cryptocurrencies, and how the United States and its allies should approach this challenge.

The post The 5×5—Cryptocurrency hacking’s geopolitical and cyber implications appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

In January 2023, a South Korean intelligence service and a team of US private investigators conducted an operation to interdict $100 million worth of stolen cryptocurrency before its hackers could successfully convert the haul into fiat currency. The operation was the culmination of a roughly seven-month hunt to trace and retrieve the funds, stolen in June 2022 from a US-based cryptocurrency company, Harmony. The Federal Bureau of Investigation (FBI) attributed the theft to a team of North Korean state-linked hackers—one in a string of massive cryptocurrency hauls aimed at funding the hermit kingdom’s illicit nuclear and missile programs. According to blockchain analysis firm Chainalysis, North Korean hackers stole roughly $1.7 billion worth of cryptocurrency in 2022—a large percentage of the approximately $3.8 billion stolen globally last year.

North Korea’s operations have brought attention to the risks surrounding cryptocurrencies and how state and non-state groups can leverage hacking operations against cryptocurrency wallets and exchanges to further their geopolitical objectives. We brought together a group of experts to explore cybersecurity implications of cryptocurrencies, and how the United States and its allies should approach this challenge.

#1 What are the cybersecurity risks of decentralized finance (DeFi) and cryptocurrencies? What are the cybersecurity risks to cryptocurrencies?

Eitan Danon, senior cybercrimes investigator, Chainalysis

Disclaimer: Any views and opinions expressed are the author’s alone and do not reflect the official position of Chainalysis. 

“DeFi is one of the cryptocurrency ecosystem’s fastest-growing areas, and DeFi protocols accounted for 82.1 percent of all cryptocurrency stolen (totaling $3.1 billion) by hackers in 2022. One important way to mitigate against this trend is for protocols to undergo code audits for smart contracts. This would prevent hackers from exploiting vulnerabilities in protocols’ underlying code, especially for cross-chain bridges, a popular target for hackers that allows users to move funds across blockchains. As far as the risk to cryptocurrencies, the decentralized nature of cryptocurrencies increases their security by making it extraordinarily difficult for a hostile actor to take control of permissionless, public blockchains. Transactions associated with illicit activity continue to represent a minute portion (0.24 percent) of the total crypto[currency] market. On a fundamental level, cryptocurrency is a technology—like data encryption, generative artificial intelligence, and advanced biometrics—and thus a double-edged sword.” 

Kimberly Donovan, director, Economic Statecraft Initiative, and Ananya Kumar, associate director of digital currencies, GeoEconomics Center, Atlantic Council

“We encourage policymakers to think about cybersecurity vulnerabilities of crypto-assets and services in two ways. The first factor is the threat of cyberattacks for issuers, exchanges, custodians, or wherever user assets are pooled and stored. Major cryptocurrency exchanges like Binance and FTX have had serious security breaches, which has led to millions of dollars being stolen. The second factor to consider is the use of crypto-assets and crypto-services in money-laundering. Often, attackers use cryptocurrencies to receive payments due to the ability to hide or obfuscate financial trails, often seen in the case of ransomware attacks. Certain kinds of crypto-services such as DeFi mixers and aggregators allow for a greater degree of anonymity to launder money for criminals, who are interested in hiding money and moving it quickly across borders.” 

Giulia Fanti, assistant professor of electrical and computer engineering, Carnegie Mellon University

“The primary cybersecurity risks (and benefits) posed by DeFi and cryptocurrencies are related to lack of centralized control, which is inherent to blockchain technology and the philosophy underlying it. Without centralized control, it is very difficult to control how these technologies are used, including for nefarious purposes. Ransomware, for example, enables the flow of money to cybercriminal organizations. The primary cybersecurity risks to cryptocurrencies, on the other hand, can occur at many levels. Cryptocurrencies are built on various layers of technology, ranging from an underlying peer-to-peer network to a distributed consensus mechanism to the applications that run atop the blockchain. Attacks on cryptocurrencies can happen at any of these layers. The most widely documented attacks—and those with the most significant financial repercussions—are happening at the application layer, usually exploiting vulnerabilities in smart contract code (or in some cases, private code supporting cryptocurrency wallets) to steal funds.”

Zara Perumal, chief technology officer, Overwatch Data

“Decentralized means no one person or institution is in control. It also means that no one person can easily step in to enforce. In cases like Glupteba, fraudulent servers or data listed on a blockchain can be hard to take down in comparison to cloud-hosted servers, where companies can intervene. Cybersecurity risks to cryptocurrencies include endpoint risk: since there is no centralized party to handle restoring accounts, standard credential theft remains a risk to cryptocurrency users. There is a bigger risk in cases like crypto[currency] lending, where one wallet or owner holds a lot of keys and is a large target. In 2022, there were numerous high-profile protocol attacks, including the Wormhole, Ronin, and BitMart attacks. These attacks highlight the risks associated with fundamental protocol vulnerabilities via blockchain, smart contracts, or user interfaces.”
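Both Fanti and Perumal point to flaws in smart contract code as the dominant attack surface. The classic illustration is a reentrancy bug, in which a contract pays out before updating its own ledger. The snippet below is a deliberately simplified Python analogy of that pattern rather than a real contract (production contracts are typically written in Solidity and run on a blockchain virtual machine); the class, function, and account names are hypothetical and exist only to show the ordering mistake that auditors look for.

```python
# Simplified Python analogy of a reentrancy-vulnerable withdrawal routine.
# Hypothetical names; real smart contracts do not run in Python.

class VulnerableVault:
    def __init__(self):
        self.balances = {}      # account -> recorded deposit
        self.total_held = 0     # funds the vault actually holds

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount
        self.total_held += amount

    def withdraw(self, account, amount, send):
        if self.balances.get(account, 0) < amount:
            raise ValueError("insufficient balance")
        send(amount)                        # external call happens FIRST...
        self.balances[account] -= amount    # ...the ledger is updated only afterwards
        self.total_held -= amount


class Attacker:
    """Re-enters withdraw() from inside the payment callback."""

    def __init__(self, vault, stake):
        self.vault = vault
        self.stake = stake
        self.received = 0
        self.reentries = 0

    def receive(self, amount):
        self.received += amount
        if self.reentries < 3:
            self.reentries += 1
            # The attacker's recorded balance has not been decremented yet,
            # so the balance check in withdraw() passes again.
            self.vault.withdraw("attacker", self.stake, self.receive)


vault = VulnerableVault()
vault.deposit("victim", 100)
vault.deposit("attacker", 10)

attacker = Attacker(vault, stake=10)
vault.withdraw("attacker", 10, attacker.receive)
print(attacker.received)   # 40 drained against a 10 deposit; the vault is now short
```

The standard remediation mirrors the checks-effects-interactions discipline that code audits look for: update internal balances before making any external call.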

#2 What organizations are most active and capable of cryptocurrency hacking and what, if any, geopolitical impact does this enable for them?

Danon: “North Korea- and Russia-based actors remain on the forefront of crypto[currency] crime. North Korea-linked hackers, such as those in the Lazarus Group cybercrime syndicate, stole an estimated $1.7 billion in 2022 in crypto[currency] hacks that the United Nations and others ­­have assessed the cash-strapped regime uses to fund its weapons of mass destruction and ballistic missiles programs. Press reporting about Federation Tower East—a skyscraper in Moscow’s financial district housing more than a dozen companies that convert crypto[currency] to cash—has highlighted links between some of these companies to money laundering associated with the ransomware industry. Last year’s designations of Russia-based cryptocurrency exchanges Bitzlato and Garantex for laundering hundreds of millions of dollars’ worth of crypto[currency] for Russia-based darknet markets and ransomware actors cast the magnitude of this problem into starker relief and shed light on a diverse constellation of cybercriminals. Although many pundits have correctly noted that Russia cannot ‘flip a switch’ and run its G20 economy on the blockchain, crypto[currency] can enable heavily sanctioned countries, such as Russia, North Korea, and others, to project power abroad while generating sorely needed revenue.” 

Donovan and Kumar: “We see actors from North Korea, Iran, and Russia using both kinds of cybersecurity threats described above to gain access to money and move it around without compliance. Geopolitical implications include sanctioned state actors or state-sponsored actors using the technology to generate revenue and evade sanctions. Hacking and cyber vulnerabilities are not specific to the crypto-industry and exist across digital infrastructures, specifically payments architecture. These threats can lead to national security implications for the private and public entities accessing or relying on this architecture.” 

Perumal: “Generally, there are state-sponsored hacking groups that are targeting cryptocurrencies for financial gain, but also those like the Lazarus Group that are disrupting the cryptocurrency industry. Next, criminal hacking groups may both use cryptocurrency to receive ransom payments or also attack on chain protocols. These groups may or may not be associated with a government or political agenda. Many actors are purely financially motivated, while other government actors may hack to attack adversaries without escalating to kinetic impact.”

#3 How are developments in technology shifting the cryptocurrency hacking landscape?

Danon: “The continued maturation of the blockchain analytics sector has made it harder for hackers and other illicit actors to move their ill-gotten funds undetected. The ability to visualize complex crypto[currency]-based money laundering networks, including across blockchains and smart contract transactions, has been invaluable in enabling financial institutions and crypto[currency] businesses to comply with anti-money laundering and know-your-customer requirements, and empowering governments to investigate suspicious activity. In some instances, hackers have chosen to let stolen funds lie dormant in personal wallets, as sleuths on crypto[currency] Twitter and in industry forums publicly track high-profile hacks and share addresses in real-time, complicating efforts to off-ramp stolen funds. In other instances, this has led some actors to question whether this transparency risks unnecessary scrutiny from authorities. For example, in late April, Hamas’s military wing, the Izz al-Din al-Qassam Brigades, publicly announced that it was ending its longstanding cryptocurrency donation program, citing successful government efforts to identify and prosecute donors.” 

Donovan and Kumar: “Industry is responding and innovating in this space to develop technology to protect and/or trace cyber threats and cryptocurrency hacks. We are also seeing the law enforcement, regulatory, and other government communities develop the capability and expertise to investigate these types of cybercrimes. These communities are taking steps to make public the information gathered from their investigations, which further informs the private sector to safeguard against cyber operations as well as technology innovations to secure this space.” 

Fanti: “They are not really. For the most part, hacks on cryptocurrencies are not increasing in frequency because of sophisticated new hacking techniques, but rather because of relatively mundane vulnerabilities in smart contracts. There has been some research on using cutting-edge tools such as deep reinforcement learning to try to gain funds from smart contracts and other users, particularly in the context of DeFi. However, it is unclear to what extent DeFi users are using such tools; on-chain records do not allow observers to definitively conclude whether such activity is happening.” 

Perumal: “As the rate of ransomware attacks rises, cryptocurrency is more often used as a mechanism to pay ransoms. For both that and stolen cryptocurrency, defenders aim to track actors across the blockchain, and threat actors increase their usage of mixers and microtransactions to hide their tracks. A second trend is crypto-jacking and using cloud computing from small to large services to fund mining. The last development is not new. Sadly, phishing and social engineering for crypto[currency] logins are still a pervasive threat, and there is no technical solution to easily address human error.”
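The tracing work Danon describes above ultimately reduces to graph traversal over transaction records. The sketch below is a minimal, purely illustrative version of that idea: it assumes a flat list of (sender, receiver, amount) records and simply walks outward from a flagged address. The addresses, amounts, and hop limit are hypothetical, and production chain-analysis tooling layers far more sophisticated clustering and attribution heuristics on top of this basic structure.

```python
from collections import defaultdict, deque

# Hypothetical transaction records: (sender, receiver, amount).
transactions = [
    ("flagged_bridge_exploit", "wallet_a", 950.0),
    ("wallet_a", "mixer_1", 500.0),
    ("wallet_a", "wallet_b", 450.0),
    ("wallet_b", "exchange_deposit_9", 440.0),
    ("unrelated_user", "exchange_deposit_9", 25.0),
]

def trace_downstream(records, source, max_hops=4):
    """Breadth-first walk from a flagged address over outgoing transfers."""
    outgoing = defaultdict(list)
    for sender, receiver, amount in records:
        outgoing[sender].append(receiver)

    reached = {source: 0}               # address -> hops from the flagged source
    queue = deque([(source, 0)])
    while queue:
        address, hops = queue.popleft()
        if hops == max_hops:
            continue
        for receiver in outgoing[address]:
            if receiver not in reached:
                reached[receiver] = hops + 1
                queue.append((receiver, hops + 1))
    return reached

for address, hops in trace_downstream(transactions, "flagged_bridge_exploit").items():
    print(f"{address}: {hops} hop(s) from the flagged source")
```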


#4 What has been the approach of the United States and allied governments toward securing this space? How should they be approaching it?

Danon: “The US approach toward securing the space has centered on law enforcement actions, including asset seizures and takedowns with partners of darknet markets, such as Hydra Market and Genesis Market. Sanctions in the crypto[currency] space, which have dramatically accelerated since Russia’s invasion of Ukraine last February, have generated awareness about crypto[currency] based money laundering. However, as is the case across a range of national security problems, the United States has at times over relied on sanctions, which are unlikely to change actors’ behavior in the absence of a comprehensive strategy. The United States and other governments committed to AML should continue to use available tools and data offered by companies like Chainalysis to disrupt and deter bad actors from abusing the international financial system through the blockchain. Given the blockchain’s borderless and unclassified nature, the United States should also pursue robust collaboration with other jurisdictions and in multilateral institutions.” 

Donovan and Kumar: “The United States and its allies are actively involved in this space to prevent regulatory arbitrage and increase information sharing on cyber risks and threats. They have also increased communication with the public and private sectors to make them aware of cyber risks and threats, and are making information available to the public and industry to protect consumers against cybercrime. Government agencies and allies should continue to approach this issue by increasing public awareness of the threats and enabling industry innovation to protect against them.” 

Fanti: “One area that I think needs more attention from a consumer protection standpoint is smart contract security. For example, there could be more baseline requirements and transparency in the smart contract ecosystem about the practices used to develop and audit smart contracts. Users currently have no standardized way to evaluate whether a smart contract was developed using secure software development practices or tested prior to deployment. Standards bodies could help set up baseline requirements, and marketplaces could be required to report such details. While such practices cannot guarantee that a smart contract is safe, they could help reduce the prevalence of some of the most common vulnerabilities.” 

Perumal: “Two recent developments from the US government are the White House cybersecurity strategy and the Cybersecurity and Infrastructure Security Agency’s (CISA) move to ‘secure by default.’ They both emphasize cooperation with the private sector to move security of this ecosystem to cloud providers. While the system is inherently decentralized, if mining or credential theft is happening on major technology platforms, these platforms have an opportunity to mitigate risk. The White House emphasized better tracing of transactions to “trace and interdict ransomware payments,” and CISA emphasizes designing software and crypto[currency] systems to be secure by default so smaller actors and users bear less of the defensive burden. At a high level, I like that this strategy moves protections to large technology players that can defend against state actors. I also like the focus on flexible frameworks that prioritize economics (e.g., cyber liability) to set the goal, but letting the market be flexible on the solution—as opposed to a prescriptive regulatory approach that cannot adapt to new technologies. In some of these cases, I think cost reduction may be a better lever than liability, which promotes fear on a balance sheet, however, I think the push toward financially motivated goals and flexible solutions is the right direction.”

#5 Has the balance of the threats between non-state vs. state actors against cryptocurrencies changed in the last five years? Should we be worried about the same entities as in 2018?

Danon: “Conventional categories of crypto[currency]-related crime, such as fraud shops, darknet markets, and child abuse material, are on the decline. Similarly, the threat from non-state actors, such as terrorist groups, remains extremely low relative to nation states, with actors such as North Korea and Russia continuing to leverage their technical sophistication to acquire and move cryptocurrency. With great power competition now dominating the policy agenda across many capitals, analysts should not overlook other ways in which states are exercising economic statecraft in the digital realm. For example, despite its crypto[currency] ban, China’s promotion of its permissioned, private blockchain, the Blockchain-based Service Network, and its central bank digital currency, the ‘digital yuan,’ deserve sustained research and analysis. Against the backdrop of China’s rise and the fallout from the war on Ukraine, it will also be instructive to monitor the efforts of Iran, Russia, and others to support non-dollar-pegged stablecoins and other initiatives aimed at eroding the dollar’s role as the international reserve currency.” 

Donovan and Kumar: “More is publicly known now on the range of actors in this space than ever. Agencies such as CISA, FBI, and the Departments of Justice and the Treasury and others have made information available and provided a wide array of resources for people to get help or learn—such as stopransomware.gov. Private blockchain analytics firms have also enabled tracing and forensics, which in partnership with enforcement can prevent and punish cybercrime in the crypto[currency] space. Both the knowledge about ransomware and awareness of ransomware attacks have increased since 2018. As the popularity of Ransomware as a Service rises, both state and non-state actors can cause destruction. We should continue to be worried about cybercrime in general and remain agnostic of the actors.” 

Perumal: “State actors continue to get more involved in this space. As cryptocurrencies and some digital currencies based on the blockchain become more mainstream, attacking it allows a more targeted geopolitical impact. In addition to attacks by governments (like Lazarus Group), a big recent development was China’s ban on cryptocurrency, which moved mining power from China to other parts of the world, especially the United States and Russia. This changed attack patterns and targets. At a high level, we should be worried about both financially-motivated and government-backed groups, but as the crypto[currency] market grows so does the sophistication of attacks and attackers.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Cryptocurrency hacking’s geopolitical and cyber implications appeared first on Atlantic Council.

Practice makes perfect: What China wants from its digital currency in 2023 https://www.atlanticcouncil.org/blogs/econographics/practice-makes-perfect-what-china-wants-from-its-digital-currency-in-2023/ Mon, 24 Apr 2023 16:58:55 +0000 https://www.atlanticcouncil.org/?p=639365 The e-CNY network has expanded over the last year, and China's goals have only become clearer. Domestically, the People’s Bank of China is still in test-and-learn mode, globally, China is more focused on setting defining international standards.

The post Practice makes perfect: What China wants from its digital currency in 2023 appeared first on Atlantic Council.

It’s been a year since the Beijing Olympics, where China’s central bank digital currency (CBDC), the e-CNY, debuted in front of an international audience. As the e-CNY network has expanded over the last 12 months, China’s goals have become clearer. Domestically, the People’s Bank of China (PBOC) is still in test-and-learn mode, prioritizing experimentation over adoption. Globally, China is less focused on internationalizing the RMB than it is on setting technical and regulatory standards that will define how other countries’ central bank digital currencies will work going forward. 

Domestic ambitions 

Even with its persistently low adoption rates, the e-CNY is by far the largest CBDC pilot in the world by both the amount of currency in circulation—13.61 billion RMB—and the number of users—260 million wallets. As the pilot regions have expanded to 25 cities, so have the real-world use cases tested through the pilots. From the start, the PBOC’s objective within its borders has been not just to compete in China’s domestic payments landscape, which is dominated by two “private” players—AliPay and TencentPay/WePay—but to expand the universe of economic activities included in the state-enabled payments network. So far, common use cases being tested include public transportation, public health checkpoints including COVID test centers, integrated identification cards used to receive and pay for services such as retirement benefits and school tuition, as well as tax payments and refunds. The pilots have also begun testing technical and programmability functions such as smart contracts for B2B and B2C applications, e-commerce, and credit provision. Some of these projects are described in the table below.

These domestic test cases are likely to expand this year and cover a broader range of activities and regions. Already, the PBOC is looking to reach the margins of society: the e-CNY is being tested among elderly populations and in broader rural connectivity schemes initiated to improve digitization. It is also aiming to reach AliPay and TencentPay/WePay customers by integrating those platforms’ wallet and e-commerce functions for e-CNY distribution. Over the last few years, the PBOC, like the broader Chinese state apparatus, has displayed a tendency toward centralizing regulatory authority over the two sectors at the intersection of CBDCs—finance and technology. The expanded universe of economic networks enabled by the e-CNY has rightly created concerns regarding the centralization of authority by the PBOC and the resulting impacts on users’ freedom of choice and freedom from state surveillance. The expanded network of use cases across applications that collect data on personal identification, health information, and consumption habits and behavior should also raise concerns about the vulnerability of such data to cyber threats domestically and abroad.

Recent developments on regulation

Interestingly, on the regulatory side, a few changes to China’s financial regulators were announced at the National People’s Congress in early March. The PBOC has lost its authority over financial holding companies and financial consumer protection regulation to a new regulator, the State Administration of Financial Supervision, which will also oversee banking and insurance regulation. The PBOC is also opening 31 new provincial-level branches, signaling deeper coordination between the PBOC and provincial-level authorities. This reshuffle in authority signals further centralization of power under the party apparatus. Unlike other central banks, the PBOC is not fully independent and requires the State Council to sign off on decisions relating to money supply and interest rates; the State Council has been tracking the PBOC’s research into the e-CNY since approving the initial plan in 2016.

From a monetary policy perspective, the e-CNY infrastructure could be a handy tool in the hands of the PBOC, with which it can increase or decrease the money supply. As China devises a strategy to stimulate consumer spending this year, there is an opportunity to do so by using and expanding the e-CNY network. China has already increased banks’ short-term liquidity by $118 billion and long-term liquidity by $72 billion through reducing reserve ratio requirements this year.

The PBOC’s ambition for an all-encompassing domestic network of e-CNY infrastructure raises questions about the state’s ability and reach in controlling citizens’ activities. The pilots test real-world scenarios for CBDC use cases, and while adoption has been low, the broad range of applications suggests that testing, not adoption, is the priority for now.

e-CNY around the world

The term “e-CNY” commonly refers to this domestic, retail payments infrastructure. However, much of the discussion in Washington references the cross-border, wholesale capabilities that the PBOC has been testing publicly for a while now. The PBOC participates in Project mBridge, a joint experiment with the Hong Kong Monetary Authority, the Bank of Thailand, the Central Bank of the UAE, and the Bank for International Settlements (BIS), the purpose of which is to create a common infrastructure across borders to facilitate real-time, low-cost transaction settlement. Last October, the project successfully conducted 164 transactions in collaboration with 20 banks across the four countries, settling a total of $22 million. Instead of relying on correspondent banking networks, banks were able to link with their foreign counterparts directly to conduct payments, FX settlements, redemptions, and issuance across e-HKD, e-AED, e-THB, and e-CNY. Interestingly, almost half of all transactions were in e-CNY: approximately $1,705,453 issued, $3,410,906 used in payments and FX settlements, and $6,811,812 redeemed. Both issuance and redemption were highest in e-CNY, which the BIS attributed to the automatic integration of the retail e-CNY system and the higher share of the RMB in regional trade settlements.
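The settlement model described above, in which banks exchange central bank liabilities directly on a shared platform, can be pictured as a payment-versus-payment exchange: both legs of an FX trade either post together or not at all. The sketch below is a conceptual toy with hypothetical banks, balances, and amounts; it is not a description of mBridge’s actual architecture.

```python
# Toy payment-versus-payment (PvP) settlement on a shared ledger.
# Hypothetical participants, balances, and amounts; not mBridge's actual design.

ledger = {
    ("bank_cn", "e-CNY"): 1_000_000,
    ("bank_ae", "e-CNY"): 0,
    ("bank_ae", "e-AED"): 500_000,
    ("bank_cn", "e-AED"): 0,
}

def settle_fx(ledger, leg_a, leg_b):
    """Post both legs of an FX trade, or reject the trade entirely."""
    # Check both legs before moving anything, so settlement is all-or-nothing.
    for payer, _, currency, amount in (leg_a, leg_b):
        if ledger.get((payer, currency), 0) < amount:
            raise ValueError(f"{payer} lacks {amount} {currency}; trade rejected")
    for payer, payee, currency, amount in (leg_a, leg_b):
        ledger[(payer, currency)] -= amount
        ledger[(payee, currency)] = ledger.get((payee, currency), 0) + amount

# bank_cn pays 700,000 e-CNY; bank_ae pays 350,000 e-AED in return.
settle_fx(
    ledger,
    ("bank_cn", "bank_ae", "e-CNY", 700_000),
    ("bank_ae", "bank_cn", "e-AED", 350_000),
)
print(ledger)
```

The point of the toy is structural: on a common platform the two banks settle against each other directly, with no correspondent bank in the middle.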

Analysts have characterized wholesale cross-border arrangements like mBridge as an effort toward de-dollarization and internationalization of the RMB. The e-CNY, much like its physical counterpart, faces liquidity constraints due to capital controls on offshore transactions and holdings. This was reflected in the mBridge experiment: one of the main pieces of feedback from participants was the need for greater liquidity from FX market makers and other liquidity providers to improve the FX transaction capabilities of the platform. Additionally, even if the e-CNY were to become freely traded in the future, it could lead to significant appreciation of the RMB and balance-of-payments issues for the PBOC. This is likely not a desirable outcome for the PBOC, which is why currency arrangements like mBridge can have only a limited impact on the role of the dollar.

If winning the currency competition is an unlikely short-term objective of the PBOC, what has raised national security concerns regarding the e-CNY? China has long used the rhetoric of international cooperation and “do no harm” principles in its cross-border CBDC engagements. However, these cross-border experiments require months of preparation and coordination between central and commercial banks to ensure that regulatory and jurisdictional requirements are aligned. They highlight the need for legal pathways and standards for data sharing, privacy, and risk frameworks between heretofore unsynchronized jurisdictions. Similarly, the experiments rely on technological prototypes that interact with the existing domestic e-CNY framework, creating de facto technical standards for cross-border transactions that are likely to be replicated by other jurisdictions. What can potentially emerge is a set of technical and regulatory standards built in the image of the e-CNY, and with that comes the baggage of surveillance and unauthorized access by the Chinese state. The mBridge platform, for instance, can be utilized for domestic CBDC infrastructure if required by any jurisdiction.

Already, Chinese company Red Date Technology, which (along with China Mobile, UnionPay, the State Information Center, and others) is behind the creation of the Blockchain Service Network (BSN), a blockchain infrastructure service that connects different payment networks, has launched a similar product under the name Universal Digital Payments Network. At an event at the World Economic Forum in January 2023, it pitched the project to emerging markets experimenting with CBDCs and stablecoins, as the project aims to build an interconnected global architecture in the vein of BSN’s ambitions.

Technological and regulatory replication by country blocs, enabled by Chinese state and private actors, could create a parallel system of financial networks outside of the dollar, especially where there is a high volume of transactions. The United States relies on the dollar’s dominance to establish global anti-money laundering standards and achieve effective and broad implementation of financial sanctions. The emergence of alternate currency-blocs—enabled by e-CNY-like technology—has the potential to chip away at the primacy of the dollar in global finance and trade, as the dollar will not be the only available option. 

Therefore, even though the development of the e-CNY is unlikely to lead to a broader share for the RMB as a payment or reserve currency, replication of the e-CNY’s technical and regulatory model could advance payments infrastructure that is not only incompatible with dollar-based systems but also exacerbates the privacy and surveillance concerns of the retail e-CNY by exporting the problem to the world. China’s domestic motivations of greater control and surveillance, therefore, are intertwined with its global ambitions, and the consequences will be dire in the absence of a competing, privacy-preserving, dollar-enabling payments infrastructure.


Ananya Kumar is the associate director for digital currencies with the GeoEconomics Center.

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.


The post Practice makes perfect: What China wants from its digital currency in 2023 appeared first on Atlantic Council.

Russia’s invasion of Ukraine is also being fought in cyberspace https://www.atlanticcouncil.org/blogs/ukrainealert/russias-invasion-of-ukraine-is-also-being-fought-in-cyberspace/ Thu, 20 Apr 2023 16:30:09 +0000 https://www.atlanticcouncil.org/?p=638524 While the war in Ukraine often resembles the trench warfare of the twentieth century, the battle for cyber dominance is highly innovative and offers insights into the future of international aggression, writes Vera Mironova.

The post Russia’s invasion of Ukraine is also being fought in cyberspace appeared first on Atlantic Council.

The Russian invasion of Ukraine is the first modern war to feature a major cyber warfare component. While the conventional fighting in Ukraine often resembles the trench warfare of the early twentieth century, the evolving battle for cyber dominance is highly innovative and offers important insights into the future of international aggression.

The priority for Ukraine’s cyber forces is defense. This is something they have long been training for and are excelling at. Indeed, Estonian PM Kaja Kallas recently published an article in The Economist claiming that Ukraine is “giving the free world a masterclass on cyber defense.”

When Russian aggression against Ukraine began in 2014 with the invasion of Crimea and eastern Ukraine’s Donbas region, Russia also began launching cyber attacks. One of the first attacks was an attempt to falsify the results of Ukraine’s spring 2014 presidential election. The following year, an attempt was made to hack into Ukraine’s electricity grid. In 2017, Russia launched a far larger malware attack against Ukraine known as NotPetya that Western governments rated as the most destructive cyber attack ever conducted.

In preparation for the full-scale invasion of 2022, Russia sought to access Ukraine’s government IT platforms. One of the goals was to obtain the personal information of Ukrainians, particularly those working in military and law enforcement. These efforts, which peaked in January 2022 in the weeks prior to the invasion, failed to seriously disrupt Ukraine’s state institutions but provided the country’s cyber security specialists with further important experience. “With their nonstop attacks, Russia has effectively been training us since 2014. So by February 2022, we were ready and knew everything about their capabilities,” commented one Ukrainian cyber security specialist involved in defending critical infrastructure who was speaking anonymously as they were not authorized to discuss details.


Ukrainian specialists say that while Russian hackers previously tried to disguise their origins, many now no longer even attempt to hide their IP addresses. Instead, attacks have become far larger in scale and more indiscriminate in nature, with the apparent goal of seeking to infiltrate as many systems as possible. However, the defenders of Ukraine’s cyberspace claim Russia’s reliance on the same malware and tactics makes it easier to detect them.

The growing importance of digital technologies within the Ukrainian military has presented Russia with an expanding range of high-value targets. However, efforts to access platforms like Ukraine’s Delta situational awareness system have so far proved unsuccessful. Speaking off the record, Ukrainian specialists charged with protecting Delta say Russian hackers have used a variety of different methods. “They tried phishing attacks, but this only resulted in our colleagues having to work two extra hours to block them. They have also created fake interfaces to gain passwords and login details.”

Ukrainian security measures that immediately detect and block unauthorized users requesting information have proved effective for the Delta system and similar platforms. Russian hackers have had more success targeting the messaging platforms and situation reports of various individual Ukrainian military units. However, due to the fast-changing nature of the situation along the front lines, this information tends to become outdated very quickly and therefore is not regarded as a major security threat.
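The defensive pattern described here, immediately detecting and blocking unauthorized or anomalous information requests, is generic enough to sketch. The snippet below is a simplified illustration of that general technique and not a description of how Delta or any Ukrainian system is actually built; the roles, resources, and thresholds are hypothetical.

```python
import time
from collections import defaultdict, deque

# Generic request-gating sketch: block requests that are unauthorized or
# anomalously frequent, and log every decision for review. Hypothetical values.

AUTHORIZED = {("unit_ops", "situation_report"), ("unit_ops", "map_layer")}
MAX_REQUESTS_PER_MINUTE = 30

request_history = defaultdict(deque)    # user -> timestamps of recent requests
audit_log = []

def handle_request(user, role, resource, now=None):
    now = time.time() if now is None else now

    # Rule 1: the requesting role must be explicitly authorized for the resource.
    if (role, resource) not in AUTHORIZED:
        audit_log.append((now, user, resource, "BLOCKED: not authorized"))
        return None

    # Rule 2: throttle bursts that look like automated scraping of reports.
    history = request_history[user]
    while history and now - history[0] > 60:
        history.popleft()
    if len(history) >= MAX_REQUESTS_PER_MINUTE:
        audit_log.append((now, user, resource, "BLOCKED: rate anomaly"))
        return None

    history.append(now)
    audit_log.append((now, user, resource, "ALLOWED"))
    return f"serving {resource} to {user}"

print(handle_request("officer_17", "unit_ops", "situation_report"))
print(handle_request("officer_17", "unit_ops", "personnel_records"))  # blocked: not authorized
```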

Ukraine’s cyber efforts are not exclusively focused on defending the country against Russian attack. Ukrainians have also been conducting counterattacks of their own against Russian targets. One of the challenges they have encountered is the lower level of digitalization in modern Russian society compared to Ukraine. “We could hack into Russia’s railway IT systems, for example, but what information would this give us? We would be able to access train timetables and that’s all. Everything else is still done with paper and pens,” notes one Ukrainian hacker.

This has limited the scope of Ukrainian cyber attacks. Targets have included the financial data of Russian military personnel via Russian banks, while hackers have penetrated cartographic and geographic information systems that serve as important infrastructure elements of the Ukraine invasion. Ukrainian cyber attacks have also played a role in psychological warfare efforts, with Russian television and radio broadcasts hacked and replaced with content revealing suppressed details of the invasion including Russian military casualties and war crimes against Ukrainian civilians.

While Ukraine’s partners throughout the democratic world have provided the country with significant military aid, the international community has also played a role on the cyber front. Many individual foreign volunteers have joined the IT Army of Ukraine initiative, which counts more than 200,000 participants. Foreign hacker groups are credited with conducting a number of offensive operations against Russian targets. However, the large number of people involved also poses significant security challenges. Some critics argue that the practice of making Russian targets public globally provides advance warning and undermines the effectiveness of cyber attacks.

Russia has attempted to replicate Ukraine’s IT Army initiative with what they have called the Cyber Army of Russia, but this is believed to have attracted fewer international recruits. Nevertheless, Russia’s volunteer cyber force is thought to have been behind a number of attacks on diverse targets including Ukrainian government platforms and sites representing the country’s sexual minorities and cultural institutions.

The cyber front of the Russo-Ukrainian War is highly dynamic and continues to evolve. With a combination of state and non-state actors, it is a vast and complex battlefield full of gray zones and new frontiers. Both combatant countries have powerful domestic IT industries and strong reputations as hacker hubs, making the cyber front a particularly fascinating aspect of the wider war. The lessons learned are already informing our knowledge of cyber warfare and are likely to remain a key subject of study in the coming decades for anyone interested in cyber security.

Vera Mironova is an associate fellow at Harvard University’s Davis Center and author of Conflict Field Notes. You can follow her on Twitter at @vera_mironov.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


The post Russia’s invasion of Ukraine is also being fought in cyberspace appeared first on Atlantic Council.

Critical infrastructure cybersecurity prioritization: A cross-sector methodology for ranking operational technology cyber scenarios and critical entities https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/critical-infrastructure-cybersecurity-prioritization/ Wed, 19 Apr 2023 13:01:19 +0000 https://www.atlanticcouncil.org/?p=636290 As critical infrastructure becomes increasingly targeted by malicious adversaries, how can we effectively prioritize criticality?

The post Critical infrastructure cybersecurity prioritization: A cross-sector methodology for ranking operational technology cyber scenarios and critical entities appeared first on Atlantic Council.


Executive summary

“Cyber policy today has created a world in which seemingly everything non-military can be held at risk—hospitals, trains, dams, energy, water—and nothing is off limits.”1

Policy experts have long looked to other fields to gain a better understanding of cyber issues—natural disasters, terrorism, insurance and finance, and even nuclear weapons—due to the “always/never” rule. The always/never concept stipulates that weapons must always work correctly when they are supposed to and never be launched or detonated by accident or sabotage. The application of the always/never rule to process control systems across an increasingly digitized critical infrastructure landscape is incredibly difficult to master.

Threading the tapestry of risk across critical infrastructure requires a more granular and purposeful model than the current approach to classifying critical infrastructure can deliver. Failing to contextualize the broad problem set that is critical infrastructure cybersecurity risks increasing the cost of compliance-based cybersecurity to the point that small- and medium-sized businesses either cannot afford the expense or expect the government to provide managed cybersecurity services for designated concentrations of risk across multiple sectors—an imprudent, expensive, and unsustainable outcome.

Informing decision-makers requires deeper analysis of critical infrastructure targets, drawing on available open-source intelligence, criticality and vulnerability data, the degradation of operations by cyber means, and mean time to recover from cyber impacts; that analysis does not yet exist at scale. This paper offers an initial step to focus on cyber-physical operations, discussing the limitations of current methods to prioritize across critical infrastructure cybersecurity and outlining a methodology for prioritizing scenarios and entities across sectors and local, state, and federal jurisdictions.

This methodology has two primary use cases:

  1. It provides a way for asset owners to rank relevant cyber scenarios, enabling a single entity, organization, facility, or site in scope to prioritize a tabletop exercise scenario that maps cyber-physical impacts from control failures to localized cascading impacts.
  2. It generates a standardized priority score, which can be used by government and industry stakeholders to compare entities, locations, facilities, or sites within any jurisdiction (by geography, sector, regulatory body, etc.)—e.g., to compare 1,000 entities in a single sector or to compare a prison to a water utility or a rail operator to a hospital.
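As a rough illustration of the second use case, a standardized priority score can be produced by normalizing a handful of criteria onto a common scale and combining them with weights, so that a water utility, a rail operator, and a hospital land on the same axis. The criteria, weights, and entity values below are hypothetical placeholders and are not the weighting this paper proposes.

```python
# Illustrative cross-sector priority scoring. Criteria, weights, and entity
# values are hypothetical placeholders, not the paper's actual methodology.

WEIGHTS = {
    "population_affected": 0.35,   # people dependent on the entity's output
    "recovery_time_days":  0.25,   # mean time to restore operations
    "cascading_sectors":   0.25,   # downstream sectors that depend on the entity
    "cyber_exposure":      0.15,   # 0-1 judgment of the OT/ICS attack surface
}

entities = {
    "regional_water_utility": {"population_affected": 800_000, "recovery_time_days": 14,
                               "cascading_sectors": 5, "cyber_exposure": 0.7},
    "freight_rail_operator":  {"population_affected": 2_000_000, "recovery_time_days": 7,
                               "cascading_sectors": 8, "cyber_exposure": 0.5},
    "county_hospital":        {"population_affected": 150_000, "recovery_time_days": 3,
                               "cascading_sectors": 2, "cyber_exposure": 0.8},
}

def priority_scores(entities, weights):
    """Min-max normalize each criterion across entities, then take a weighted sum."""
    scores = {name: 0.0 for name in entities}
    for criterion, weight in weights.items():
        values = {name: attrs[criterion] for name, attrs in entities.items()}
        low, high = min(values.values()), max(values.values())
        for name, value in values.items():
            normalized = 0.0 if high == low else (value - low) / (high - low)
            scores[name] += weight * normalized
    return dict(sorted(scores.items(), key=lambda item: item[1], reverse=True))

for name, score in priority_scores(entities, WEIGHTS).items():
    print(f"{name}: {score:.2f}")
```

The same scaffolding works whether the entities sit in one sector or many; what changes is how defensible the chosen criteria and weights are.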

Introduction

The Department of Homeland Security’s National Incident Management System includes five components: plan, organize and equip, train, exercise, and evaluate and improve.2 Cybersecurity conversations are stuck in a limited cycle of buy a product, run a tabletop exercise, and check compliance boxes, often skipping key steps for organization, failing to exercise function-specific responsibilities, and almost never exercising to failure like a real emergency might require. Collectively, cyber-physical security requires new strategic and tactical thinking to better inform decision-makers in cyber policy, planning, and preparedness.

Critical infrastructure sectors and operations depend on equipment, communications, and business operations to supply goods, services, and resources to populations and interdependent commercial industries each day around the clock. Over the last decade, distributed operations, including manual and analog components that were originally not accessible via the internet, have increasingly become digitized and connected as networked technology connects systems to systems, sites to sites, and people to everything.

Owners and operators of critical infrastructure are responsible for securing their operations and processes from the inside out according to assorted regulatory and compliance requirements within and across each sector. The U.S. government is responsible for protecting citizens, national security, and the economy. Despite the tactical understanding of critical infrastructure equipment, communications, and business operations, critical infrastructure cybersecurity remains ambiguous. Several agencies across the U.S. government are working together to develop cybersecurity performance standards, baseline metrics, incident reporting mechanisms, information sharing tools, and liability protections.

Nevertheless, critical infrastructure cybersecurity presents a massive needle-in-a-haystack problem. Where information technology (IT) sees many vulnerabilities, likely to be exploited in similar ways across mainstream and ubiquitous systems, operational technology (OT) security is often a proprietary, case-by-case distinction. The oversimplification of their differences leads to a contextual gap when translating roles and responsibilities into tasks and capabilities for government and business continuity and disaster recovery for industry.

Essential critical infrastructure sectors

Source: cisa.gov

What is eating critical infrastructure is not a talent gap, the convergence of IT and OT, or even the lack of investment in cybersecurity products and solutions. It is the improbability of determining all possible outcomes from single points of dependence and the failure that exists between and beyond business continuity, physical equipment, and secure data and communications.

One consistently repeated recommendation from high-level decision-makers is that organizations, entities, and/or facilities carry out tabletop exercises and scenario planning to prepare for cyber situations that could have disruptive and devastating outcomes, especially those that threaten human life and national and economic security. However, there is no standardized way to develop or run these exercises or to decide which scenarios to simulate for teams based on size, location, scope, operational specifics, security maturity, and resource capacity.

All of it is critical, so what matters?

“Systems of economic exchange that promote patterns of civil society depend on the sustainable availability and equitable use of natural and social resources necessary for constructing a satisfying and ‘satisficing’ life by present and future generations.”3

Critical infrastructure is critical not only because the disruption, degradation, or destruction of entities/operations will impact life, the economy, or national security, but also because critical infrastructure sectors form the backbone of U.S. civil society. Some critical infrastructure sectors are also transactionally dependent on one another. The water sector depends heavily on operations and outputs from the energy, transportation, finance, and manufacturing sectors. Transportation depends on operations and outputs from the energy, finance, communications, and manufacturing sectors, and so on.4

There are indicators to suggest that government will likely continue tasking industry with cybersecurity requirements. Recent European Commission legislation sheds light on the due diligence of cybersecurity activities. The Network and Information Security 2 directive suggests that entities assess the proportionality of their risk management activities according to their individual degree of exposure to risks, size, likelihood and severity of incidents, and the societal and economic impacts of potential incidents.

According to retired National Cyber Director Chris Inglis, the Biden administration’s National Cybersecurity Strategy drills into “affirmative intentionality,” asking industry to raise the bar on cyber responsibility, liability, and resilience building. This comes at a time when best practices are numerous but implementation specifics are scarce. The strategy is positioned to expand mandated policies at sector risk management agencies and to double down on broader information sharing, combined with international law enforcement, to quell undeterred cyber criminals and threat-actor groups.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) uses the National Critical Functions Framework to define and assess critical functions across sectors. Critical functions, including the fifty-five published by CISA, are defined as “vital to the security, economy, and public health and safety of the nation.”5 Critical assets are prioritized as those which “if destroyed or disrupted, would cause national or regional catastrophic effects.”6

According to a review by the U.S. Government Accountability Office, this approach has fallen short in three major ways: Stakeholders found it difficult to prioritize the framework given competing planning and operations considerations, struggled with implementing the goals and strategies, and required more tailored information to use the framework in a meaningful way. As a result, only fourteen of the fifty-six states and territories have provided updates to the National Critical Infrastructure Prioritization Program since 2017.7

Entities determined to be the most essential of all critical infrastructure are categorized as Section 9 entities, defined as “critical infrastructure where a cybersecurity incident could reasonably result in catastrophic regional or national effects on public health or safety, economic security, or national security.”8 A recommended definition of systemically important critical infrastructure (SICI) in proposed legislation suggests the secretary of the U.S. Department of Homeland Security could declare a facility, system, or asset as “systemically important critical infrastructure” if the compromise, damage, and/or destruction of that entity would result in the following:

  • The interruption of critical services, including the energy supply, water supply, electricity grid, and/or emergency services, that could cause mass casualties or lead to mass evacuations.
  • The perpetuation of catastrophic damage to the U.S. economy, including the disruption of the financial market, disruption of transportation systems, and the unavailability of critical technology services.
  • The degradation and/or disruption of defense, aerospace, military, intelligence, and national security capabilities.
  • The widespread compromise or malicious intrusion of technologies, devices, or services across the cyber ecosystem.9

Regardless of scoping for SICI, there is a lack of understanding about the inventory of industrial assets and technologies that are in use across critical sectors today and the configuration contingencies for risk management for that inventory. There is a similar absence of holistic awareness about the realistic, cascading impacts or the fallout analysis for entities with varying characteristics and demographics.

Operational technology

OT and industrial control system (ICS) technologies include a wide range of machines and equipment, such as pumps, compressors, valves, turbines and similar equipment, interface computers and workstations, programmable logic controllers, and many diagnostics, safety, and metering and monitoring systems that enable or report the status of variables, processes, and operations.

Supervisory control and data acquisition (SCADA) systems encompass operations management and supervisory control of local or physical OT controls and are programmed and monitored to direct one or more processes operating at scale—i.e., machines and devices command process controls that are involved in directing and manipulating physical sensors and actuators.

Sectors operating OT and ICS on a daily basis include oil and gas, power and utilities, water treatment and purification facilities, manufacturing, transportation, hospitals, and connected buildings. OT devices tend to be legacy devices with fifteen- to twenty-year lifecycles and beyond, operating 24-7 with rarely scheduled or available maintenance windows for software patches and updates. These devices often lack robust security controls by design and feature proprietary communication protocols and varying connectivity and networking requirements.

OT cybersecurity aims to prevent attacks that target process control equipment that reads data, executes logic, and sends outputs back to the machine or equipment. However, IT cybersecurity practices, analytics, forensics, and detection tools do not match the unique data and connectivity requirements and various configurations of OT environments.

A single operation or location might have more than a dozen different types of vendor technologies—SCADA, distributed control systems, programmable logic controllers, remote terminal units, human-machine interfaces, and safety instrumented systems—running with proprietary code and industry specific protocols. Prioritizing availability and data in motion, each asset and system will have unique parameters for identification and communication on a network, making it nearly impossible to manually log granular session- and packet-level details about each asset or system.

Attacks involving OT and ICS come predominately in two forms. Some are tailored specifically for a single target with the intent of establishing prolonged, undetected access to manipulate view and/or control scenarios that could result in physical disruption or destruction. Others involve “living off the land” techniques that target common denominators across organizations based on opportunistic activities, such as using established social engineering; tactics, techniques and procedures (TTPs); credential harvesting; and the purchase of intelligence and access from threat actors and groups conducting continuous reconnaissance and acting as initial access brokers.

Risks and vulnerabilities in operational technology and critical infrastructure

It is increasingly difficult to contextualize critical infrastructure both operationally—based on specific products, services, resources, processes, and technologies—and functionally—based on centralized versus distributed risks, dependencies, and interdependencies. Attempts at contextualization have led to a debate between asset-specific (things, such as technologies, systems, and equipment) versus function-specific (actions, such as connecting, distributing, managing, and supplying) cybersecurity prioritization. This dichotomy is also characterized as “threats from” a threat actor and their capabilities to impact functions, instead of “threats to” specific assets as explained in product-specific vulnerability disclosures.10

Today there are thousands of known product vulnerabilities in OT and ICS systems from each vendor that produces machines and equipment in those categories. While each vulnerability is published with an associated common vulnerability score, that designated severity alone does not reveal how serious the vulnerability will be in the context of a single entity or organization’s risk profile. Vulnerabilities must be compared with operational status to understand their significance and to prioritize the actions and procedures that will reduce the severity of the vulnerability’s potential impacts.
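To make this point concrete, the sketch below (Python) shows one way a published severity score might be re-weighted against operational context (asset criticality, network exposure, and whether a maintenance window realistically exists) rather than used as-is. The asset names, fields, and weights are hypothetical illustrations, not part of any published scoring standard.

    # Hypothetical sketch: contextualizing a published vulnerability score
    # against operational status. Asset names, fields, and weights are
    # illustrative assumptions, not part of the methodology in this paper.
    from dataclasses import dataclass

    @dataclass
    class AssetContext:
        name: str
        cvss_base: float    # published severity score, 0-10
        criticality: float  # operational importance to the process, 0-1 (assumed)
        exposure: float     # reachability from other networks, 0-1 (assumed)
        patch_window: bool  # is a maintenance window realistically available?

    def contextual_priority(asset: AssetContext) -> float:
        """Blend the published score with operational context (illustrative weights)."""
        score = asset.cvss_base * (0.5 + 0.25 * asset.criticality + 0.25 * asset.exposure)
        if not asset.patch_window:
            score *= 1.2  # assets that cannot be patched soon rise in priority
        return round(score, 2)

    assets = [
        AssetContext("engineering-workstation", cvss_base=9.8, criticality=0.4, exposure=0.9, patch_window=True),
        AssetContext("plc-boiler-3", cvss_base=7.5, criticality=0.9, exposure=0.3, patch_window=False),
    ]
    for a in sorted(assets, key=contextual_priority, reverse=True):
        print(a.name, contextual_priority(a))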

Unfortunately, “threats from” actors cannot easily be mapped to the exploitation of threats to OT and ICS. The assets-versus-functions distinction that is commonplace in the current debate over critical infrastructure typically leads to a hyperfocus on either systems impact analysis (asset-specific) or business continuity (function-specific) outcomes and limits holistic fallout analysis for four main reasons:

  1. The plethora of existing product vulnerabilities in critical OT do not translate directly into manipulation of view or manipulation of control scenarios.
  2. The severity scoring for vulnerabilities is too vague to determine cascading impacts or relevant fallout analysis for a specific facility or operation.
  3. The loss of function outcomes and consequences are often not well scoped in terms of realistic cyber scenarios that would lead to and produce cascading impacts.
  4. Cyber incidents that impact physical processes are less repeatable than IT attacks, and cyber threat intelligence on threat actors and TTPs that specifically target OT and ICS is less widely available because there are fewer known and analyzed incidents.

Many OT and ICS systems have known vulnerabilities and unsophisticated, yet complex, designs; the security complexity is in the attack path or “kill chain,” targeting simplistic systems that can be configured in a myriad of ways. Critical infrastructure entities can be targeted by threat actors to exploit and extort their IT and OT or ICS systems, but OT and ICS systems—traditionally designed with mission state and continuity in mind—also risk having their native functionality targeted and hijacked in cyber scenarios.11

Risks to cyber-physical systems include:

  • the use of legacy technologies with well-known vulnerabilities
  • the widespread availability of technical information about control systems
  • the connectivity of control systems to other networks
  • constraints on the use of existing security technologies and practices
  • insecure remote connections
  • a lack of visibility into network connectivity
  • complex and just-in-time supply chains
  • human error, neglect, and accidents.

If the core of cybersecurity is a calculation of threats, vulnerabilities, and likelihood, critical infrastructure sectors and technologies represent an exponential number of probabilistic outcomes for cyber scenarios with physical consequences. Despite increased awareness, pressure, and oversight from governments, boards, and insurance providers, the scale and complexity of the problem set quickly intensify given the entanglement of

  • similar, but not identical, industries and technologies
  • inconsistent change management and documentation
  • reliance on third-party systems and components
  • external threat actors and TTPs
  • risk management and security best practices
  • compensating controls and security policy enforcement
  • compliance, standards, and regulations.

Table for potential escalation of consequences

This complexity results in four types of general OT and ICS cyber scenarios in critical infrastructure. The two most commonly discussed, but not necessarily the most commonly experienced, are when an adversary accesses an OT environment and either intentionally causes effects within the scope of their objectives or causes unintended consequences beyond that scope. These general scenarios can be further dissected and understood by referencing the specific attack paths and impacts outlined in the MITRE Corporation’s ATT&CK Matrix for ICS.12

A scoring methodology for cross-sector entity prioritization

Today, critical infrastructure cyber protection spans sixteen different sectors, with no way to compare a standardized risk metric from a municipal water facility in Wyoming with a large commercial energy provider in Florida or a rural hospital in Texas with a train operator in New York. This section proposes a scoring methodology for cross-sector entity prioritization using qualitative scenario planning and quantitative indicators for severity scoring, assessing the potential for scenarios to cause public panic and to stress or overcome local, state, and federal response capacity.

Prioritizing critical infrastructure cybersecurity requires robust planning—comprehensive in scope, yet flexible enough to account for contingencies. Tasha Jhangiani and Graham Kennis note that “a risk-based approach to national security requires that the U.S. must prioritize its resources in areas where it can have the greatest impact to prevent the worst consequences.”13 Owners and operators of critical infrastructure have relayed to the U.S. government a need for more “regionally specific information” to address cyber threats.14

A recent report on the ownership of various utilities in the United States found that “a better indicator of how to approach [cyber] regulations is to look at how many people a utility services,” a direct indicator for fallout analysis when OT systems are impacted.15 Where progress should start can be determined by expanding fallout analysis to identify the most at-risk environments across any given jurisdiction regardless of sector, location, ownership, or cybersecurity policy enforcement.

Scoring entities according to the prioritization methodology outlined below requires a well-executed thought exercise. The results are a way to determine the most consequential scenarios for facilities and operations, as well as the most at-risk facilities and operations within a given jurisdiction. The scoring can be performed at a local, state, or federal level. This type of prioritization offers an accessible way for entities to grapple with cybersecurity concerns in a local and regional context. The ranking also allows prioritization from an effects-based (impacts), rather than a means-based (capabilities), approach.

This methodology has two primary use cases:

  1. The scoring matrix provides a way to rank and prioritize relevant cyber scenarios for a single entity, organization, facility, or site in scope.
    a. The ranking, based on weighted scores, will allow any entity, organization, facility, or site to choose scenarios to exercise based on a choice of two real-world impacts (impact A, impact B) or to assess both impacts when choosing a tabletop scenario.
    i. This ranking has the potential to prioritize scenarios that will cause public panic and/or overwhelm response resources over scenarios that simply have a higher cyber severity rating (see Table 1).
  2. The standardized priority score provides an overall priority score for the entity, organization, facility, or site.
    a. This score can be used to compare and rank different entities, locations, facilities, or sites within a given jurisdiction—city or local, state, federal, sector-specific, etc.

This methodology can be incorporated into assessments, training, and tabletop exercises in the planning phase of cyber risk mitigation and incident response. It can also be used by leaders to prioritize multiple critical infrastructure sectors or locations in their jurisdiction from a cybersecurity perspective.

How to use the methodology

Prioritizing cybersecurity efforts across critical infrastructure can borrow from the suggested fallout analysis applied to the public and local response capacity of a given target. When a weapon of mass destruction is used as an act of terror, according to the 2002 Federal Emergency Management Agency’s Interim Planning Guide for State and Local Governments, “Managing the Emergency Consequences of Terrorist Incidents,” there are two additional possible outcomes:16

  • Impact A—the creation of chaos, confusion, and public panic
  • Impact B—increased stress on local, state, and federal response resources.

Weighting cyber severity scores for scenarios based on impact A and impact B is essential, as each scenario will affect the level of public panic and available resources differently depending on the sector and that sector’s assets and functions, location, and region. For example, a hospital ransomware attack may not cause widespread public panic in an urban area, but it could overwhelm response resources in a rural one. Conversely, an attack on the financial sector may result in public panic, but it may be less likely to overwhelm response resources.

An IT system interruption might cause business disruptions and downtime that results primarily in public panic, while manipulation of control at a water facility could have major impacts on both public panic and response resources. The 2021 Colonial Pipeline ransomware incident inadvertently shut down OT and ICS systems and led to unforeseen local and regional impacts. The scoring methodology used here works to manage uncertainty, identifying four essential components in consultation with informed cybersecurity experts, owners and operators, and local and regional stakeholders.

  1. Scenario planning: Six scenarios will be outlined according to their potential to result in either manipulation of view (three scenarios) or manipulation of control (three scenarios) outcomes for OT.17
  2. Severity scoring: The scoring will be based on cybersecurity severity (see Tables 1 and 2).
  3. Weighting and ranking scenarios: The scenarios will be weighted and ranked based on their potential to cause public panic and/or to stress or overwhelm response capacity.
  4. Final scoring: The standardized priority score will be calculated for the entire entity/operation.

The methodology complements the SICI definition of critical infrastructure outlined above and can also be used to enhance the following CISA recommendations:18

  • develop primary, alternate, contingency, and emergency plans to mitigate the most severe effects of prolonged disruptions, including the ability to operate manually without the aid of control systems in the event of a compromise
  • ensure redundancies of critical components and data systems to prevent single points of failure that could produce catastrophic results
  • conduct exercises to provide personnel with effective and practical mechanisms to identify best practices, lessons learned, and areas for improvement in plans and procedures.

The resulting scenarios could further be compared using CISA’s National Cyber Incident Scoring System, designed to provide a repeatable and consistent mechanism for estimating the risk of an incident. In the future, this methodology can potentially be used together with a Diamond Model of Intrusion Analysis applied to cyber-physical incidents to better understand how adversaries demonstrate and use certain capabilities and techniques against critical infrastructure targets. This may allow for better nation-state level analysis and more robust information for decision-makers who struggle to understand the likelihood of attacks against specific operations or facilities today.

Analysis and calculations

Step 1: Scenario planning: Six scenarios will be outlined for their potential to result in either manipulation of view (three scenarios) or manipulation of control (three scenarios) outcomes for OT.19

Scenarios can include incidents in which the threat, vulnerability, or exploitation originate in the IT/corporate or enterprise side of operations. First, the top three most realistic manipulation of view scenarios for a target are identified based on impacts to OT, with severity indicators outlined in Table 1. Then, the top three most realistic manipulation of control scenarios for a target are identified based on impacts to OT, with indicators outlined in Table 1.

Table 1: Severity indicators

Qualitative assessment to determine severity score in Table 2

SOURCE: Adapted from the Center for Regional Disaster Resilience “Washington Cybersecurity Situational Awareness Concept of Operations (CONOPS)” guidance document.20

Step 2: Severity scoring: The scoring will be based on cybersecurity severity indicators (see Table 1). Each scenario is scored based on a severity rating in Table 2 (scores for each scenario range from 10 to 50).

Table 2: Severity rating

(does not have to equal 100)


SOURCE: Adapted from the Center for Regional Disaster Resilience “Washington Cybersecurity Situational Awareness Concept of Operations (CONOPS)” guidance document.21
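A minimal sketch of this scoring step follows. Because Tables 1 and 2 appear here only as placeholders, the indicator names and the way they map onto the 10-50 range are assumptions: the sketch simply sums five indicator ratings of 2-10 each, which lands in the range noted above. The real indicator definitions from the source tables should be substituted.

    # Hypothetical sketch of Step 2 (severity scoring). Indicator names and the
    # 2-10 scale per indicator are assumptions standing in for Tables 1 and 2;
    # only the 10-50 range per scenario comes from the text.
    INDICATORS = ("scope", "duration", "safety_impact", "recoverability", "data_integrity")

    def severity(ratings: dict) -> int:
        """Return a 10-50 severity rating as the sum of five indicator ratings."""
        missing = set(INDICATORS) - set(ratings)
        if missing:
            raise ValueError(f"missing indicator ratings: {missing}")
        if not all(2 <= ratings[i] <= 10 for i in INDICATORS):
            raise ValueError("each indicator rating is assumed to fall between 2 and 10")
        return sum(ratings[i] for i in INDICATORS)

    # One mock manipulation-of-control scenario rated by a planning team
    moc_3 = {"scope": 10, "duration": 8, "safety_impact": 10, "recoverability": 8, "data_integrity": 10}
    print("MOC 3 severity:", severity(moc_3))  # 46, within the 10-50 range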

Step 3: Weighting and ranking scenarios: The scenarios will be weighted and ranked based on their potential to cause public panic and/or to stress or overwhelm response capacity.

The scenarios will be ranked based on impact A and impact B. All six scenarios will be ranked separately by both likelihood of causing public panic and ability to overwhelm local response resources (see Table 3).

Table 3: Weighting likelihood to cause public panic and to overwhelm resources

(total weights must = 1)

Step 4: Final scoring: The standardized priority score will be calculated for the entire entity/operation. The weighted scores for both impact A and impact B are combined and the standardized priority score is calculated (see Figure 4).
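The sketch below chains Steps 3 and 4 for a mock set of six scenarios. All severities and weights are invented; the structural constraints taken from the methodology are that the panic weights and the resource weights each sum to 1 across the six scenarios (Table 3) and that scores follow the formulas shown in Figures 2 through 4. The invented numbers are chosen so the rankings echo those figures (MOC 3 leads on severity and impact A, MOC 2 leads on impact B and the combined score). Treating the entity-level standardized priority score as the sum of the combined scenario scores is an assumption, since the exact aggregation is not spelled out here.

    # Hypothetical sketch of Steps 3-4. Severities and weights are invented;
    # only the "weights sum to 1" rule (Table 3) and the Figure 2-4 formulas
    # come from the methodology. Summing combined scores into an entity-level
    # standardized priority score is an assumption.
    severity = {"MOV 1": 24, "MOV 2": 30, "MOV 3": 18, "MOC 1": 34, "MOC 2": 40, "MOC 3": 46}
    panic    = {"MOV 1": 0.10, "MOV 2": 0.15, "MOV 3": 0.05, "MOC 1": 0.15, "MOC 2": 0.25, "MOC 3": 0.30}
    resource = {"MOV 1": 0.05, "MOV 2": 0.10, "MOV 3": 0.05, "MOC 1": 0.20, "MOC 2": 0.40, "MOC 3": 0.20}

    assert abs(sum(panic.values()) - 1.0) < 1e-9 and abs(sum(resource.values()) - 1.0) < 1e-9

    impact_a = {s: severity[s] * panic[s] for s in severity}                # Figure 2: Severity * Panic
    impact_b = {s: severity[s] * resource[s] for s in severity}             # Figure 3: Severity * Resources
    combined = {s: severity[s] * panic[s] * resource[s] for s in severity}  # Figure 4: Severity * Panic * Resources

    for label, scores in (("Impact A (panic)", impact_a),
                          ("Impact B (resources)", impact_b),
                          ("Combined (A and B)", combined)):
        top = max(scores, key=scores.get)
        print(f"{label}: highest-priority scenario = {top} ({scores[top]:.2f})")

    standardized_priority = round(sum(combined.values()), 2)  # assumed aggregation
    print("Entity standardized priority score:", standardized_priority)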

Case study: Prison cybersecurity

In November 2022, the Atlantic Council’s Cyber Statecraft Initiative brought together cybersecurity experts to apply this scoring methodology to a mock tabletop exercise focused on a prison. A prison environment includes many functional OT and ICS systems and helps illustrate the utility of cybersecurity scenario planning beyond what is traditionally considered critical infrastructure. U.S. prisons also offer a real-world environment where experts who specialize in OT and ICS cybersecurity for any Section 9 entities or existing critical infrastructure sectors can address the problem set on equal footing, without speaking directly to any sector they serve or have worked in or with.

Prisons, often referred to as correctional facilities, operate across the United States. Twenty-six states and the Federal Bureau of Prisons rely heavily on private facilities to house incarcerated inmates.22 These facilities depend on a myriad of IT and OT systems for safe, healthy, and continuous 24-7 operations. Examples of IT systems in prisons include telephone and email, video, telemedicine, radios, and management platforms (i.e., access to computers or tablets for entertainment, education, job skills, and reentry planning). Examples of OT systems include security platforms, surveillance cameras, access control points, perimeter intrusion detection, cell doors, and health and safety platforms, such as fire alarms and heating, ventilation, and air conditioning (HVAC) systems.23 These OT and ICS systems are exposed to the threats and vulnerabilities that were previously discussed.

Consider one potential OT scenario in which a threat actor gains access to the system that controls the cell doors, which are programmed not to open or close simultaneously. A threat actor with access to the controllers that incrementally open and close the cell doors could override the incremental interval, directing all doors to move at once, potentially surging the power and/or destroying electronics and components of the cyber-physical system. Researchers have discovered prison control rooms with internet access and commissaries connected to OT networks where programmable logic controllers are operating.24 This scenario represents a potential manipulation of control that would likely produce some level of public panic, but may not necessarily overwhelm local response capabilities.

Tabletop participants conducted a 90-minute exercise to develop six potential scenarios—three specifying manipulation of view impacts to OT and three specifying manipulation of control impacts to OT. The guidelines specified that each scenario must be a realistic, technically feasible, worst-case scenario based on cyber-physical impacts. The scenarios could not be duplicative and had to be considered irrespective of network segmentation and best-practice compensating controls. Scenarios could have initial access vectors in traditional information technologies, directly or indirectly impacting OT.

The prison specifics indicated that the facility opened in 1993 as a supermax prison in upstate New York. The mock facility housed 300 male inmates and had about 500 employees. Visiting hours were reportedly weekends and holidays between 9:00am and 3:15pm. The facility was said to be located five miles outside of a city of 27,000 people. The immediate town had twenty-seven police officers and fourteen civilian support staff. The nearest hospital, with 125 beds, was five miles away and in similar proximity to two large elementary schools. The facility itself was described as a hub-and-spoke model for operations, with a central command center monitoring and operating the facility and control systems located on premise but removed from the command center.

Access vectors were potentially numerous, including technicians with equipment and inventory access, universal serial bus (USB) drives and other transient devices, internet-connected control systems and networks, software updates, remote access, and remote exploitation, leading to the example scenarios outlined below. The scenarios and scoring that follow are a snapshot of this mock exercise and the application of the methodology in this paper. The example demonstrates bounded knowledge of a simulated exercise and is meant to showcase how an organization or facility might use the methodology for an entity or operation. Participants were cybersecurity experts; however, the scenario planning and thought exercise is meant to include all relevant stakeholders.

Mock prison example scenarios: Manipulation of view and manipulation of control


MOV = manipulation of view, MOC = manipulation of control.

Figure 1. Priority based on severity rating alone (Table 1)


NOTE that based on cybersecurity severity alone, MOC 3 ranks highest as a cyber scenario worth preparing and executing a tabletop exercise for.

Figure 2. Weighted priority for impact A (panic)


FORMULA: Score = Severity * Panic
NOTE that based on the cybersecurity severity score and the ability to cause public panic, MOC 3 still ranks highest as a cyber scenario worth preparing and executing a tabletop exercise for.

Figure 3. Weighted priority for impact B (resources)


FORMULA: Score = Severity * Resources
NOTE that based on the cybersecurity severity score and the ability to overwhelm local response capacity, MOC 2 now ranks highest as a cyber scenario worth preparing and executing a tabletop exercise for.

Figure 4. Weighted priority for impact A and B (both panic and resources)


FORMULA: Score = Severity * Panic * Resources

Manipulation of control scenario two—communications distributed denial-of-service, internally and externally, with the capacity/threat to manipulate, modify, and disrupt process control systems—became potentially more impactful than manipulation of control scenario three—third-party access to take over process control systems of cell block doors only—as a cyber scenario worth preparing for. Planning and training for a scenario that cuts off internal and external communications and carries uncertainty about cyber-physical impacts is more robust than preparing for direct access to a limited OT/ICS asset or a potential ransomware situation with limited cascading impacts.

The standardized priority score can be used to compare entities from various sectors based on likely real-world scenarios, expected severity, and impacted populations. Another entity with different severity and impact calculations may have a total score of 4.35, for example. The approach is also scalable: a company can compare its different facilities, and a city, sector, or agency can work to enhance protections for the top 10 percent of entities within its purview of responsibility or scope, creating a starting point for addressing the most critical of critical targets and building cross-sector resilience.
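As a closing illustration of the cross-sector comparison described above, the snippet below ranks a handful of hypothetical entities by their standardized priority scores and pulls out the top 10 percent of a jurisdiction's purview; every name and score is invented.

    # Hypothetical cross-sector comparison using entity-level standardized
    # priority scores. All entities and scores are invented for illustration.
    entities = {
        "Municipal water facility (WY)": 7.90,
        "Commercial energy provider (FL)": 6.75,
        "Rural hospital (TX)": 4.35,
        "Train operator (NY)": 5.10,
        "Supermax prison (NY)": 8.40,
        "Connected office tower (TX)": 3.20,
        "Food processing plant (NE)": 2.85,
        "Regional airport (MT)": 4.90,
        "Wastewater plant (OH)": 6.10,
        "Natural gas compressor station (PA)": 7.25,
    }

    ranked = sorted(entities.items(), key=lambda kv: kv[1], reverse=True)
    top_n = max(1, round(len(ranked) * 0.10))  # "top 10 percent" of the jurisdiction's scope
    print("Highest-priority entities:", ranked[:top_n])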

Conclusion

When considering whether assets or functions are more important, the answer lies somewhere in between—it always depends on the operation, product, or service. Evaluating entities and sectors against how well they implement cybersecurity requirements and best practices is rich in complexity but limited in scope. Meanwhile, focusing on technology regulation leads to time-consuming and expensive audits, and standardizing unrelated sectors yields vague guidance that becomes difficult to implement and enforce. Hypothetical cyber-physical scenarios quickly become convoluted with technical contingencies, competing priorities, overlapping authorities, and analysis gaps.

Like the CARVER Target Analysis and Vulnerability Assessment tool, a similar way to standardize and prioritize what is most important from a cyber perspective is needed and must include impact analysis that goes beyond the cyber incident itself to consider scenarios that also impact public panic and the ability to overwhelm local response capabilities.25 The methodology proposed in this paper is a simple scoring system that provides a repeatable mechanism that is suitable for prioritization based on real-world cyber scenarios, cyber-physical impacts, and fallout analysis.

Some sector-specific target and attack data exists, but there is still too much fear, uncertainty, and doubt driving tabletop exercises. Hopefully in the future, cyber policy and preparedness will have processes akin to the Homeland Security Exercise and Evaluation Program, with the key ingredient being a common approach.26 This methodology will not resolve all critical infrastructure cybersecurity and systemically critical infrastructure debates. It will take widespread adoption to be most useful, offering a strategic way to scope and prepare for effective tabletop exercises and to compare entities across various sectors and jurisdictions.

About the author

Danielle Jablanski is a nonresident fellow at the Cyber Statecraft Initiative under the Atlantic Council’s Digital Forensic Research Lab (DFRLab) and an OT cybersecurity strategist at Nozomi Networks, responsible for researching global cybersecurity topics and promoting operational technology (OT) and industrial control systems (ICS) cybersecurity awareness throughout the industry. Jablanski serves as a staff and advisory board member of the nonprofit organization Building Cyber Security, leading cyber-physical standards development, education, certifications, and labeling authority to advance physical security, safety, and privacy in public and private sectors. Since January 2022, Jablanski has also served as the president of the North Texas Section of the International Society of Automation, organizing monthly member meetings, training, and community engagements. She is also a member of the Cybersecurity Apprenticeship Advisory Taskforce with the Building Apprenticeship Systems in Cybersecurity Program sponsored by the US Department of Labor.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    Danielle Jablanski, “Why Cyber Holds the Entire World at Risk,” National Interest, April 5, 2022, https://nationalinterest.org/blog/techland-when-great-power-competition-meets-digital-world/why-cyber-holds-entire-world-risk.
2    “National Preparedness Cycle,” Homeland Security Emergency Management Center of Excellence, https://www.coehsem.com/emergency-management-cycle/.
3    Benjamin R. Barber, A Place for Us: How to Make Society Civil and Democracy Strong (New York: Hill and Wang, 1984).
4    Tyson Macaulay, Critical Infrastructure: Understanding Its Component Parts, Vulnerabilities, Operating Risks, and Interdependencies (Boca Raton: CRC Press, 2009).
5    “Critical Infrastructure Protection: CISA Should Improve Priority Setting, Stakeholder Involvement, and Threat Information Sharing,” U.S. Government Accountability Office, March 1, 2022, https://www.gao.gov/products/gao-22-104279.
6     “Critical Infrastructure Protection,” 2022.
7    “Critical Infrastructure Protection,” 2022.
8    Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, May 11, 2017.
9    Tasha Jhangiani and Graham Kennis, “Protecting the Critical of Critical: What Is Systemically Important Critical Infrastructure?” Lawfare, June 15, 2021, https://www.lawfareblog.com/protecting-critical-critical-what-systemically-important-critical-infrastructure.
10    Tyson Macaulay and Bryan Singer, Cybersecurity for Industrial Control Systems: SCADA, DCS, PLC, HMI, and SIS (Boca Raton: CRC Press, 2012), 57.
11    Michael J. Assante and Robert M. Lee, “The Industrial Control System Cyber Kill Chain,” SANS Institute, October 2015, https://na-production.s3.amazonaws.com/documents/industrial-control-system-cyber-kill-chain-36297.pdf.
12    “MITRE ATT&CK Matrix for ICS,” MITRE Corporation, last modified May 6, 2022, https://attack.mitre.org/matrices/ics/.
13    Jhangiani and Kennis, 2021.
14    “Critical Infrastructure Protection,” 2022.
15    Jacob Azrilyant, Melissa Sidun, and Mariami Dolashvili, “Fact and Fiction: Demystifying the Myth of the 85%,” capstone project, George Washington University, May 6, 2022, https://www.scribd.com/document/575971848/Fact-and-Fiction-85-and-Critical-Infrastructure.
16    “Managing the Emergency Consequences of Terrorist Incidents: Interim Planning Guide for State and Local Governments,” Federal Emergency Management Agency, July 2002, https://www.fema.gov/pdf/plan/managingemerconseq.pdf.
17    View and/or control cannot be recovered automatically or remotely from manipulation. The potential for sabotage can come through misinformation delivered to control room personnel or through malicious instructions sent to production infrastructure. Macaulay and Singer, 2012.
18    “Sector Spotlight: Cyber-Physical Security Considerations for the Electricity Sub-Sector,” Cybersecurity and Infrastructure Security Agency, https://www.cisa.gov/sites/default/files/publications/Sector%20Spotlight%20Cyber-Physical%20Security%20Considerations%20Electricity%20Sub-Sector%20508%20compliant.pdf.
19    View and/or control cannot be recovered automatically or remotely from manipulation. The potential for sabotage can come through misinformation delivered to control room personnel or through malicious instructions sent to production infrastructure. Macaulay and Singer, 2012.
20    “Washington Cybersecurity Situational Awareness Concept of Operations (CONOPS),” Center for Regional Disaster Resilience, https://www.regionalresilience.org/uploads/2/3/2/9/23295822/washington_cybersecurity_situational_awareness_conops.pdf.
21    “Washington Cybersecurity Situational Awareness,” Center for Regional Disaster Resilience.
22    Mackenzie Buday and Ashley Nellis, “Private Prisons in the United States,” The Sentencing Project, August 23, 2022, https://www.sentencingproject.org/reports/private-prisons-in-the-united-states/.
23    Teague Newman, Tiffany Rad, and John Strauchs, “SCADA & PLC Vulnerabilities in Correctional Facilities,” Wired, July 30, 2011, https://www.wired.com/images_blogs/threatlevel/2011/07/PLC-White-Paper_Newman_Rad_Strauchs_July22_2011.pdf.
24    Newman, Rad, and Strauchs, 2011.
25    “What is the CARVER Target Analysis and Vulnerability Assessment Methodology?” SMI Consultancy, https://www.smiconsultancy.com/what-is-carver.
26    “Homeland Security Exercise and Evaluation Program,” Federal Emergency Management Agency, https://training.fema.gov/programs/nsec/hseep/.

The post Critical infrastructure cybersecurity prioritization: A cross-sector methodology for ranking operational technology cyber scenarios and critical entities appeared first on Atlantic Council.

Russian War Report: Russian army presses on in Bakhmut despite losses https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-war-report-russian-army-presses-on-in-bakhmut-despite-losses/ Fri, 14 Apr 2023 17:34:44 +0000 https://www.atlanticcouncil.org/?p=636784 Bakhmut remains a major conflict zone with dozens of attacks on Ukrainian forces there, despite Russian forces sustaining heavy losses.

The post Russian War Report: Russian army presses on in Bakhmut despite losses appeared first on Atlantic Council.

As Russia continues its assault on Ukraine, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) is keeping a close eye on Russia’s movements across the military, cyber, and information domains. With more than seven years of experience monitoring the situation in Ukraine—as well as Russia’s use of propaganda and disinformation to undermine the United States, NATO, and the European Union—the DFRLab’s global team presents the latest installment of the Russian War Report. 

Security

Russian army presses on in Bakhmut despite losses

Russia enacts “e-drafting” law

Drone imagery locates new burial site east of Soledar

Russian hackers target NATO websites and email addresses

Russian army presses on in Bakhmut despite losses

The General Staff of the Ukrainian Armed Forces recorded fifty-eight attacks on Ukrainian troop positions on April 9 and 10. Of these attacks, more than thirty were in the direction of Bakhmut, and more than twenty were in the direction of Marinka and Avdiivka. Russian forces also attempted to advance toward Lyman, south of Dibrova.

Documented locations of fighting April 1-13, 2023; data gathered from open-source resources. (Source: Ukraine Control Map, with annotations by the DFRLab)

On April 10, Commander of the Eastern Group of Ukrainian Ground Forces Oleksandr Syrskyi said that Russian forces in Bakhmut increasingly rely on government special forces and paratroopers because Wagner units have suffered losses in the recent battles. Syrskyi visited Bakhmut on April 9 to inspect defense lines and troops deployed to the frontline. According to the United Kingdom’s April 10 military intelligence report, Russian troops are intensifying tank attacks on Marinka but are still struggling with minimal advances and heavy losses. 

On April 13, Deputy Chief of the Main Operational Directorate of Ukrainian Forces Oleksiy Gromov said that Bakhmut remains the most challenging section on the frontline as Russian forces continue to storm the city center, trying to encircle it from the north and south through Ivanivske and Bohdanivka. According to Ukrainian estimates, during a two-week period, Russian army and Wagner Group losses in the battle for Bakhmut amounted to almost 4,500 people killed or wounded. To restore the offensive potential in Bakhmut, Russian units that were previously attacking in the direction of Avdiivka were transferred back to Bakhmut.

On April 8, Commander of the Ukrainian Air Forces Mykola Oleshchuk lobbied for Ukraine to obtain F-16 fighter jets. According to his statement, Ukrainian pilots are now “hostages of old technologies” that render all pilot missions “mortally dangerous.” Oleshchuk noted that American F-16 jets would help strengthen Ukraine’s air defense. Oleshchuk said that even with a proper number of aircraft and pilots, Ukrainian aviation, which is composed of Soviet aircraft and missiles, may be left without weapons at some point. He noted the F-16 has a huge arsenal of modern bombs and missiles. The commander also discussed the need for superiority in the air and control of the sea. Currently, Russian aviation is more technologically advanced and outnumbers Ukraine’s, meaning Ukraine cannot adequately protect its airspace. In order for the Ukrainian army to advance and recapture territory occupied by Russia, it will require substantial deliveries of aviation and heavy equipment like tanks, howitzers, and shells.

On April 10, Ukrainian forces reported they had spotted four Russian ships on combat duty in the Black Sea, including one armed with Kalibr missiles. Another Russian ship was spotted in the Sea of Azov, along with seven in the Mediterranean, including three Kalibr cruise missile carriers.

Meanwhile, according to Ukrainian military intelligence, Russia plans to produce Kh-50 cruise missiles in June. If confirmed, this could potentially lead to increased missile strikes against Ukraine in the fall. The Kh-50 missiles in the “715” configuration are intended to be universal, meaning they can be used by many Russian strategic bombers, including the Tu-22M3, Tu-95MS, and Tu-160.

Ruslan Trad, Resident Fellow for Security Research, Sofia, Bulgaria

Russia enacts “e-drafting” law

On April 11, the Russian State Duma approved a bill allowing for the online drafting of Russian citizens using the national social service portal Gosuslugi. One day later, the Russian Federal Council adopted the law. The new law enables military commissariats, or voenkomat, to send mobilization notices to anyone registered in the Gosuslugi portal. Unlike the traditional in-person delivery of paper notices, a digital mobilization order will be enforced immediately upon being sent to the user; ordinarily, men drafted for mobilization could dispute receipt of the notice during the twenty-one-day period after it was sent. As of 2020, 78 million users were reportedly registered in the Gosuslugi portal, nearly two-thirds of the Russian population.

Alongside the adoption of the digital mobilization notices are newly adopted restrictions regarding unresponsive citizens. Those who fail to appear at their local military commissariat in the twenty-day period following notice will be barred from leaving the country and banned from receiving new credit or driving a car. Of the 164 senators who took part in the vote, only one voted against the bill; Ludmila Narusova argued that the law had been adopted exceptionally hastily and that the punishments against “deviants” who do not respond to the notice are “inadequate.”

As explained by Riga-based Russian news outlet Meduza, the law also states that reserves could be populated with those who legally abstained from military service until the age of twenty-seven, due to an amendment in the bill that allows for personal data to be shared with the Russian defense ministry in order to establish “reasonable grounds” for mobilization notices to be sent out. Several institutions across the country will be subject to the data exchange, including the interior ministry, the federal tax office, the pension and social fund, local and federal institutions, and schools and universities.

Valentin Châtelet, Research Associate, Security, Brussels, Belgium

Drone imagery locates new burial site east of Soledar

Images released by Twitter user @externalPilot revealed a new burial site, located opposite a cemetery, in the village of Volodymyrivka, southeast of Soledar, Donetsk Oblast. The DFRLab collected aerial imagery and assessed that the burial site emerged during the last week of March and the first week of April. The city of Soledar has been under Russian control since mid-January. The burial site faces the Volodymyrivka town cemetery. Drone footage shows several tombs with no apparent orthodox crosses or ornaments. Analysis of the drone imagery indicates around seventy new graves have been dug on this site. A DFRLab assessment of satellite imagery estimates that the burial site covers around thirteen hectares.

Location of new burial site east of Soledar, Volodymyrivka, Donetsk Oblast. (Source: PlanetLab, with annotations by the DFRLab)

Valentin Châtelet, Research Associate, Security, Brussels, Belgium

Russian hackers target NATO websites and email addresses

On April 8, the pro-war Russian hacktivist movement Killnet announced they would target NATO in a hacking operation. On April 10, they said they had carried out the attack. The hacktivists claimed that “40% of NATO’s electronic infrastructure has been paralyzed.” They also claimed to have gained access to the e-mails of NATO staff and announced they had used the e-mails to create user accounts on LGBTQ+ dating sites for 150 NATO employees.

The hacktivists forwarded a Telegram post from the KillMilk channel showing screenshots of one NATO employee’s e-mail being used to register an account on the website GayFriendly.dating. The DFRLab searched the site for an account affiliated with the email but none was found.

Killnet also published a list of e-mails it claims to have hacked. The DFRLab cross-checked the e-mails against publicly available databases of compromised e-mails, like Have I been Pwned, Avast, Namescan, F-secure, and others. As of April 13, none of the e-mails had been linked to the Killnet hack, though this may change as the services update their datasets.

In addition, the DFRLab checked the downtime of the NATO websites that Killnet claims to have targeted with distributed denial of service (DDoS) attacks. According to IsItDownRightNow, eleven of the forty-four NATO-related websites (25 percent) were down at some point on April 10.  

Nika Aleksejeva, Resident Fellow, Riga, Latvia

The post Russian War Report: Russian army presses on in Bakhmut despite losses appeared first on Atlantic Council.

Banning TikTok alone will not solve the problem of US data security https://www.atlanticcouncil.org/blogs/new-atlanticist/banning-tiktok-alone-will-not-solve-the-problem-of-us-data-security/ Fri, 31 Mar 2023 16:24:22 +0000 https://www.atlanticcouncil.org/?p=631176 TikTok is just a symptom of a much bigger problem involving China-based technology. Here are some steps US policymakers can take now.

The post Banning TikTok alone will not solve the problem of US data security appeared first on Atlantic Council.

Last week, the TikTok chief executive officer, Shou Zi Chew, appeared before the US House of Representatives Energy and Commerce Committee. The media and political perception within the Washington Beltway is that it did not go well, and it didn’t. Chew’s answers were unconvincing and at times disingenuous, including when he downplayed accusations that the company had spied on journalists critical of the company. On social media, including on TikTok, the perception of the hearing by users was equally decisive, but not in Congress’s favor.

There are 150 million US users of TikTok, and the contrast between the creative and often viral nature of clips produced on the platform—including those defending Chew—and the stodgy nature of C-SPAN’s fixed camera positions, pre-planned talking points, and members demanding “yes” or “no” answers to their questions, made for an unfavorable contrast for committee members. US policymakers considering a ban on TikTok need to think about the very serious ramifications to people and small businesses whose livelihoods do, at least in part, rely on the app. Those Americans who use the app for professional and business purposes should have their legitimate concerns addressed by policymakers in a meaningful manner alongside any sort of ban.

But TikTok users’ usage of the social media app, even if only to generate business, does not mitigate the potential threats to US national security associated with it. In December, Director of National Intelligence Avril Haines warned about the potential uses of TikTok by Beijing stemming from the data the app collects and the possibility of using it to influence public opinion. TikTok’s algorithm, for example—which experts view as more advanced than that of Facebook parent company Meta—could be used by China to create propaganda that seeks to influence or manipulate elections and the broader information environment.

TikTok’s connections to China’s government stem from it being a wholly owned subsidiary of the Beijing-based company ByteDance. Chew testified that “ByteDance is not owned or controlled by the Chinese government.” However, Article VII of China’s National Intelligence Law of 2017 makes clear the mandated responsibility for private sector companies (and any Chinese organization) to “support, assist, and cooperate” with China’s intelligence community. ByteDance, therefore, has an absolute obligation to turn over to China’s intelligence apparatus any data it requests.

There are significant reasons to be skeptical of Chew’s claims that “Project Texas”—TikTok’s effort to wall off US user data from Chinese authorities by solely storing it in the United States—will prevent China from having access to US user data in the future. Worse, even if one takes Chew at his word that “Project Texas” will accomplish this feat, it defies logic to believe that ByteDance would not—independently or compelled by China’s intelligence agencies—retain a copy of all 150 million current US users’ data.

At the same time, TikTok is just a symptom of a much bigger problem. The United States and its allies have a more fundamental issue when it comes to their citizens using China-based apps, programs, or any technology that collects their data. All China-based companies have the same obligations to provide data information to China’s intelligence services whenever requested.

What the US government can do

TikTok’s ban would mitigate the immediate threat posed by the ByteDance subsidiary, but there’s far more work that needs to be done. The Committee on Foreign Investment in the United States (CFIUS) has, up until now, been the most prominent tool used to prevent foreign governments, or individuals associated with them, from making investments in the United States that could be used to ultimately undermine US national security. CFIUS has a specific and meaningful role focused on investments, but nowadays it has too often become the default instrument for reconciling an increasingly broad swath of national security challenges. This is in part because it has a track record of success, but also because it’s one of the only meaningful tools available to policymakers. But it is not an ideal tool for every situation, something best demonstrated by CFIUS’s challenge in resolving TikTok’s ongoing review that has stretched on for more than two years now.

The bipartisan RESTRICT Act—which would give the Department of Commerce the right to review foreign technologies and ban them in the United States or force their sale—is a thoughtful place from which to begin discussions about additional ways to mitigate the US national security challenges related to information and communications platforms available for mass use. But that act alone would not solve the broader data challenges as they exist today.

The lack of federal regulation related to commercial data brokers, which today can and do legally collect and resell the data of millions of Americans, is a glaring gap that needs to be filled immediately. A ban on TikTok, for example, would do nothing to prevent data brokers from aggregating the same consumer data from other apps and re-selling it to commercial entities, including those in China. 

The threat posed by China to US national security, and to Americans’ individual data, is acute. The good news is the United States can deal with these challenges, but it will take more than just banning TikTok.


Jonathan Panikoff is a senior fellow in the Atlantic Council’s GeoEconomics Center and the former director of the Investment Security Group, overseeing the intelligence community’s CFIUS efforts, at the Office of the Director of National Intelligence.

The views expressed in this publication are the author’s and do not imply endorsement by the Office of the Director of National Intelligence, the intelligence community, or any other US government agency.

The post Banning TikTok alone will not solve the problem of US data security appeared first on Atlantic Council.

What to expect from the world’s democratic tech alliance as the Summit for Democracy unfolds https://www.atlanticcouncil.org/blogs/new-atlanticist/what-to-expect-from-the-worlds-democratic-tech-alliance-as-the-summit-for-democracy-unfolds/ Wed, 29 Mar 2023 17:37:06 +0000 https://www.atlanticcouncil.org/?p=630003 Ahead of the Biden administration’s second Summit for Democracy, stakeholders from the Freedom Online Coalition gave a sneak peek at what to expect on the global effort to protect online rights and freedoms.

The post What to expect from the world’s democratic tech alliance as the Summit for Democracy unfolds appeared first on Atlantic Council.

Watch the full event

Ahead of the Biden administration’s second Summit for Democracy, US Deputy Secretary of State Wendy Sherman gave a sneak peek at what to expect from the US government on its commitments to protecting online rights and freedoms.

The event, hosted by the Atlantic Council’s Digital Forensic Research Lab on Monday, came on the same day that US President Joe Biden signed an executive order restricting the US government’s use of commercial spyware that may be abused by foreign governments or enable human-rights abuses overseas.

But there’s more in store for this week, Sherman said, as the United States settles into its role as chair of the Freedom Online Coalition (FOC)—a democratic tech alliance of thirty-six countries working together to support human rights online. As chair, the United States needs “to reinforce rules of the road for cyberspace that mirror and match the ideals of the rules-based international order,” said Sherman. She broke that down into four top priorities for the FOC:

  1. Protecting fundamental freedoms online, especially for often-targeted human-rights defenders
  2. Building resilience against digital authoritarians who use technology to achieve their aims
  3. Building a consensus on policies designed to limit abuses of emerging technologies such as artificial intelligence (AI)
  4. Expanding digital inclusion  

“The FOC’s absolutely vital work can feel like a continuous game of catch-up,” said Sherman. But, she added, “we have to set standards that meet this moment… we have to address what we see in front of us and equip ourselves with the building blocks to tackle what we cannot predict.”

Below are more highlights from the event, during which a panel of stakeholders also outlined the FOC’s role in ensuring that the internet and emerging technologies—including AI—adhere to democratic principles.

Deepening fundamental freedoms

  • Sherman explained that the FOC will aim to combat government-initiated internet shutdowns and ensure that people can “keep using technology to advance the reach of freedom.”
  • Boye Adegoke, senior manager of grants and program strategy at the Paradigm Initiative, recounted how technology was supposed to help improve transparency in Nigeria’s recent elections. But instead, the election results came in inconsistently and after long periods of time. Meanwhile, the government triggered internet shutdowns around the election period. “Bad actors… manipulate technology to make sure that the opinions and the wishes of the people do not matter at the end of the day,” he said.
  • “It’s very important to continue to communicate the work that the FOC is doing… so that more and more people become aware” of internet shutdowns and can therefore prepare for the lapses in internet service and in freely flowing, accurate information, Adegoke said.
  • On a practical level, once industry partners expose where disruptions are taking place, the FOC offers a mechanism by which democratic “governments can work together to sort of pressure other governments to say these [actions] aren’t acceptable,” Starzak argued.
  • The FOC also provides a place for dialogue on human rights in the online space, said Alissa Starzak, vice president and global head of public policy at Cloudflare. Adegoke, who also serves in the FOC advisory network, stressed that “human rights [are] rarely at the center of the issues,” so the FOC offers an opportunity to mainstream that conversation into policymakers’ discussions on technology.

Building resilience against digital authoritarianism

  • “Where all of [us FOC countries] may strive to ensure technology delivers for our citizens, autocratic regimes are finding another means of expression,” Sherman explained, adding that those autocratic regimes are using technologies to “divide and disenfranchise; to censor and suppress; to limit freedoms, foment fear, and violate human dignity.” New technologies are essentially “an avenue of control” for authoritarians, she explained.
  • At the FOC, “we will focus on building resilience against the rise of digital authoritarianism,” Sherman said, which has “disproportionate and chilling impacts on journalists, activists, women, and LGBTI+ individuals” who are often directly targeted for challenging the government or expressing themselves.
  • One of the practices digital authoritarians often abuse is surveillance. Sherman said that as part of the Summit for Democracy, the FOC and other partners will lay out guiding principles for the responsible use of surveillance tech.
  • Adegoke recounted how officials in Nigeria justified their use of surveillance tech by saying that the United States also used the technology. “It’s very important to have some sort of guiding principle” from the United States, he said.
  • After Biden signed the spyware executive order, Juan Carlos Lara, executive director at Derechos Digitales, said he expects other countries “to follow suit and hopefully to expand the idea of bans on spyware or bans on surveillance technology” that inherently pose risks to human rights.

Addressing artificial intelligence

  • “The advent of AI is arriving with a level of speed and sophistication we haven’t witnessed before,” warned Sherman. “Who creates it, who controls it, [and] who manipulates it will help define the next phase of the intersection between technology and democracy.”
  • Some governments, Sherman pointed out, have used AI to automate their censorship and suppression practices. “FOC members must build a consensus around policies to limit these abuses,” she argued.
  • Speaking from an industry perspective, Starzak acknowledged that sometimes private companies and governments “are in two different lanes” when it comes to figuring out how they should use AI. But setting norms for both good and bad AI use, she explained, could help get industry and the public sector in the same lane, moving toward a world in which AI is used in compliance with democratic principles.
  • Lara, who also serves in the FOC advisory network, explained that the FOC has a task force to specifically determine those norms on government use of AI and to identify the ways in which AI contributes to the promise—or peril—of technology in societies worldwide.

Improving digital inclusion

  • “The internet should be open and secure for everyone,” said Sherman. That includes “closing the gender gap online” by “expanding digital literacy” and “promoting access to safe online spaces” that make robust civic participation possible for all. Sherman noted that the FOC will specifically focus on digital inclusion for women and girls, LGBTI+ people, and people with disabilities.
  • Starzak added that in the global effort to cultivate an internet that “builds prosperity,” access to the free flow of information for all is “good for the economy and good for the people.” Attaining that version of the internet will require a “set of controls” to protect people and their freedoms online, she added.
  • Ultimately, there are major benefits to be had from expanded connectivity. According to Sherman, it “can drive economic growth, raise standards of living, create jobs, and fuel innovative solutions” for global challenges such as climate change, food insecurity, and good governance.

Katherine Walla is an associate director of editorial at the Atlantic Council.

Watch the full event

The post What to expect from the world’s democratic tech alliance as the Summit for Democracy unfolds appeared first on Atlantic Council.

]]>
Wendy Sherman on the United States’ priorities as it takes the helm of the Freedom Online Coalition https://www.atlanticcouncil.org/news/transcripts/wendy-sherman-on-the-united-states-priorities-as-it-takes-the-helm-of-the-freedom-online-coalition/ Tue, 28 Mar 2023 14:22:55 +0000 https://www.atlanticcouncil.org/?p=628865 US Deputy Secretary of State Wendy Sherman outlined the priorities for the world's democratic tech alliance, from protecting fundamental freedoms online to building resilience against digital authoritarianism.

The post Wendy Sherman on the United States’ priorities as it takes the helm of the Freedom Online Coalition appeared first on Atlantic Council.

Watch the event

Event transcript

Uncorrected transcript: Check against delivery

Introduction
Rose Jackson
Director, Democracy & Tech Initiative, Digital Forensic Research Lab

Opening Remarks
Wendy Sherman
Deputy Secretary of State, US Department of State

Panelists
Boye Adegoke
Senior Manager, Grants and Program Strategy, Paradigm Initiative

Juan Carlos Lara
Executive Director, Derechos Digitales

Alissa Starzak
Vice President, Global Head of Public Policy, Cloudflare

Moderator
Khushbu Shah
Nonresident Fellow, Digital Forensic Research Lab

ROSE JACKSON: Hello. My name is Rose Jackson, and I’m the director of the Democracy + Tech Initiative here at the Atlantic Council in Washington, DC.

I’m honored to welcome you here today for this special event, streaming to you in the middle of the Freedom Online Coalition, or the FOC’s first strategy and coordination meeting of the year.

For those of you watching at home or many screens elsewhere, I’m joined here in this room by representatives from thirty-one countries and civil-society and industry leaders who make up the FOC’s advisory network. They’ve just wrapped up the first half of their meeting and wanted to bring some of the conversation from behind closed doors to the community working everywhere to ensure the digital world is a rights-respecting one.

It’s a particularly important moment for us to be having this conversation. As we get ready for the second Summit for Democracy later this week, the world’s reliance and focus on the internet has grown, while agreement [on] how to further build and manage it frays.

I think at this point it’s a bit of a throwaway line that the digital tools mediate every aspect of our lives. But the fact that most of the world has no choice but to do business, engage with their governments, or stay connected with friends and family through the internet makes the rules and norms around how that internet functions a matter of great importance. And even more because the internet is systemic and interconnected, whether it is built and imbued with the universal human rights we expect offline will determine whether our societies can rely on those rights anywhere.

Antidemocratic laws have a tendency of getting copied. Troubling norms are established in silence. And a splintering of approach makes it easier for authoritarians to justify their sovereign policies used to shutter dissent, criminalize speech, and surveil everyone. These are the core democratic questions of our time, and ensuring that the digital ecosystem is a rights-respecting one requires democracies [to row] in the same direction in their foreign policy and domestic actions.

The now twelve-year-old FOC, as the world’s only democratic tech alliance, presents an important space for democratic governments to leverage their shared power to this end, in collaboration with civil society and industry around the world.

We were encouraged last year when Secretary of State Antony Blinken announced at our open summit conference in Brussels that the US would take over as chair of the FOC in 2023 as part of its commitment to reinvest in the coalition and its success. Just over an hour ago, the US announced a new executive order limiting its own use of commercial spyware on the basis of risks to US national security and threats to human rights everywhere really brings home the stakes and potential of this work.

So today we’re honored to have Deputy Secretary of State Wendy Sherman here to share more about the US government’s commitment to these issues and its plans for the coming year as chair.

We’ll then turn to a panel of civil-society and industry leaders from around the world to hear more about how they view the role and importance of the FOC in taking action on everything from internet shutdowns to surveillance tech and generative AI. That session will be led by our nonresident fellow and the former managing editor of Rest of World Khushbu Shah.

Now, before I turn to the deputy secretary, I want to thank the FOC support unit, the US State Department, and our Democracy and Tech team here for making this event possible. And I encourage you in Zoomland to comment on and engage liberally with the content of today’s event on your favorite social media platforms, following at @DFRLab, and using the hashtags #SummitforDemocracy, #S4D too, or #PartnersforDemocracy.

For those tuning in remotely in need of closed captioning, please view today’s program on our YouTube channel through the link provided in the chat.

It is now my distinct honor to pass the podium to Deputy Secretary of State Wendy Sherman, who needs no introduction as one of our nation’s most experienced and talented diplomats.

Deputy Secretary, thank you so much for joining us.

WENDY SHERMAN: Good afternoon. It’s terrific to be with you, and thank you, Rose, for your introduction and for all of the terrific work that the Freedom Online Coalition is doing.

It is fitting to be here at the Atlantic Council for this event because your mission sums up our purpose perfectly: shaping the global future together. That is our fundamental charge in the field of technology and democracy: how we use modern innovations to forge a better future.

That’s what the DFRLab strives to achieve, through your research and advocacy, and that’s what the Freedom Online Coalition, its members, observers, and advisory network seek to accomplish through our work. Thank you for your partnership.

More than five decades ago—seems like a long time ago, but really very short—the internet found its origins in the form of the first online message ever sent, all of two letters in length, delivered from a professor at UCLA to colleagues at Stanford. It was part of a project conceived in university labs and facilitated by government. It was an effort meant to test the outer limits of rapidly evolving technologies and tap into the transformative power of swiftly growing computer networks.

What these pioneers intended at the time was actually to devise a system that could allow people to communicate in the event of a nuclear attack or another catastrophic event. Yet what they created changed everything—how we live and work, how we participate in our economy and in our politics, how we organize movements, how we consume media, read books, order groceries, pay bills, run businesses, conduct research, learn, write, and do nearly everything we can think of.

Change didn’t happen overnight, of course, and that change came with both promise and peril. This was a remarkable feat of scientific discovery, and it upended life as we know it for better, and sometimes, worse.

Over the years, as we went from search engines to social media, we started to face complicated questions as leaders, as parents and grandparents, as members of the global community—questions about how the internet can best be used, how it should be governed, who might misuse it, how it impacts our children’s mental and emotional health, who could access it, and how we can ensure that access is equitable—benefitting people in big cities, rural areas, and everywhere in between. Big-picture questions arose about these tectonic shifts. What would they mean for our values and our systems of governance? Whether it’s the internet as we understand it today or artificial intelligence revolutionizing our world tomorrow, will digital tools create more democracy or less? Will they be deployed to maximize human rights or limit them? Will they be used to enlarge the circle of freedom or constrain and contract it?

For the United States, the Freedom Online Coalition, and like-minded partners, the answer should point in a clear direction. At a basic level, the internet should be open and secure for everyone. It should be a force for free enterprise and free expression. It should be a vast forum that increases connectivity, that expands people’s ability to exercise their rights, that facilitates unfettered access to knowledge and unprecedented opportunities for billions.

Meeting that standard, however, is not simple. Change that happens this fast in society and reaches this far into our lives rarely yields a straightforward response, especially when there are those who seek to manipulate technology for nefarious ends. The fact is where all of us may strive to ensure technology delivers for our citizens, autocratic regimes are finding another means of expression. Where democracies seek to tap into the power of the internet to lift individuals up to their highest potential, authoritarian governments seek to deploy these technologies to divide and disenfranchise, to censor and suppress, to limit freedoms, [to] foment fear and [to] violate human dignity. They view the internet not as a network of empowerment but as an avenue of control. From Cuba and Venezuela to Iran, Russia, the PRC, and beyond, they see new ways to crush dissent through internet shutdowns, virtual blackouts, restricted networks, blocked websites, and more.

Here in the United States, alongside many of you, we have acted to sustain connections to internet-based services and the free flow of information across the globe, so no one is cut off from each other, from the outside world, or from the truth. Yet even with these steps, none of us are perfect. Every day, almost everywhere we look, democracies grapple with how to harness data for positive ends, while preserving privacy; how to bring out the best in modern innovations without amplifying their worst possibilities; how to protect the most vulnerable online while defending the liberties we hold dear. It isn’t an easy task, and in many respects, as I’ve said, it’s only getting harder. The growth of surveillance capabilities is forcing us to constantly reevaluate how to strike the balance between using technologies for public safety and preserving personal liberties.

The advent of AI is arriving with a level of speed and sophistication we haven’t witnessed before. It will not be five decades before we know the impact of AI. That impact is happening now. Who creates it, who controls it, [and] who manipulates it will help define the next phase of the intersection between technology and democracy. By the time we realize AI’s massive reach and potential, the internet’s influence might really pale in comparison. The digital sphere is evolving at a pace we can’t fully fathom and in ways at least I can’t completely imagine. Frankly, we have to accept the fact that the FOC’s absolutely vital work can feel like a continuous game of catchup. We have to acknowledge that the guidelines we adopt today might seem outdated as soon as tomorrow.

Now let me be perfectly clear: I am not saying we should throw up our hands and give up. To the contrary, I’m suggesting that this is a massive challenge we have to confront and a generational change we have to embrace. We have to set standards that meet this moment and that lay the foundation for whatever comes next. We have to address what we see in front of us and equip ourselves with the building blocks to tackle what we cannot predict.

To put a spin on a famous phrase, with the great power of these digital tools comes great responsibility to use that power for good. That duty falls on all our shoulders and the stakes could not be higher for internet freedom, for our common prosperity, for global progress, because expanded connectivity, getting the two billion unconnected people online can drive economic growth, raise standards of living, create jobs, and fuel innovative solutions for everything from combating climate change to reducing food insecurity, to improving public health, to promoting good governance and sustainable development.

So we need to double down on what we stand for: an affirmative, cohesive, values-driven, rights-respecting vision for democracy in a digital era. We need to reinforce rules of the road for cyberspace that mirror and match the ideals of the rules-based international order. We need to be ready to adapt our legal and policy approaches for emerging technologies. We need the FOC—alongside partners in civil society, industry, and elsewhere—to remain an essential vehicle for keeping the digital sphere open, secure, interoperable, and reliable.

The United States believes in this cause as a central plank of our democracy and of our diplomacy. That’s why Secretary Blinken established our department’s Bureau of Cyberspace and Digital Policy, and made digital freedom one of its core priorities. That’s why the Biden-Harris administration spearheaded and signed on to the principles in the Declaration for the Future of the Internet alongside sixty-one countries ready to advance a positive vision for digital technologies. That’s why we released core principles for tech-platform accountability last fall and why the president called on Congress to take bipartisan action in January.

That’s why we are committed to using our turn as FOC chair as a platform to advance a series of key goals.

First, we will deepen efforts to protect fundamental freedoms, including human rights defenders online and offline, many of whom speak out at grave risk to their own lives and to their families’ safety. We will do so by countering disruptions to internet access, combating internet shutdowns, and ensuring everyone’s ability to keep using technology to advance the reach of freedom.

Second, we will focus on building resilience against the rise of digital authoritarianism, the proliferation of commercial spyware, and the misuse of technology, which we know has disproportionate and chilling impacts on journalists, activists, women, and LGBTQI+ individuals. To that end, just a few hours ago President Biden issued an executive order that for the first time will prohibit our government’s use of commercial spyware that poses a risk to our national security or that’s been misused by foreign actors to enable human rights abuses overseas.

On top of that step, as part of this week’s Summit for Democracy, the members of the FOC and other partners will lay out a set of guiding principles on government use of surveillance technologies. These principles describe responsible practices for the use of surveillance tech. They reflect democratic values and the rule of law, adhere to international obligations, strive to address the disparate effect on certain communities, and minimize the data collected.

Our third objective as FOC chair focuses on artificial intelligence and the way emerging technologies respect human rights. As some try to apply AI to help automate censorship of content and suppression of free expression, FOC members must build a consensus around policies to limit these abuses.

Finally, we will strengthen our efforts on digital inclusion—on closing the gender gap online; on expanding digital literacy and skill-building; on promoting access to safe online spaces and robust civic participation for all, particularly women and girls, LGBTQI+ persons, those with disabilities, and more.

Here’s the bottom line: The FOC’s work is essential and its impact will boil down to what we do as a coalition to advance a simple but powerful idea, preserving and promoting the value of openness. The internet, the Web, the online universe is at its best when it is open for creativity and collaboration, open for innovation and ideas, open for communication and community, debate, discourse, disagreement, and diplomacy.

The same is true for democracy—a system of governance, a social contract, and a societal structure is strongest when defined by open spaces to vote, deliberate, gather, demonstrate, organize, and advocate. This openness could not be more important, because when the digital world is transparent, when democracy is done right, that’s when everyone has a stake in our collective success. That’s what makes everyone strive for a society that is free and fair in our politics and in cyberspace. That’s what will give everyone reason to keep tapping into the positive potential of technology to forge a future of endless possibility and boundless prosperity for all.

So good luck with all your remaining work; lots ahead. And thank you so much for everything that you all do. Thank you.

KHUSHBU SHAH: Hello, everybody. Thank you so much for joining us. I’m Khushbu Shah, a journalist and a nonresident fellow at the Atlantic Council’s DFRLab.

We’re grateful to have these three experts here with us today to discuss rights in the digital world and the Freedom Online Coalition’s role in those rights. I’ll introduce you to these three experts.

This is Adeboye Adegoke, who is the senior manager of grants and program strategy at Paradigm Initiative. We have Alissa Starzak, the vice president and global head of public policy at Cloudflare, and Juan Carlos Lara, known as J.C., who’s the executive director of Derechos Digitales. And so I will mention that both J.C. and Adeboye are also on the FOC’s Advisory Network, which was created as a strong mechanism for ongoing multi-stakeholder engagement.

And so I’ll start with the thirty-thousand-foot view. So we’ve heard—we’ve just heard about the FOC and its continued mission with the United States at the helm as chair this year in an increasingly interconnected and online world. More than five billion people are online around the world. That’s the majority of people [on] this planet. We spend nearly half of the time that we’re awake online, more than 40 percent.

We as a global group of internet users have evolved in our use of the internet, as you’ve heard, since the creation of the FOC in 2011.

So Adeboye, why do you think so many people are now suddenly focused on technology as a key democratic issue? And speaking, you know, from your own personal experience in Nigeria, should we be?

ADEBOYE ADEGOKE: Yeah. I mean, I think the reasons are very clear, not just [looking out] to any region of the world, but, you know, generally speaking, I mean, the Cambridge Analytica, you know, issue comes to mind.

But also just speaking, you know, very specifically to my experience, on my reality as a Nigerian and as an African, I mean, we just concluded our general elections, and technology was made to play a huge role in ensuring transparency, you know, the integrity of the elections, which unfortunately didn’t achieve that objective.

But besides that, there are also a lot of concerns around how technology could be manipulated or has been manipulated in order to literally alter potential outcomes of elections. We’re seeing issues of microtargeting; you know, misinformation campaigns around [the] election period to demarcate, you know, certain candidates.

But what’s even most concerning for me is how technology has been sometimes manipulated to totally alter the outcome of the election. And I’ll give you a very clear example in terms of the just-concluded general elections in Nigeria. So technology was supposed to play a big role. Results were supposed to be transmitted to a central server right from the point of voting. But unfortunately, those results were not transmitted.

In fact, three or four days after the election, 50 percent of the results were not uploaded. As of the time that the election results were announced, less than 50 percent of the results had been transmitted, which then began to, you know, lead to questioning of the integrity of those outcomes. These are supposed to be—elections are supposed to be transmitted, like, on the spot. So, you know, it becomes concerning.

The electoral panel [gave] an excuse that there was a technical glitch around, you know, their server and all of that. But then the question is, was there actually a technical glitch, or was there a compromise or a manipulation by certain, you know, bad actors to be able to alter the outcome of the election? [This] used to be the order of the day in many supposedly, you know, democratic countries, especially from the part of the world that I come from, where people really doubt whether what they see as the outcomes of their election is the actual outcome or somebody just writing something that they want.

So technology has become a big issue in elections. On one side, technology has the potential to improve on [the] integrity of elections. But on the other side, bad actors also have the tendency to manipulate technology to make sure that the opinions or the wishes of the people do not matter at the end of the day. So that’s very important here.

KHUSHBU SHAH: And you just touched on my next question for Alissa and J.C. So, as you mentioned, digital authoritarians have used tech to abuse human rights, limit internet freedoms. We’re seeing this in Russia and Myanmar, Sudan, and Libya. Those are some examples. [The] deputy secretary mentioned a few others. For example, in early 2022, at the start of its invasion of Ukraine, Russia suppressed domestic dissent by closing or forcing into exile the handful of remaining independent media outlets. In at least fifty-three countries, users have faced legal repercussions for expressing themselves online, often leading to prison terms, according to a report from Freedom House. It’s a trend that leaves people on the frontlines defenseless, you know, of course, including journalists and activists alike.

And so, J.C., what have you seen globally? What are the key issues we must keep an eye on? And what—and what are some practical steps to mitigate some of these issues?

JUAN CARLOS LARA: Yeah. I think it’s difficult to think about the practical steps without first addressing what those issues are. And I think Boye was pointing out basically what has been a problem as perceived by many in the body politic, or even by many activists throughout the world. But I think it’s important to also note that these broader issues about the threats to democracy, about the threats to human rights, [they] manifest sometimes differently. And that includes how they are seen in my region, in Latin America, where, for instance, the way in which you see censorship might differ from country to country.

While some have been able to pass laws, authoritarian laws that restrict speech and that restrict how expression is represented online and how it’s penalized, some other countries have resorted to the use of existing censorship tools. Like, for instance, some governments [are] using [Digital Millennium Copyright Act] notice and technical mechanisms to delete or to remove some content from the online sphere. So that also becomes a problematic issue.

So when we speak about, like, how do we go into, like, the practical ways to address this, we really need to identify… some low-level practices [that] connect with the higher-level standards that we aspire to for democracies; and how bigger commitments to the rule of law and to fair elections and to addressing and facing human rights threats go down to the lower level of what governments are actually doing, what people are actually doing when they are presented with the possibility of exercising some power that can affect the human rights of the population in general. So to summarize a bit of that point, we still see a lot of censorship, surveillance, internet blockings, and also, increasingly, the use of emerging technologies as things that might be threatening to human rights.

And while some of those are not necessarily exclusive to the online sphere, they have certainly been evolving—they have been evolving [for] several years. So we really need to address how those are represented today.

KHUSHBU SHAH: Thank you. Alissa, as our industry expert I want to ask you the same question. And especially I want you to maybe touch upon what J.C. was saying about low-level practices that might be practical.

ALISSA STARZAK: You know, I think I actually want to step back and think about all of this, because I think—I think one of the challenges that we’ve seen, and we certainly heard this in Deputy Secretary Sherman’s remarks—is that technology brings opportunities and risks. And some of the challenges, I think, that we’ve touched on are part of the benefit that we saw initially. So the drawback that comes from having broad access is that you can cut it off.

And I think that as we go forward, thinking about the Freedom Online Coalition and sort of how this all fits together, the idea is to have conversations about what it looks like long term, what are the drawbacks that come from those low-level areas, making sure that there is an opportunity for activists to bring up the things that are coming up, for industry, sort of folks in my world, to do the same. And making sure that there’s an opportunity for governments to hear it in something that actually looks collaborative.

And so I think that’s our big challenge. We have to find a way to make sure [that] those conversations are robust, that there is dialogue between all of us, and [that] we can both identify the risks that come from low-level practices like that and then also figure out how to mitigate them.

KHUSHBU SHAH: Thank you. And so, back to you—both of you. I’d like to hear from you both about, as part of civil society—we can start with you, Adegoke—what role as an organization, such as the Freedom Online Coalition, what kind of role can it play in all of these issues that we’re talking about as it expands and it grows in its own network?

ADEBOYE ADEGOKE: Yeah. So I think the work of the Freedom Online Coalition is very critical in such a time as this. So when you look at most international or global [platforms] where technology and its impact are being discussed, human rights is rarely at the center of the issues. And I think that is where the advocacy comes in terms of highlighting and spotlighting, you know, the relevance of human rights to this issue. And as a matter of fact, not just relevance but the importance of human rights to this issue.

I think the work of the FOC is relevant even more to the Global South than probably it is to the Global North because in the Global South you—our engagement with technology, and I mean at the government level, is only from the—it’s likely from the perspective of… economics and… security. [Human rights] is, sadly, barely part of the conversation. So, you know, with a platform like the FOC, it’s an opportunity to mainstream human rights into the technology, you know, conversation generally, and it’s a great thing that some of us from that part of the world are able to engage at this level and also bring those lessons back to our work, you know, domestically in terms of how we engage the policy process in our countries.

And that’s why it’s very important for the work of the FOC to be expanded to—you know, to have real impact in terms of how it is deliberate in influencing not just regional processes, but also national processes, because the end goal—and I think the beauty of all the beautiful work that is being done by the coalition—is to see how that reflects on how governments are engaging technology, in terms of how governments are consciously taking into cognizance the human rights implications of, you know, new emerging technologies and even existing technologies. So I think the FOC is a very, very important stakeholder in the technology conversation globally.

KHUSHBU SHAH: J.C., I want to ask you the same question, especially as Chile joined the FOC in recent years. And [I’d] love to hear what you think.

JUAN CARLOS LARA: Yeah. I think it’s important to also note what Boye was saying in the larger context of when this has happened for the FOC. Since its creation, we have seen what has happened in terms of shutdowns, in terms of war, in terms of surveillance revelations. So it’s important to also connect what the likemindedness of certain governments and the high-level principles have to do with the practice of those same governments, as well as their policy positions both in foreign policy forums and internally, as the deputy secretary was mentioning.

I think it’s—that vital role that Boye was highlighting, it’s a key role but it’s a work in progress constantly. In which way? Throughout the process of the FOC meeting and producing documents and statements, that’s when the advisory network that Boye and myself are members of was created. Throughout that work, we’ve been able to see what happens inside the coalition and what—the discussions they’re having to some degree, because I understand that some of them might be behind closed doors, and what those—how the process of those statements comes to be.

So we have seen that very important role [in] how it’s produced and how it’s presented by the governments and their dignitaries. However, I still think that it’s a work in progress because we still need to be able to connect that with the practice of governments, including those that are members of the coalition, including my own government that recently joined, and how that is presented in internal policy. And at the same time, I think that key role still has a big room—a big role to play in terms of creating those principles; in terms of developing them into increasingly detailed points of action for the countries that are members of the coalition; but also then trying to influence other countries, those that are not members of the coalition, in order to create, like, better standards for human rights for all internet users.

KHUSHBU SHAH: Any thoughts, Alissa?

ALISSA STARZAK: Yeah. You know, I think J.C. touched on something that is—that is probably relevant for everyone who’s ever worked in government, which is the reality that governments are complicated and there isn’t one voice, often, and frequently what you see is that the people who are focused on one issue may not have the same position as people who are working on it from a different angle. And I think the interesting thing for me about the FOC is not that you have to change that as a fundamental reality, but that it’s an opportunity for people to talk about a particular issue with a focus on human rights and take that position back. So everybody sitting in this room who has an understanding of what human rights online might look like, to be able to say, hey, this is relevant to my government in these ways if you’re a government actor, or for civil society to be able to present a position, that is really meaningful because it means that there’s a voice into each of your governments. It doesn’t mean that you’re going to come out with a definitive position that’s always going to work for everyone or that it’s going to solve all the problems, but it’s a forum. And it’s a forum that’s focused on human rights, and it’s focused on the intersection of those two, which really matters.

So, from an FOC perspective, I think it’s an opportunity. It’s not going to ever be the be all and end all. I think we all probably recognize that. But you need—I think we need a forum like this that really does focus on human rights.

KHUSHBU SHAH: An excellent point, and it brings me to my next question for you three. Let’s talk specifics, speaking of human rights: internet shutdowns. So we’ve mentioned Russia. Iran comes to mind as well during recent months, during protests, and recently, very recently, the Indian government cut tens of millions of people off in the state of Punjab as it searched for a Sikh separatist.

So what else can this look like, J.C.? Those are some really sort of very basic, very obvious examples of internet shutdowns. And how can the FOC and its network of partners support keeping people online?

JUAN CARLOS LARA: Yes, thank you for that question because, specifically for Latin America, the way in which shutdowns may present themselves is not necessarily a huge cutting off of the internet for many people. It sometimes presents in other ways. Like, for instance, we have seen the case of one country in South America in which the telecommunication networks have been basically abandoned, and therefore all of the possibilities of using the internet are lost not because the government has decided to cut the cable, but rather because it has let it rot, or because it presents in the form of partial and locally focused cutting off of services for certain platforms.

I think the idea of internet shutdowns has provided awareness about the problems that come with losing access to the internet, but that also can be taken by governments to be able to say they have not shut access to the internet; it’s just that there’s either too much demand in a certain area or that a certain service has failed to continue working, or that it’s simply failures by telecommunication companies, or that a certain platform has not complied with its legal or judicial obligations and therefore it needs to be taken off the internet. So it’s important that when we speak about shutdowns we consider the broader picture and not just the idea of cutting off all of the internet.

KHUSHBU SHAH: Adeboye, I’d like to hear what your thoughts are on this in the context of Nigeria.

ADEBOYE ADEGOKE: Yeah. It’s really very interesting. And to the point, you know, he was making about, you know, in terms of when we talk about shutdown, I think the work around [understanding shutdowns] has been great and it’s really helped the world to understand what is happening globally. But just as he said, I think there are also some other forms of exclusion that [happen] because of government actions and inactions that probably wouldn’t fall on that thematic topic of shutdown, but it, in a way, is some sort of exclusionary, you know, policy.

So an example is in some remote areas in Nigeria, for example, for most of the technology companies who are laying cables, providing internet services, it doesn’t make a lot of business sense for them to be, you know, present in those locations. And to make matters worse for them, the authorities, the local governments, are imposing huge taxes on those companies to be able to lay their fiber cables into those communities, which means that for the businesses, for the companies, it doesn’t make any economic sense to invest in such locations. And so, by extension, those [kinds] of people are shut down from the internet; they are not able to access communication networks and all of that.

But I also think it’s very important to highlight the fact that—I mean, I come from the continent where the internet is shut down for the silliest reasons that you can imagine. I mean, there have been [shutdowns] because [the] government was trying to prevent students from cheating in exams, you know? Shutdowns are common during elections, you know? [Shutdowns] happen because [the] government was trying to prevent gossip. So it’s the silliest of reasons why there have been internet [shutdowns] in the area, you know, in the part of the world that I am from.

But what I think—in the context of the work that the FOC does, I think something that comes to mind is how we are working to prevent future [shutdowns]. I spoke about the election that just ended in Nigeria. One of the things that we did was to, shortly before the election, organize, like, a stakeholder meeting of government representatives, of fact-checkers, of, you know, the platforms, the digital companies, civil society [organizations], and electoral [observers]… to say that, OK, an election is—if you are from Africa, any time an election is coming you are expecting a shutdown. So it’s to have a conversation and say: An election is coming. There is going to be a lot of misinformation. There’s going to be heightened risk online. But what do we need to do to ensure that we don’t have to shut down the internet?

So, for Nigeria, we were able to have that conversation a few weeks before the election, and luckily the [internet was] not shut down. So I mean, I would describe that as a win. But just to emphasize that it is helpful when you engage in a platform like the FOC to understand the dimensions that [shutdowns] take across the world. It kind of helps you to prepare for—especially if you were in the kind of tradition that we were to prepare for potential shutdown. And also I think it’s also good to spotlight the work that Access Now has done with respect to spotlighting the issue of shutdown because it helps to get their perspective.

So, for example, I’m from Nigeria. We have never really experienced a widespread shutdown in Nigeria, but because we are seeing it happen in our sister—in our neighboring countries—we are kind of conscious of that and were able to engage ahead of elections to see, oh, during the election in Uganda, [the] internet was shut down. In Ethiopia, [the] internet was shut down. So it’s likely [the] internet will be shut down in Nigeria. And then to say to the authorities: No, you know what? We don’t have to shut down the internet. This is what we can do. These are the mechanisms on [the] ground to identify risks online and address those risks. And also, holding technology platforms accountable to make sure that they put mechanisms in place, to make sure they communicate those mechanisms clearly during elections.

So it’s interesting how much work needs to go into that, but I think it’s… important work. And I think for the FOC, it’s also—it’s also very important to continue to communicate the work that the FOC is doing in that regard so that more and more people become aware of it, and sort of more people are prepared, you know, to mitigate it, especially where you feel there is the highest risk of a shutdown.

KHUSHBU SHAH: Thank you. I’m going to jump across to the other side of that spectrum, to surveillance tech, to the—to the almost literally—the opposite, and I wanted to start with the news that Deputy Secretary Sherman mentioned, with the news that the Biden administration announced just this afternoon, a new executive order that would broadly ban US federal agencies from using commercially developed spyware that poses threats to human rights and national security.

The deputy secretary also mentioned, Alissa, some guiding principles that they were going to announce later this week with the FOC. What are some—what are some things—what are some principles or what are some ambitions that you would hope to see later this week?

ALISSA STARZAK: So I think there’s a lot coming is my guess. Certainly the surveillance tech piece is an important component, but I think there are lots of broad guidelines.

I actually want to go back to shutdowns for a second, if you don’t mind…. Because I think it’s a really interesting example of how the FOC can work well together and how you take all of the different pieces—even at this table—of what—how you sort of help work on an internet problem or challenge, right? So you have a world where you have activists on the ground who see particular challenges who would then work with their local government. You have industry partners like Cloudflare who can actually show what’s happening. So are there—is there a shutdown? Is there a network disruption? So you can take the industry component of it, and that provides some information for governments, and then governments can work together to sort of pressure other governments to say these aren’t acceptable. These are—these norms—you can’t—no, you can’t shut down because you are worried about gossip, and cheating in an exam, right? There’s a set of broad international norms that become relevant in that space, and I think you take that as your example. So you have the players—you have the government to government, you have the civil society to government, you have the industry which provides information to government and civil society. And those are the pieces that can get you to a slightly better place.

And so when I look at the norms coming out later this week, what I’m going to be looking for are that same kind of triangulation of using all of the players in the space to come to a better—to come to a better outcome. So whether that’s surveillance tech, sort of understanding from civil society how it has been used, how you can understand it from other tech companies, how you can sort of mitigate against those abuses, working with governments to sort of address their own use of it to make sure that that doesn’t become a forum—all of those pieces are what you want from that model. And I think—so that’s what I’m looking for in the principles that come out. If they have that triangulation, I’m going to be—I’m going to be very happy.

KHUSHBU SHAH: What would you both be looking for, as well? J.C., I’ll start with you.

JUAN CARLOS LARA: Yeah, as part of the [FOC advisory network], of course, there might be some idea of what’s coming in when we speak about principles for governments for the use of surveillance capabilities.

However, there are two things that I think are very important to consider for this type of issue: first of all, which principles and which rules are adopted by the states. I mean, it’s a very good—it’s very good news that we have this executive order as a first step towards thinking how states refrain from using surveillance technology disproportionately or indiscriminately. That’s a good sign in general. That’s a very good first step. But secondly, within this same idea, we would expect other countries to follow suit and hopefully to expand the idea of bans on spyware or bans on surveillance technology that by itself may pose grave risks to human rights, and not just in the case of this, or that, or the fact that it’s commercial spyware, which is a very important threat, including for countries in Latin America that are regular customers of certain spyware producers and vendors.

But separately from that, I think it’s very important to also understand how this ties into the purposes of the Freedom Online Coalition and its principles, and how to have further principles that hopefully pick up on the learnings that we have had for several years of discussion on the deployment of surveillance technologies, especially by academia and civil society. If those are picked up by the governments themselves as principles, we expect them to exist in practice.

One of the key parts of the discussion on commercial spyware is that I can easily think of a couple of Latin American countries that are regular customers. And one of them is an FOC member. That’s very problematic, when we speak about whether they are abiding by these principles and by human rights obligations or not, and therefore whether these principles will generate any kinds of restraint in the use and the procurement of such surveillance tools.

KHUSHBU SHAH: So I want to follow up on that. Do you think that there—what are the dangers and gaps of having this conversation without proposing privacy legislation? I want to ask both of our—

JUAN CARLOS LARA: Oh, very briefly. Of course, enforcement and the fact that rules may not have the institutional framework to operate I think is a key challenge. That is also tied to capacities, like having people with enough knowledge and having enough, of course, exchange of information between governments. And resources. I think it’s very important that governments are also able to enact the laws that they put on the books, that they are able to enforce them, but also to train every operator, every official that might be in contact with any of these issues. So that kind of principle may not just be adopted as a common practice, but also in the enforcement of the law, so it gets into the books. Among other things, I think capacities and resources are, like—and collaboration—are key for those things.

KHUSHBU SHAH: Alissa, as our industry expert, I’d like to ask you that same question.

ALISSA STARZAK: You know, I think one of the interesting things about the commercial spyware example is that there is a—there is a government aspect on sort of restricting other people from doing certain things, and then there is one that is a restriction on themselves. And so I think that’s what the executive order is trying to tackle. And I think that the restricting others piece, and sort of building agreement between governments that this is the appropriate thing to do, is clearly the objective here, right?

So, no, it’s not that every government does this. I think that there’s a reality of surveillance foreign or domestic, depending on what it looks like. But thinking about building rulesets of when it’s not OK, because I think there is—there can be agreement if we work together on what that ruleset looks like. So we—again, this is the—we have to sort of strive for a better set of rules across the board on when we use certain technologies. And I think—clearly, I think what we’ve heard, the executive order, it’s the first step in that process. Let’s build something bigger than ourselves. Let’s build something that we can work across governments for. And I think that’s a really important first step.

ADEBOYE ADEGOKE: OK. Yeah, so—yeah, so, I think, yeah, the executive order, it’s a good thing. Because I was, you know, thinking to myself, you know, looking back to many years ago when in my—in our work when we started to engage our government regarding the issue of surveillance and, you know, human rights implications and all of that, I recall very vividly a minister at the time—a government minister at the time saying that even the US government is doing it. Why are you telling us not to do it? So I think it’s very important.

Leadership is very key. The founding members of the FOC, if you look at the FOC, the principles and all of that, those [texts] are beautiful. Those [texts] are great. But then there has to be a demonstration of—you know, of application of those [texts] even by the governments leading, you know, the FOC so that it makes the work of people like us easier, to say these are the best examples around and you don’t get the kind of feedback you got many years ago; like, oh, even the US government is doing it. So I think the executive order is a very good place to start from, to say, OK, so this is what the US government is doing right now and this is how it wants to define its engagement with spyware.

But, of course, like, you know, he said, it has to be, you know, expanded beyond just, you know, concerns around spyware. It has to be expanded to different ways in which advanced technology [is] applied in government. I come from a country that has had to deal with the issues of, you know, terrorism very significantly in the past ten years, thereabout, and so every justification you need for surveillance tech is just on the table. So whenever you want to have the human rights conversation, somebody’s telling you that, you want terrorists to kill all of us? You know? So it’s very important to have some sort of guiding principle.

Yeah, we understand [the] importance of surveillance to security challenges. We understand how it can be deployed for good uses. But we also understand that there are risks to human-rights defenders, to journalists, you know, to people who hold [governments] accountable. And those have to be factored into how these technologies are deployed.

And in terms of, you know, peculiar issues that we have to face, basically you are dealing with issues around oversight. You are dealing with issues around transparency. You are dealing with issues around [a] lack of privacy frameworks, et cetera. So you see African governments, you know, acquiring similar technologies, trying, you know, in the—I don’t want to say in the guise, because there are actually real problems where those technologies might be justified. But then, because of the lack of these principles, these issues around transparency, oversight, legal oversight, human-rights considerations, it then becomes problematic, because this too then become—it’s true that it is used against human-rights defenders. It’s true that it is used against opposition political parties. It’s true that it is used against activists and dissidents in the society.

So it’s very important to say that we look at the principles that have been developed by the FOC, but we want to see FOC governments demonstrate leadership in terms of how they apply those principles to reality. It makes our work easier if that happens, to use that as an example, you know, to engage our government in terms of how this is—how it is done. And I think these examples help a lot. It makes the work very easy—I mean, much easier; not very easy.

KHUSHBU SHAH: Well, you mentioned a good example; so the US. So you reminded me of the biometric data that countries share in Central and North America as they monitor refugees, asylum seekers, migrants. Even the US partakes. And so, you know, what can democracies do to address the issue when they’re sometimes the ones leveraging these same tools? Obviously, it’s not the same as commercial spyware, but—so what are the boundaries of surveillance and appropriate behavior of governments?

J.C., can I throw that question to you?

JUAN CARLOS LARA: Happy to. And we saw a statement by several civil-society organizations on the use of biometric data with [regard] to migrants. And I think it’s very important that we address that as a problem.

I really appreciated that Boye mentioned, like, countries leading by example, because that’s something that we are often expecting from countries that commit themselves to high-level principles and that sign on to human-rights instruments, that sign declarations by the Human Rights Council and the General Assembly of the [United Nations] or some regional forums, including to the point of signing on to FOC principles.

I think that it’s very problematic that things like biometric data are being used—are being collected from people that are in situations of vulnerability, as is the case of very—many migrants and many people that are fleeing from situations of extreme poverty and violence. And I think it’s very problematic also that that leads to [the] exchange of information between governments without proper legal safeguards that prevent that data from falling into the hands of the wrong people, or even that prevent that data from being collected from people that are not consenting to it or without legal authorization.

I think it’s very problematic that countries are allowing themselves to do that under the idea that this is an emergency situation without proper care for the human rights of the people who are suffering from that emergency and that situations of migrations are being treated like something that must be stopped or contained or controlled in some way, rather than addressing the underlying issues or rather than also trying to promote forms of addressing the problems that come with it without violating human rights or without infringing upon their own commitments to human dignity and to human privacy and to the freedom of movement of people.

I think it’s—that it’s part of observing legal frameworks and refraining from collecting data that they are not allowed to collect, but also obeying their own human-rights commitments. And that often leads to refraining from taking certain actions. And in that regard, I think the discussions that there might be on any kind of emergency still need to take a few steps back and see what countries are supposed to do and what obligations they are supposed to abide [by] because of their previous commitments.

KHUSHBU SHAH: So thinking about what you’ve just said—and I’m going to take a step back. Alissa, I’m going to ask you kind of a difficult question. We’ve been talking about specific examples of human rights and what it means to have online rights in the digital world. So what does it mean in 2023? As we’re talking about all of this, all these issues around the world, what does it mean to have freedom online and rights in the digital world?

ALISSA STARZAK: Oh, easy question. It’s really easy. Don’t worry; we’ve got that. Freedom Online’s got it; you’ve just got to come to their meetings.

No, I think—I think it’s a really hard question, right? I think that we have—you know, we’ve built something that is big. We’ve built something where we have sort of expectations about access to information, about the free flow of information across borders. And I think that, you know, what we’re looking at now is finding ways to maintain it in a world where we see the problems that sometimes come with it.

So when I look at the—at the what does it mean to have rights online, we want to—we want to have that thing that we aspire to, I think that Deputy Secretary Sherman mentioned, the sort of idea that the internet builds prosperity, that the access to the free flow of information is a good thing that’s good for the economy and good for the people. But then we have to figure out how we build the set of controls that go along with it that are—that protect people, and I think that’s where the rule of law does come into play.

So thinking about how we build standards that are respect—that respect human rights in the—when we’re collecting all of the information of what’s happening online, right, like, maybe we shouldn’t be collecting all of that information. Maybe we should be thinking of other ways of addressing the concerns. Maybe we should be building [a] framework that countries can use that are not us, right, or that people at least don’t point to the things that a country does and say, well, if they can do this, I can do this, right, using it for very different purposes.

And I think—I think that’s the kind of thing that we’re moving—we want to move towards, but that doesn’t really answer the underlying question is the problem, right? So what are the rights online? We want as many rights as possible online while protecting security and safety, which is, you know, also—they’re also individual rights. And it’s always a balance.

KHUSHBU SHAH: It seems like what you’re touching on—J.C., would you like to—

JUAN CARLOS LARA: No. Believe me.

KHUSHBU SHAH: Well, it seems like what you’re talking about—and we’re touching—we’ve, like, talked around this—is, like, there’s a—there’s a sense of impunity, right, when you’re on—like in the virtual world, and that has led to what we’ve talked about for the last forty minutes, right, misinformation/disinformation. And if you think about what we’ve all been talking about for the last few weeks, which is AI—and I know there have been some moments of levity. I was thinking about—I was telling Alissa about how there was an image of the pope wearing a white puffer jacket that’s been being shown around the internets, and I think someone pointed out that it was fake, that it was AI-generated. And so that’s one example. Maybe it’s kind of a fun example, but it’s also a little bit alarming.

And I think about the conversation we’re having, and what I really want to ask all of you is, so, how might these tools—like the AI, the issue of AI—further help or hurt [human rights] activists and democracies as we’re going into uncharted territories, as we’re seeing sort of the impact of it in real time as this conversation around it evolves and how it’s utilized by journalists, by activists, by politicians, by academics? And what should the FOC do—I know I’m asking you again—what can the FOC do? What should we aim for to set the online world on the right path for this uncharted territory? I don’t know who wants to start and attempt.

ADEBOYE ADEGOKE: OK, I’ll start. Yeah.

So I think it’s great that, you know, the FOC has, you know, different task [forces] working on different thematic issues, and I know there is a task force on the issue of artificial intelligence and human rights. So I think for me that’s a starting point, you know, providing core leadership on how emerging technology generally impacts… human rights. I think that’s the starting point in terms of what we need to do because, like the deputy secretary said, you know, technology’s moving at such a pace that we can barely catch up on it. So we cannot—we cannot afford to wait one minute, one second before we start to work on this issue and begin to, you know, investigate the human rights implications of all of those issues. So it’s great that the FOC’s doing that work.

I would just say that it’s very important for—and I think this [speaks] generally to the capacities of the FOC. I think the FOC needs to be further capacitated so that this work can be brought to bear on real-life issues, in regional, in national engagement, so that some of the hard work that has been put into those processes can really be reflected in real, you know, national and regional processes.

ALISSA STARZAK: Yeah. So I definitely agree with that.

I think—I think on all of these issues I think we have a reality of trying to figure out what governments do and then what private companies do, or what sort of happens in industry, and sometimes those are in two different lanes. But in some ways figuring out what governments are allowed to do, so thinking about the sort of negative potential uses of AI may be a good start for thinking about what shouldn’t happen generally. Because if you can set a set of norms, if you can start with a set of norms about what acceptable behavior looks like and where you’re trying to go to, you’re at least moving in the direction of the world that you think you want together, right?

So understanding that you shouldn’t be generating it for the purpose of misinformation or, you know, that—for a variety of other things, at least gets you started. It’s a long—it’s going to be a long road, a long, complicated road. But I think there’s some things that can be done there in the FOC context.

JUAN CARLOS LARA: Yes. And I have to agree with both of you. Specifically, because the idea that we have a Freedom Online Coalition to set standards, or to set principles, and a task force that can devote some resources, some time, and discussion to that, can also identify which part is actually the promise and which part is the peril. And how governments are going to react in a way that promotes prosperity, that promotes interactivity, and promotes commerce—exercise of human rights, the rights of individuals and groups—and which sides of it become problematic from the side of the use of AI tools, for instance, for detecting certain speech for censorship or for identifying people in the public sphere, because they’re out on the streets, or to collect and process people’s [data] without consent.

I think because that type of expertise and that type of high political debate can be held at the FOC, that can promote the type of norms that we need in order to understand, like, what’s the role of governments in order to steer this somewhere. Or whether they should refrain from doing certain actions that—with the good intention of preventing the spread of AI-generated misinformation or disinformation—may end up stopping these important tools from being used creatively or in constructive ways, or in ways that can allow more people to be active participants in the digital economy.

KHUSHBU SHAH: Thank you. Well, I want to thank all three of you for this robust conversation around the FOC and the work that it’s engaging in. I want to thank Deputy Secretary Sherman and our host here at the Atlantic Council for this excellent conversation. And so if you’re interested in learning more about the FOC, there’s a great primer on it on the DFRLab website. I recommend you check it out. I read it. It’s excellent. It’s at the bottom of the DFRLab’s registration page for this event.


Modernizing critical infrastructure protection policy: Seven perspectives on rewriting PPD21 https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/modernizing-critical-infrastructure-protection-policy-seven-perspectives-on-rewriting-ppd21/ Wed, 22 Mar 2023 12:30:00 +0000 https://www.atlanticcouncil.org/?p=625907 In February of 2013, then-President Obama signed a landmark presidential policy directive - Presidential Policy Directive 21 (PPD 21) - that defined how US departments and agencies would provide a unity of government effort to strengthen and maintain US critical infrastructure. Almost a decade later, evolutions in both the threat landscape and the interagency community invite the US government to revise this critical policy.

The post Modernizing critical infrastructure protection policy: Seven perspectives on rewriting PPD21 appeared first on Atlantic Council.

]]>
In February of 2013, then-President Obama signed a landmark policy directive—Presidential Policy Directive 21 (PPD-21)—that defined how US departments and agencies would pursue a unity of government effort to strengthen and maintain US critical infrastructure. Almost a decade later, evolutions in both the threat landscape and the interagency community invite the US government to revise this critical policy.

As the current administration looks to modernize this essential policy, particular emphasis must be placed on two key steps. First, deconflicting and clarifying the specific roles and responsibilities within the ever-growing interagency—particularly the relationship between Sector Risk Management Agencies (SRMAs) and the Cybersecurity and Infrastructure Security Agency (CISA). Second, helping policymakers better understand and work to implement a risk-based approach to critical infrastructure protection—if everything is critical, what gets prioritized?

To dive deeper on this topic, we asked seven experts to offer their perspectives on critical infrastructure and how we can rebalance the interagency to better secure that infrastructure:

If the US government were to change the way it categorizes or prioritizes critical infrastructure, what’s a better alternative to the current approach?

“Over time, the phrase “critical infrastructure” has become overused. This overuse has led to varying definitions of the phrase, and the analyses conducted to better categorize the concept have led to inconsistent focus and findings across the sectors. The baseline definition—assets, systems, and networks, whether physical or virtual, [that] are considered so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety, or any combination thereof—does not lend clarity, because there is a definitional tension between infrastructure that is critical for sustaining and supporting Americans’ daily lives and the economy, and infrastructure that might be dangerous (e.g., chemical or nuclear facilities) but not necessarily critical.

The only way to resolve what should consistently be used as the underlying definition for “critical infrastructure” is to clarify the goals or desired end state for these national risk management efforts. For instance, there are stated end goals to support continuity of government objectives, but it is not clear that there is a similar set of national resiliency goals to support the nation’s critical infrastructure. Recent CSAC recommendations (September 2022) made this point directly: “Clear national-level goals in the areas of national security, economic continuity, and health and human safety would help organize public and private critical infrastructure stakeholders in the analysis of what it would take to accomplish those objectives.” Whatever end goal is articulated, it must be sustained consistently for a long time (10 years or more). This will create the continuity necessary to marshal the resources of both industry and government to carry out these goals.

Government does not need to begin from nothing to carry out this work. The sector structure is in place, and the National Critical Functions are understood. Whatever end goal is articulated, the initial analysis can be mobilized by using the current sector structure and what we already know about the critical functions, following this sequential approach:

  • Foundational/lifeline sector analysis: Energy, communications, transportation, and water & wastewater. All other sectors are dependent on these critical functions, existing analysis has shown that disruption impacts are felt at once, and these sectors are precursors to community restoration post-disaster.
  • Middle-level infrastructure: Chemical, financial services, food and agriculture, healthcare/public health, and information technology (IT). The critical functions performed in these sectors are reliant on foundational infrastructure, are complex systems-of-systems, and are necessary for continuity of the economy/society.
  • Higher-level infrastructure (end users, producers of goods and services): Commercial & government facilities, critical manufacturing, defense industrial base (DIB), and emergency services. In some ways, these sectors are consumers of infrastructure and not really providers of it. This is not to suggest that the services provided by these sectors are not critical, but that they rely upon infrastructure provided by others.

With clearly articulated, long-term national goals, and by leveraging the structures and analysis completed to date, the means to identify, categorize, and prioritize which infrastructure is “critical” will be a logical outcome of the analysis.”

Kathryn Condello, Senior Director, National Security/Emergency Preparedness, Lumen Technologies

In theory, what is a Sector Risk Management Agency (SRMA)? In practice, how should an SRMA’s role change depending on what kind of organization plays that role?

“In theory, an SRMA should be the day-to-day, substantively deep operational partner within the US government for private sector critical infrastructure partners. SRMAs should be the entities in the trenches with critical infrastructure operators—working to better understand the threat environment, lift up and support those who lack sufficient resources or capabilities, and guide our partners to acceptable and more sustainable levels of risk management and resilience.

In practice, an organization’s resources and capabilities—and the role that they are able to play—varies a lot depending on the type of organization in this role. I’ll provide two examples here. First, some SRMAs—like the Coast Guard—have regulatory capabilities to help apply pressure to owner/operators in their sector to raise their baselines for security. Others, like the Department of Energy (DoE), need to rely on other agencies to do so or use other, more incentive-based programs to achieve these objectives. Second, SRMAs may bring a different balance of resources and substantive sector knowledge to the table. As an example, CISA—which serves as the SRMA for several sectors—may bring far more resources and manpower to the table than another single agency but may lack the deep sector knowledge and partnerships of an organization like DoE.”

Will Loomis, Associate Director, Digital Forensics Research Lab, Cyber Statecraft Initiative, Atlantic Council

Where are some of the biggest existing fault lines in the relationship between CISA and the SRMAs? How might any future revision to PPD-21 better address these?

“Current PPD-21 guidance is based on the model of the 16 critical infrastructure sectors where roles and responsibilities fall under the designated leads for each sector. This model works well when it comes to directing congressional funding to a particular agency or knowing which agency leads the response to an incident in a specific sector.

In reality, significant challenges to the security and safety of the nation’s critical infrastructure are typically complex, multi-faceted events that are rarely limited to just one sector. This holds true for both a single, catastrophic incident and the simple, daily work necessary to mitigate risks. Actions in both situations depend on and have impacts well outside a single sector.

PPD-21 guidance is purposely not prescriptive, which leaves certain elements open to interpretation when it comes to the SRMA’s primacy compared to CISA. Additionally, current guidance does not account for an agency’s capability to fulfill its SRMA responsibilities. The expertise and capabilities of some SRMAs are generally agreed to be more mature than those of others. During my time at CISA as part of the COVID Task Force, I experienced firsthand the friction created by these differing views and capabilities. Disagreements on roles and responsibilities during the response to ransomware at a hospital, or regarding the security of information systems in portions of the vaccine supply chain, induced unnecessary challenges during an already difficult national pandemic.

I am not advocating for more detail on roles and responsibilities, since no amount of guidance could cover every situation and account for the differences in each agency’s expertise and capabilities. I do think a different approach, in which PPD-21 guidance has an increased focus on national functions and an emphasis on greater collaboration and integration, would better serve the ability of federal agencies to fulfill their missions.”

Steve Luczynski, Senior Manager – Critical Infrastructure Security, Accenture Federal Services

What responsibilities should SRMAs be investing in to be better operational partners for the private sector?

“SRMAs should look to prioritize those assets most significant to national security, begin processes to analyze risk, and ultimately buy down that risk by utilizing experts within those sectors and cross-training them in cyber. It’s time we refocus on nationally critical assets vs. trying to be everything to every asset, almost like a helpdesk approach to critical infrastructure protection. This includes clearly defining roles for state and local entities, as well as setting objectives for performance. Finally, the government should cross-train the private sector in a common language for coordination, like the Incident Command System, to work together better on a day-to-day basis, as well as during response and recovery from cyber events.”

Megan Samford, Non-Resident Senior Fellow, Cyber Statecraft Initiative, Atlantic Council; VP & Chief Product Security Officer – Energy Management, Schneider Electric

How should any future revision of PPD-21 think holistically about SRMA capabilities?

“In a perfect world there would be a dedicated cybersecurity SME at the federal level for each critical infrastructure sector, either within each SRMA or at CISA as a main technical liaison. In lieu of this reality, and working with near-future capabilities, SRMAs’ cybersecurity maturity and mandates should capture the entire supply chain—security management of suppliers, enterprise content management, development environment, products and services, upstream supply chain, operational technology (OT), and downstream supply chain—aligned to the CISA Cybersecurity Performance Goals as a baseline. As the SRMAs designate required tools and capabilities at the asset owner level, they should continue vendor-neutral evaluations of designated and required tools and capabilities. These agencies should represent the boots-on-the-ground approach to the reframing sections above. SRMAs also need to identify the level of cybersecurity and risk management that asset owners can afford to own vs. what government can reasonably subsidize and augment. I don’t believe this can be effectively done without addressing the point above. Lastly, SRMAs should reevaluate the definition and efficacy of information sharing capabilities within each sector, as information sharing ≠ situational awareness ≠ incident prevention.

Regardless of commonalities, no two attacks on OT/industrial control systems (ICS) are ever exactly the same, making automated response and remediation difficult. Unfortunately, this reality means that every operation and facility must wait to see another organization victimized before there can be shared signatures, detections, and fully baked intelligence for threat hunting to ensue. In terms of the threat landscape, there is no way to standardize and correlate threat and vulnerability research produced by competitive market leaders. Information sharing lacks trust and verification and has been siloed into sector-specific, private-sector, or government agency-specific mechanisms, creating single sources of information without much consensus. This is a major roadblock for efficacy across SRMAs and their situational awareness/strategic planning.”

Danielle Jablanski, Non-Resident Senior Fellow, Cyber Statecraft Initiative, Atlantic Council; OT Cybersecurity Strategist, Nozomi Networks

How can the US government address risks associated with cross-sector interdependencies in the naturally siloed SRMA model?

“When addressing cyber risks to critical infrastructure, the US government—and industry—need to reframe thinking around jurisdiction and impact. The SRMA model hinges on federal agencies, which creates a governance gap and cognitive blind spot for interdependence. In the same way that the National Security Council drives the interagency process, the US government needs a coordinating body to prioritize and manage the competing and corollary agencies. Whether that is CISA or ONCD, one office must take the strategic, systemic view of critical infrastructure.”

Munish Walter-Puri, Senior Director of Critical Infrastructure, Exiger

In any future policy, how could the US government preserve the ability to regularly adjust the boundaries of critical infrastructure classifications or sectors?

“Presidential Policy Directive 21 identified 16 critical infrastructure sectors and their associated sector-specific agencies (now called SRMAs) and called upon the Secretary of Homeland Security to “periodically evaluate the need for and approve changes to critical infrastructure sectors” and to “consult with the Assistant to the President for Homeland Security and Counterterrorism before changing a critical infrastructure sector or a designated [SRMA] for that sector.” Since the issuance of PPD-21, changes to the Homeland Security Act have required a reassessment of the current sector structure and SRMA designations at least every five years. The National Defense Authorization Act for Fiscal Year 2021 required the Secretary of Homeland Security to evaluate the sectors and SRMA designations and provide recommendations for revisions to the President. In fulfillment of this mandate, the Department of Homeland Security delivered a report to Congress and the President, assessing that the absence of a statutory basis for the definition of a “sector” has “created a challenge in clarifying and building criteria for clarifying and rationalizing the sector structure.” The report cites the National Infrastructure Protection Plan as the origin of the current operating definition of a “sector”: “[A] logical collection of assets, systems, or networks that provide a common function to the economy, government, or society.”

In evaluating critical infrastructure sector classifications or structure, the federal government should minimize the overall number of sectors to allow for productive engagement to accomplish specific efforts. Focusing on creating structures to enable cross-sector engagement scoped around specific risk management concerns prioritizes the work to be performed and provides flexibility about who needs to be there to support it. The current statutory requirement to regularly evaluate sector classifications would be sufficient, provided the federal government creates a mechanism to convene critical infrastructure owners and operators independent of sector designations. In its September 2022 recommendations to the Director of the Cybersecurity and Infrastructure Security Agency (CISA), the Cybersecurity Advisory Committee Subcommittee on Systemic Risk recommended that CISA “[scope] its national resilience efforts around focus areas like national security, health and human safety, and economic prosperity” with the goal of enabling CISA “to use resources and personnel more efficiently to prioritize the appropriate [National Critical Functions]–and [systemically important entities]–and orient national resilience programming within each scope.” Within each of these focus areas, CISA, in its role as the national coordinator of sector risk management agencies, should periodically assess the challenges facing critical infrastructure owners and operators and identify workstreams to organize relevant entities that measurably contribute to the risk management effort. For example, under the broad focus area of national security, CISA might organize a cross-sector effort to address small unmanned aerial system surveillance of critical infrastructure sites, an issue for which the White House has organized a task force. These assessments should align with the cadence that the Homeland Security Act requires for reassessments of the sector/SRMA designations or in conjunction with the five-year term granted to the CISA Director. The federal government should also ensure that there is a mechanism for leadership of both the Sector Coordinating Councils and Government Coordinating Councils that provides decision-making authority for workstreams as the risk landscape evolves and new challenges arise.”

Jeffrey Baumgartner, Vice President, National Security and Resilience, Berkshire Hathaway Energy

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post Modernizing critical infrastructure protection policy: Seven perspectives on rewriting PPD21 appeared first on Atlantic Council.

]]>
The 5×5—Conflict in Ukraine’s information environment https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-conflict-in-ukraines-information-environment/ Wed, 22 Mar 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=625738 Experts provide insights on the war being waged through the Ukrainian information environment and take away lessons for the future.

The post The 5×5—Conflict in Ukraine’s information environment appeared first on Atlantic Council.

]]>
This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Just over one year ago, on February 24, 2022, Russia launched a full-scale invasion of neighboring Ukraine. The ensuing conflict, Europe’s largest since World War II, has not only besieged Ukraine physically, but also through the information environment. Through kinetic, cyber, and influence operations, Russia has placed Ukraine’s digital and physical information infrastructure—including its cell towers, networks, data, and the ideas that traverse them—in its crosshairs as it seeks to cripple Ukraine’s defenses and bring its population under Russian control. 

Given the privately owned underpinnings of the cyber and information domains by technology companies, a range of local and global companies have played a significant role in defending the information environment in Ukraine. From Ukrainian telecommunications operators to global cloud and satellite internet providers, the private sector has been woven into Ukrainian defense and resilience. For example, Google’s Threat Analysis Group reported having disrupted over 1,950 instances in 2022 of Russian information operations aimed at degrading support for Ukraine, undermining its government, and building support for the war within Russia. The present conflict in Ukraine offers lessons for states as well as private companies on why public-private cooperation is essential to building resilience in this space, and how these entities can work together more effectively. 

We brought together a group of experts to provide insights on the war being waged through the Ukrainian information environment and take away lessons for the United States and its allies for the future. 

#1 How has conflict in the information environment associated with the war in Ukraine compared to your prior expectations?

Nika Aleksejeva, resident fellow, Baltics, Digital Forensic Research Lab (DFRLab), Atlantic Council

“As the war in Ukraine started, everyone was expecting to see Russia conducting offensive information influence operations targeting Europe. Yes, we have identified and researched Russia’s coordinated information influence campaigns on Meta’s platforms and Telegram. These campaigns targeted primarily European countries, and their execution was unprofessional, sloppy, and without much engagement on respective platforms.” 

Silas Cutler, senior director for cyber threat research, Institute for Security and Technology (IST)

“A remarkable aspect of this conflict has been how Ukraine has maintained communication with the rest of the world. In the days leading up to the conflict, there was a significant concern that Russia would disrupt Ukraine’s ability to report on events as they unfolded. Instead of losing communication, Ukraine has thrived while continuously highlighting through social media its ingenuity within the conflict space. Both the mobilization of its technical workforce through the volunteer IT_Army and its ability to leverage consumer technology, such as drones, have shown the incredible resilience and creativity of the Ukrainian people.” 

Roman Osadchuk, research associate, Eurasia, Digital Forensic Research Lab (DFRLab), Atlantic Council

“The information environment was chaotic and tense even before the invasion, as Russia had waged a hybrid war since at least the annexation of Crimea and the war in Eastern Ukraine in 2014. Therefore, the after-invasion dynamic did not bring significant surprises, but it intensified tension and resistance from Ukrainian civil society and government toward Russia’s attempts to explain its unprovoked invasion and muddy the waters around its war crimes. The only things that exceeded expectations were the abuse of the fact-checking toolbox by WarOnFakes and the intensified globalization of the Kremlin’s attempts to tailor messages about the war to its favor.”

Emma Schroeder, associate director, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“The information environment has been a central space and pathway throughout which this war is being fought. Russian forces are reaching through that space to attack and spread misinformation, as well as attacking the physical infrastructure underpinning this environment. The behavior, while novel in its scale, is the continuation of Russian strategy in Crimea, and is very much living up to expectations set in that context. What has surpassed expectations is the effectiveness of Ukrainian defenses, in coordination with allies and private sector partners. The degree to which the international community has sprung forward to provide aid and assistance is incredible, especially in the information environment where such global involvement can be so immediate and transformative.” 

Gavin Wilde, senior fellow, Technology and International Affairs Program, Carnegie Endowment for International Peace

“The volume and intensity of cyber and information operations has roughly been in line with my prior expectations, though the degree of private and commercial activity was something that I might not have predicted a year ago. From self-selecting out of the Russian market to swarming to defend Ukrainian networks and infrastructure, the outpouring of support from Western technology and cybersecurity firms was not on my bingo card. Sustaining it and modeling for similar crises are now key.” 

 
#2 What risks do private companies assume in offering support or partnership to states engaged in active conflict?

Aleksejeva: “Fewer and fewer businesses are betting on Russia’s successful economic future. Additionally, supporting Russia in this conflict in any way is morally unacceptable for most Western companies. Chinese and Iranian companies are different. As for Ukraine, supporting it is morally encouraged, but is limited by many practicalities, such as supply chain disruptions amid Russia’s attacks.”

Cutler: “By providing support during conflict, companies risk becoming a target themselves. Technology companies such as Microsoft, SentinelOne, and Cloudflare, which have publicly reported their support for Ukraine, have been historically targeted by Russian cyber operations and are already familiar with the increased risk. Organizations with pre-conflict commercial relationships may fall under new scrutiny by nationally-aligned hacktivist groups such as Killnet. This support for one side over the other—whether actual or perceived—may result in additional risk.” 

Osadchuk: “An important risk of continuing business as usual [in Russia] is that it may damage a company’s public image and test its declared values, since continuing to pay taxes within the aggressor country makes a private company a sponsor of its actions. Another risk for a private company is financial, since companies that leave a particular market lose profits, but this is incomparable to the human suffering and losses caused by the aggression. In the case of the Russian invasion, one of the ways to stop the war is to cut funding for and, thus, undermine the Russian war machine and support Ukraine.”

Schroeder: “Private companies have long provided goods and services to combatants outside of the information environment. The international legal framework restricting combatants to targeting ‘military objects’ provides normative protection, as objects are defined as those ‘whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage’ in a manner proportional to the military gain foreseen by the operation. This definition, however, is still subject to the realities of conflict, wherein combatants will make those decisions to their own best advantage. In the information environment, this question becomes more complicated, as cyber products and services often do not fall neatly within standard categories and private companies themselves own and operate the very infrastructure over and through which combatants engage. The United States and its allies, whether on a unilateral or supranational basis, should work to better define the boundaries of civilian ‘participation’ in war and conflict, as the very nature of the space means that their involvement will only increase.”

Wilde: “On one hand, it is important not to falsely mirror onto others the constraints of international legal and normative frameworks around armed conflict to which responsible states strive to adhere. Like Russia, some states show no scruples about violating these frameworks in letter or spirit, and seem unlikely to be inhibited by claims of neutrality from companies offering support to victimized states. That said, clarity about where goods and services might be used for civilian versus military objectives is advisable to avoid the thresholds of ‘direct participation’ in war outlined in International Humanitarian Law.”

#3 What useful lessons should the United States and its allies take away from the successes and/or failures of cyber and information operations in Ukraine?

Aleksejeva: “As for cyber operations, so far we have not seen Russia achieve successful disruptions of Ukraine or its Western allies. Yes, we are seeing constant attacks, but cyber defense is much more developed on both sides than before 2014. As for information operations, the United States and its allies should become less self-centered and have a clear view of Russia’s influence activities in the so-called Global South, where many of the narratives are rooted in anti-Western sentiment.”

Cutler: “Prior to the start of the conflict, it was strongly believed that a cyber operation, specifically against energy and communication sectors, would act as a precursor to kinetic action. While a WannaCry or NotPetya-scale attack did not occur, the AcidRain attack against the Viasat satellite communication network and other attacks targeting Ukraine’s energy sector highlight that cyber operations of varying effectiveness will play a role in the lead up to a military conflict.” 

Osadchuk: “First, cyber operations coordinate with other attack types, like kinetic operations on the ground, disinformation, and influence operations. Therefore, cyberattacks might be a precursor of an upcoming missile strike, information operation, or any other action in the physical and informational dimensions, so allies could use cyber to model and analyze multi-domain operations. Finally, preparation for and resilience to information and cyber operations are vital in mitigating the consequences of such attacks; thus, updating defense doctrines and improving cyber infrastructure and social resilience are necessary.” 

Schroeder: “Expectations for operations in this environment have exposed clear fractures in the ways that different communities define success in a wartime operation. Specifically, there is a tendency to equate success with direct or kinetic battlefield impact. One of the biggest lessons that has been both a success and a failure throughout this war is the role that this environment can play. Those at war, from ancient to modern times, have leveraged every asset at their disposal and chosen the tool they see as the best fit for each challenge that arises—cyber is no different. While there is ongoing debate surrounding this question, if cyber operations have not been effective on a battlefield, that does not mean that cyber is ineffective, just that expectations were misplaced. Understanding the myriad roles that cyber can and does play in defense, national security, and conflict is key to creating an effective cross-domain force.”

Wilde: “Foremost is the need to check the assumption that these operations can have decisive utility, particularly in a kinetic wartime context. Moscow placed great faith in its ability to convert widespread digital and societal disruption into geopolitical advantage, only to find years of effort backfiring catastrophically. In other contexts, better trained and resourced militaries might be able to blend cyber and information operations into combined arms campaigns more effectively to achieve discrete objectives. However, it is worth reevaluating the degree to which we assume offensive cyber and information operations can reliably be counted on to play pivotal roles in hot war.”

More from the Cyber Statecraft Initiative:

#4 How do comparisons to other domains of conflict help and/or hurt understanding of conflict in the information domain?

Aleksejeva: “Unlike conventional warfare, information warfare uses information and psychological operations during peacetime as well. By masking behind sock puppet or anonymous social media accounts, information influence operations might be perceived as legitimate internal issues that polarize society. A country might be unaware that it is under attack. At the same time, as the goal of conventional warfare is to break an adversary’s defense line, information warfare fights societal resilience by breaking its unity. ‘Divide and rule’ is one of the basic information warfare strategies.”

Cutler: “When looking at the role of cyber in this conflict, I think it is critical to examine the history of Hacktivist movements. This can be incredibly useful for understanding the influences and capabilities of groups like the IT_Army and Killnet.” 

Osadchuk: “The information domain sometimes reflects the kinetic events on the ground, so comparing these two is helpful and could serve as a behavior predictor. For instance, when the Armed Forces of Ukraine liberate new territories, they also expose war crimes, civilian casualties, and damages inflicted by occupation forces. In reaction to these revelations, the Kremlin propaganda machine usually launches multiple campaigns to distance themselves, blame the victim, or even denounce allegations as staged to muddy the waters for certain observers.” 

Schroeder: “It is often tricky to carry comparisons over different environments and context, but the practice persists because, well, that is just what people do—look for patterns. The ability to carry over patterns and lessons is essential, especially in new environments and with the constant developments of new tools and technologies. Where these comparisons cause problems is when they are used not as a starting point, but as a predetermined answer.” 

Wilde: “It is problematic, in my view, to consider information a warfighting ‘domain,’ particularly because its physical and metaphorical boundaries are endlessly vague and evolving—certainly relative to air, land, sea, and space. The complexities and contingencies in the information environment are infinitely more than those in the latter domains. However talented we may be at collecting and analyzing millions of relevant datapoints with advanced technology, these capabilities may lend us a false sense of our ability to control or subvert the information environment during wartime—from hearts and minds to bits and bytes.”

#5 What conditions might make the current conflict exceptional and not generalizable?

Aleksejeva: “This war is neither ideological nor a war for territories and resources. Russia does not have any ideology that backs up its invasion of Ukraine. It also has a hard time maintaining control of its occupied territories. Instead, Russia has many disinformation-based narratives or stories that justify the invasion to as many Russian citizens as possible, including Kremlin officials. Narratives are general and diverse enough that everyone can find an explanation of the current invasion—be it the alleged rebirth of Nazism in Ukraine, the fight against US hegemony, or the alleged historical right to bring Ukraine back to Russia’s sphere of influence. Though local, the war has global impact and makes countries around the world pick sides. Online and social media platforms, machine translation tools, and big data products provide a great opportunity to bombard any internet user in any part of the world with pro-Russia messaging, often tailored to echo historical, racial, and economic resentments, especially those rooted in a colonial past.”

Cutler: “During the Gulf War, CNN and other cable news networks were able to provide live coverage of military action as it was unfolding. Now, real-time information from conflict areas is more broadly accessible. Telegram and social media have directly shaped the information and narratives from the conflict zone.” 

Osadchuk: “The main difference is the enormous amount of war content, ranging from professional pictures and amateur videos after missile strikes to drone footage of artillery salvos and bodycam footage of fighting in the frontline trenches—all making this conflict the most documented. Second, this war demonstrates the need for drones, satellite imagery, and open-source intelligence for successful operations, which distances it from previous conflicts and wars. Finally, it is exceptional due to the participation of Ukrainian civil society in developing applications, like the one alerting people about incoming shelling or helping find shelter; launching crowdfunding campaigns for vehicles, medical equipment, and even satellite image services; and debunking Russian disinformation on social media.” 

Schroeder: “One of the key lessons we can take from this war is the centrality of the global private sector to conflict in and through the information environment. From expedited construction of cloud infrastructure for the Ukrainian government to Ukrainian telecommunications companies defending and restoring services along the front lines to distributed satellite devices, providing flexible connectivity to civilians and soldiers alike, private companies have undoubtedly played an important role in shaping both the capabilities of the Ukrainian state and the information battlespace itself. While we do not entirely understand the incentives that drove these actions, an undeniable motivation that will be difficult to replicate in other contexts is the combination of Russian outright aggression and comparative economic weakness. Companies and their directors felt motivated to act due to the first and, likely, free to act due to the second. Private sector centrality is unlikely to diminish and, in future conflicts, it will be imperative for combatants to understand the opportunities and dependencies that exist in this space within their own unique context.” 

Wilde: “My sense is that post-war transatlantic dynamics—from shared norms to politico-military ties—lent significant tailwinds to marshal resources and support to Ukraine (though not as quickly or amply from some quarters as I had hoped). The shared memory of the fight for self-determination in Central and Eastern Europe in the late 1980s to early 1990s still has deep resonance among the publics and capitals of the West. These are unique dynamics, and the degree to which they could be replicated in other theaters of potential conflict is a pretty open question.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Conflict in Ukraine’s information environment appeared first on Atlantic Council.

]]>
Building a shared lexicon for the National Cybersecurity Strategy https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/building-a-shared-lexicon-for-the-national-cybersecurity-strategy/ Thu, 16 Mar 2023 12:00:00 +0000 https://www.atlanticcouncil.org/?p=621766 The 2023 National Cybersecurity Strategy, released on March 3, represents the ambitions of the Biden Administration to chart a course within and through the cyber domain, staking out a critical set of questions and themes. These ambitions are reflected within the strategy’s pillars and titled sections, but also key words and phrases scattered throughout the […]

The post Building a shared lexicon for the National Cybersecurity Strategy appeared first on Atlantic Council.

]]>

The 2023 National Cybersecurity Strategy, released on March 3, represents the ambitions of the Biden Administration to chart a course within and through the cyber domain, staking out a critical set of questions and themes. These ambitions are reflected within the strategy’s pillars and titled sections, but also key words and phrases scattered throughout the document. As we and others have said, the success of this strategy will hinge largely on the practical implementation of its boldest ideas. The details of that implementation will depend on how the administration chooses to interpret or define many of the key terms found within the strategy.

To begin the creation of a shared lexicon to interpret these terms and the policy questions and implications that flow from each, this series identifies seven terms used throughout the strategy that represent pivotal ideas and priorities of this administration: “best-positioned actors,” “realign incentives,” “shift liability,” “build in security,” “modernize federal systems,” “privacy,” and “norms of responsible state behavior.” This article digs into the meaning behind these phrases and how they serve as waypoints in debates over the future of cybersecurity policy.

Strategy terms

“Best-positioned actors”

Throughout the National Cybersecurity Strategy, there are various iterations of the idea of “best-positioned actors” to describe and delineate the private actors expected, or at the least encouraged, to play a larger role in building and reinforcing a secure cyberspace. The repetition of this term represents a larger trend within the 2023 NCS: the central role of the private sector. The prior strategy certainly represented a step in this process, but its successor signals a more fundamental move toward addressing the significant role of private sector players in shaping cybersecurity.

According to this strategy, a keystone in this effort will be increased responsibility by the “best positioned actors” within the private sector. But what does this term mean? At its most basic level, a best-positioned company is one whose product(s) or service(s) represents a considerable portion of a key structural point identified within a pillar of US cyber strategy and, therefore, a company whose manner of operation will be decisive in determining cybersecurity outcomes for a large number of users. The strategy explains that “protecting data and assuring the reliability of critical systems must be the responsibility of the owners and operators of the systems that hold our data and make our society function, as well as of the technology providers that build and service these systems.” Though specific sectors or companies are not tied to this category in the strategy, the definition appears to include primarily the owners and operators of both traditional physical infrastructure, especially critical infrastructure, as well as digital infrastructure, like cloud computing services. It may also point to entities who operate as crucial intermediary nodes in the software stack or software development life cycle, whose privileged positions allow the implementation of security protections for downstream resources at scale, such as operating systems, app stores, browsers, and code-hosting platforms.

The strategy appears to distinguish these best-positioned actors as a subset within the category of actors whose action, or inaction, has the greatest potential consequences. The strategy further stipulates that a company’s resourcing partially determines its designation as best-positioned. This distinction reflects an issue throughout the digital ecosystem, where an entity responsible for a critical product or service might have insufficient resources to act as a best-positioned actor, falling under what Cyber Statecraft Initiative Senior Fellow Wendy Nather terms the “cyber poverty line.” Such entities may not be “best positioned,” but they are important to security and resilience if the products or services they are responsible for are depended on by a significant proportion of technology users or would, if compromised, create a large blast radius of effect because they play a connecting role within a large number of other products and services.

The strategy’s emphasis on shifting responsibility is crucial to reducing the impact of security failures on users and serves to support many of the other concepts around “build in security,” “privacy,” and “realign incentives.” As a result, who that responsibility shifts to, the “best-positioned actors,” will have material influence on the outcomes of these policies. Establishing a common understanding of which companies fall within that category is imperative.

“Realign incentives”

Another term found throughout the National Cybersecurity Strategy—particularly within Pillar Three (Shape Market Forces)—is various iterations of “incentivizing responsibility.” It describes how the US government can shape the security ecosystem by motivating actors—chiefly the private sector owners, producers, and operators of critical technologies—toward a sense of heightened responsibility in securing US digital infrastructure. The previous strategy discussed incentives at a very high level—how to incentivize investments, innovation, and so on—but lacked a coherent sense of objective. The 2023 strategy moves closer to stating a goal but still falls short of actualizing a plan to achieve it. The repetition of this term represents a larger trend within the 2023 strategy: the desire to shift the onus of security failures away from users and onto the private sector. This term is a major driver for achieving the strategic objectives of the National Cybersecurity Strategy.

These incentives are primarily divided into four categories: investment, procurement, regulation, and liability (discussed in “Shift liability”). Investment sits at the heart of Pillar Four (Invest in a Resilient Future), but this approach is common across the pillars. Using investment as an incentive includes creating or building upon existing funds and grant programs for critical and innovative technologies, especially those secure- and resilient-by-design (discussed further in “Build in security”). Bridging investment and regulation is the strategy’s emphasis on using federal purchasing power to create positive incentives within the market to adopt stricter cybersecurity design standards.

More prominent throughout the strategy, however, is a regulatory approach that seeks to balance increased resilience with the realities of the free market. This inclusion is important—resilience investment is not maximally efficient. By design a resilient system may have multiple channels for the same information or control. Building resilience into a system may also involve costly engineering and research programs without adding new (and marketable) functionality, and they might even raise the cost of goods sold. Public policy can incentivize these less efficient investments and behaviors, but it may also need to mandate them, especially where markets are most dysfunctional or risk most concentrates. Regulatory tools are intrinsic to a properly functioning market and suffer abuse through neglect or overuse in equal measure.

The strategy hints that making security and resilience the preferred market choice requires making inadequate security approaches more difficult and costly. The strategy recognizes the critical role that private companies play in creating a secure and resilient cyber ecosystem—they are acknowledged even more frequently than allied and partner states. The various approaches to incentivizing responsibility illustrate the careful balancing act that a more robust public-private relationship will require, creating both opportunity and consequence for the private sector.

The strategy tasks the federal government with creating regulation responsibly, with “modern and nimble regulatory frameworks for cybersecurity tailored for each sector’s risk profile, harmonized to reduce duplication, complementary to public-private collaboration, and cognizant of the cost of implementation.” This specific and flexible approach is progress in the government’s approach to regulation, yet it raises questions as to the capability of the US government to create and regularly update a suite of regulatory statutes with sufficient agility. Finding specific and actionable ways to realign incentives and responsibilities will be essential to achieving the goals set by the 2023 strategy. However, to achieve this goal, it is essential to better identify both what these regulations seek to achieve and how to best design them to fit, bypassing the debate about Regulation: Friend or Foe.

“Shift liability”

The 2023 National Cybersecurity Strategy has an entire subsection dedicated to software liability—one of the strategy’s most explicit endorsements of a specific, new policy mechanism to shift responsibility and realign incentives for better cybersecurity. Creating a clear framework for software products and service liability would incentivize vendors to uphold baseline standards for secure software development and production, to protect themselves from legal action in response to damages incurred by issues with their product.

In the US legal landscape, software, by itself, is rarely considered a product (in contrast to physical goods with embedded software, such as smart TVs or smart cars). This limits the ability of a user to bring claims under traditional product-liability law against the manufacturer in the event of a security flaw or other problem with the software. In addition, many software vendors disclaim liability by contract—when a consumer clicks “I Agree” on a software license to install a program, they often agree to a contract that forfeits their right to sue the maker. Indeed, the strategy explicitly calls out this tactic.

Taken in tandem, these facts mean that software manufacturers often can insulate themselves from legal liability caused by failures of their products, removing a strong incentive that has motivated physical-goods manufacturers to put their products through rigorous safety testing. The Federal Trade Commission (FTC) retains broad enforcement powers against unfair and deceptive practices, which it has used to bring judgements against businesses for abysmal security failures, and certain authorities to regulate security practices in specific software-reliant sectors like the financial-services industry. However, a broader liability framework specific to software is conspicuously absent.

The strategy, recognizing that even the best-intentioned software manufacturers cannot anticipate all potential security vulnerabilities, leads with a safe harbor-based approach, in which software manufacturers are insulated from security-related product liability claims if they have adhered to a set of baseline secure development practices. This is a negligence liability standard—where manufacturers are held accountable only if they fail to meet an accepted baseline of adequate care—in contrast to a strict liability standard, in which manufacturers are liable for harms regardless of the precautions they took. The National Cybersecurity Strategy also makes explicit mention of the need to protect open-source developers from any form of liability for participating in open-source development, given that open-source software is more akin to free speech than to the offering of a final product. This recognition is both correct and important in light of the different paradigm within which open-source development operates and its incredibly common integration in most software products.

The strategy does not explicitly state whether such a standard should be enforced solely by an executive branch agency, such as the FTC, or whether the intent of the framework would be to allow individuals to directly sue software manufacturers whose products harmed them through a private right of action. The acknowledgement of the need to refine the software liability framework is a crucial step toward the strategy’s goals of realigning public-private incentives for security and resilience. The strategy is silent on whether existing federal authorities would be sufficient, through the FTC or even the Department of Justice’s Civil Cyber Fraud Initiative, or if a private right of action is still necessary (see here for more context on this distinction and liability as a cybersecurity policy issue). This could be a defining question, especially where it may involve congressional action to back up such a program versus merely sustain it.

“Build in security”

While discussing ways to shape market forces for improved security and resilience, the National Cybersecurity Strategy dedicates two sections of Pillar Three (Shape Market Forces) to adapt federal grants and other incentives to “build in security” throughout the cyber ecosystem. This is one of the more mature interpretations of the document’s focus on reshaping incentives and responsibilities to improve security. As far as individual technologies and products are concerned, vendor incentives to rush to market can leave security features as an afterthought or add on—worse, they can remove security considerations from design processes entirely. The implementation of secure-by-design technology is especially important in light of the interconnectedness of this space, as the integration of new technology alongside old systems can create points of weakness and transitive risk.

While much policymaking discussion considers how to punish or disincentivize poor practices, rewarding security incorporated at the outset of design is just as useful. Software that is built to be difficult to compromise (versus layered with post-facto security features) can be easier, and sometimes cheaper, to defend in daily use and offers vendors and users alike a more defensible product. These benefits are manifold when such standards are in place early in the development of an industry, as seen in the administration’s desire to implement a National Cyber-Informed Engineering Strategy for the new generation of clean energy infrastructure. The challenge will lie in whether the administration can define what it means to build in security (i.e., is it a set of specific practices, such as using memory-safe languages, or a set of process considerations that must be accounted for and documented?) with enough specificity to build policy incentive structures, such as regulation, around the concept.

The next logical step is to consider how to build in security not just for granular products but for systems writ large. The ever-increasing complexity of cloud infrastructure and other large-scale networked systems is an enormous strain on vendors and service providers, which have already gone to great lengths to engineer processes and software around navigating that complexity. Unchecked, those systems and their increasing importance will put users and government on their heels, forcing them to defend an extremely sophisticated and inherently insecure landscape.

Government is well-positioned to create incentives to help industry avoid race-to-the-bottom market pressures that lead towards insecurity and unmanaged complexity, and the strategy does well to tee up that priority even if it views the cyber landscape through a somewhat narrow product lens. Moving toward incentivizing secure design, architectural review processes, and buying down risk at the systems scale can convert “building in security” from an operational feature of federal funding to a strategic reshaping of the cyber landscape.

“Modernize federal systems”

Section Five within Pillar One (Defend Critical Infrastructure) of the National Cybersecurity Strategy focuses on modernizing what it terms the federal enterprise. The recognition of the federal civilian executive branch agencies (FCEB) as a singular enterprise from a security perspective is valuable and hints at broader themes for the Office of the National Cyber Director’s (ONCD) conception of modernization: streamlined points of contact, better coordinated security posturing and policymaking, and more evenly distributed and accessible resourcing and tooling among other gains.

At the most abstract level, modernization can be considered appropriately adjusting the federal enterprise to the challenges inherent in digital security: complexity, speed, and scale. Perhaps the most important contribution of the strategy here is the simple recognition that the federal government is outmatched—with infrastructure that has so far proven inadequate. The strategy’s approach to modernization commits to alleviating the government’s dependence on legacy systems that create too porous a foundation for US cybersecurity. Specific adaptations mentioned include the implementation of zero-trust architecture, a migration to cloud-based services, and progress toward “quantum-resistant cryptography-based environments.” Notably, “zero trust” remains a phrase of the moment after it did a starring turn in Executive Order 14028 and its use as a rhetorical catch-all for “modern” security tools and approaches has only increased.

The strategy directly appoints the Office of Management and Budget (OMB), in coordination with the Cybersecurity and Infrastructure Security Agency (CISA), as the lead for FCEB cybersecurity planning and the custodian of shared services for constituent agencies. Though direct implementation plans are not laid out within this document, the specific tasking of the OMB to lead this process, assuming that the office receives the necessary resources, does create accountability and measurability for the pillar.

Another key component of FCEB modernization is a parallel workforce modernization. Any and all plans to create a modern, resilient federal cyber environment will require fostering a talented, diverse cyber workforce. The ONCD is spearheading this effort, and work on a workforce-specific strategy is underway. The National Cybersecurity Strategy’s treatment of the cyber workforce provides a strong foundation for ONCD’s more detailed plan to address what is a significant problem for the US government. In that strategy, there is indeed opportunity to go further, not just to build the cyber workforce necessary for the problems of today, but to ensure that workforce development is conducted in parallel with government efforts to reshape its cyber environment into one that is more secure-by-design.

Modernization of federal systems is a gargantuan challenge, and one that will never be complete. To effectuate real change, modernization must become an ingrained and cyclical process. This process does not have to mean the pursuit of the most cutting-edge technology for wholesale implementation across the FCEB, but it must prioritize raising the baseline of security by targeting widespread dependencies and reducing risk for the most insecure and critical system components.

“Privacy”

One of the central themes of the inaugural National Cyber Director’s tenure was that cybersecurity must amount to more than creating an absence of threats. Securing the devices and services surrounding us should enable their use toward positive social, political, and economic ends. The security of data on these devices and running through these services is as much a question of protection against its appropriation and misuse by entities to whom it was entrusted as it is a question of preventing theft by malicious adversaries.

It is only a little surprising then, and very much welcome, to see the National Cybersecurity Strategy repeatedly highlight the importance of privacy as a key component of the United States’ cyber posture. Security and privacy are tightly intermeshed, as both a practical issue, where security features can function as guarantors of some privacy policies and protections, and as a policy issue—witness certain European Union (EU) member states’ agita over US surveillance and intelligence collection authorities as they impact the privacy of EU data and the perceived security of US-based cloud services. The inclusion of privacy is an overdue recognition of the fact that, if we succeed at preventing adversaries from stealing data from US networks, but then allow the same data to be freely bought and sold on the open web, we have gained little protection from espionage or targeting.

The recurring inclusion of privacy also marks an overdue move to collectively wield tools of both cybersecurity policy and corporate accountability in concert—taking the efforts of entities like the Federal Trade Commission (FTC), the Securities and Exchange Commission (SEC), and CISA together to drive change in private sector behavior. The strategy supports “legislative efforts to impose robust, clear limits on the ability to collect, use, transfer, and maintain personal data and provide strong protections for sensitive data like geolocation and health information,” but stops short of acknowledging that Congress’ ongoing failure to pass a comprehensive federal privacy law is harming US national cyber posture. Given that such a law would likely include mandatory minimum security standards for entities processing personal data, it would also provide new enforcement tools for the executive branch to penalize companies for poor security practices, going a long way toward creating incentives to fix some of the market failures identified by the administration throughout the strategy. The strategy also arrives as the intelligence community and Congress more publicly recognize the national security importance of data security and the risks posed by the widespread proliferation of surveillance tools.

Privacy has many definitions, but perhaps the most significant implied here is control over information and the right to exercise that control in the service of individual liberty. Strengthening users’ control over the data they produce, its use in digital technologies, and the integrity of those technologies against harm is a means of giving greater power back to users. These acknowledgments are fundamentally important—however, without going further, policy risks falling back into the broken “notice and choice” model of privacy, which has demonstrated its insufficiency in the proliferation of cookie banners under GDPR. The strategy would have gone further if it had acknowledged the need to preclude companies from collecting, processing, and reselling consumer data beyond the minimum required to deliver requested goods and services, which would more fundamentally limit the collection and propagation of Americans’ data.

The embrace of privacy as a key component of cyber posture is a large step, but the strategy still lacks concrete operational plans for implementing this vision. Hopefully, this is a sign of policy action still to come. Using this strategy as another important marker, policymakers should continue to address cybersecurity and privacy issues by bringing individual users back into the conversation and restoring a measure of ownership over their digital footprint along the way.

“Norms of responsible state behavior”

Within the 2023 National Cybersecurity Strategy, the drafters highlight the need for the United States and its likeminded allies and partners to work toward a free, fair, and open cyber domain aligned with US cyber norms and values. This concept, as a guiding principle for strategy, is not new, and indeed, was a central pillar of the 2018 strategy. The continued emphasis placed on norms and values-guided cyber strategy signals the ongoing importance of this conversation.

This strategy specifically calls out the Declaration of the Future of the Internet (DFI) as creating a foundation for “a common, democratic vision for an open, free, global, interoperable, reliable, and secure digital future.” The strategy also highlights the importance of international institutions and agreements in developing a framework and set of norms for this vision, including the United Nations (UN) Group of Governmental Experts and Open-Ended Working Group and the Budapest Convention on Cybercrime.

While there is agreement among the United States and allies on a set of cyber norms, these norms do not encompass all state behavior in cyberspace. Important differences in approach might impede the level of cooperation sought by the United States and its allies. One such tension, briefly mentioned in the strategy, is the question of data localization requirements. Pillar Five (Forge International Partnerships) discusses a series of goals surrounding international collaboration. These include counter-threat coalitions, partner capacity building, and supply chain security. This pillar also discusses many existing efforts toward enhancing international cooperation, yet lacks a clear, cohesive set of actions for moving the United States and the global cyber ecosystem toward an “open, free, global, interoperable, reliable, and secure Internet.” Without such a bridge, US allies and partners around the globe, especially those with immature or nonexistent relationships with the US government on cyber issues, might struggle to move toward the kind of cyber ecosystem the US government seeks to create.

As the US government builds on and operationalizes the strategy, the cyber norms and values used as its frame will require clear specification as more than just platitudes. The internet is not merely a topic of foreign policy, and there are opportunities throughout the document to better connect the discussions of shifting responsibility and of securing the internet, carrying these normative dimensions through to domestic implementation. It is simple to claim the pursuit of a free, fair, open, and secure cyber domain. However, if norms are truly to serve as the foundation of cyber strategy, the US government must do more than allude to them; it must lead the way in integrating specific ideals into its strategy, operations, and tactics.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Atkins on ICS Pulse Podcast https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-on-ics-pulse-podcast/ Fri, 10 Mar 2023 20:47:33 +0000 https://www.atlanticcouncil.org/?p=641195 On March 7, IPSI Nonresident Senior Fellow Victor Atkins was interviewed on the Industrial Cybersecurity Pulse Podcast on protecting critical infrastructure.

How will the US counter cyber threats? Our experts mark up the National Cybersecurity Strategy https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/the-us-national-cybersecurity-strategy-mark-up/ Sat, 04 Mar 2023 02:15:46 +0000 https://www.atlanticcouncil.org/?p=576755 On March 2, the White House released the 2023 US National Cybersecurity Strategy. Read along with CSI staff, fellows, and experts for commentary on the document and its relationship with larger cybersecurity policy issues.

On March 2, the Biden administration released its 2023 National Cybersecurity Strategy (NCS), an attempt to chart a course through the stormy waters of cyberspace, where the private sector, peer-competitor states, and nonstate actors navigate around and with each other in ways growing more complex—and dangerous—by the day. The Atlantic Council’s Cyber Statecraft Initiative (CSI), which is housed within the Digital Forensic Research Lab, gathered a group of experts from government and private-sector cyber backgrounds to dive into the document and offer context, commentary, and concerns to help decipher the strategy. Commenters include Maia Hamin, Trey Herr, Danielle Jablanski, Amelie Koran, Will Loomis, Jeff Moss, Katie Nickels, Marc Rogers, Stewart Scott, and Chris Wysopal.

CSI’s key takeaways from the strategy

  1. The strategy offers the much-needed beginnings of an ambitious shift in US cybersecurity policy, but it often falls short on implementation details and addressing past failures. The actionable outputs it does identify are fundamentally cautious.
  2. The strategy’s greatest virtues might be its focus on the pressing need to grapple with market incentives driving insecurity and to reallocate responsibility for security.
  3. By deferring rigorous treatment of allied and partner states’ role in its strategic vision for cybersecurity, the strategy gives short shrift to cybersecurity’s fundamentally global nature across all pillars.


A steady course in stormy seas: How to read the Biden administration’s new cyber strategy

Far before the age of steam, in the earliest days of sailing ships, captains knew to keep their vessels close to shore. Out in deeper water lay the vicissitudes of storms and faithless winds. Safety lay in the often more arduous, lengthier voyages hugging the coastline. Trading speed for the safety of their ship, crew, and cargo, captains steered carefully through the rocks on a conservative course to their destination. Sailors might tell tales of the exotic lands they planned to visit, but reliable routes close to shore kept them far from the perils of such journeys.

The 2023 National Cybersecurity Strategy (NCS), released March 2, reflects this cautious reality in the actual commitments it makes under a bolder vision to “rebalance the responsibility to defend cyberspace” and “realign incentives to favor long-term investments.” The strategy’s greatest contribution in years to come will likely hinge on its success in reframing cyber policy toward explicit discussion of the market, including the market’s failure to adequately distribute responsibility and risk and its reliance on weak incentives for good security practices. This will serve future policy efforts well and open discussions about material changes in the complexity and defensibility of digital technologies. A market lens for cyber policy also serves to integrate privacy into mainstream cybersecurity discussions and heartily embraces the notion that it is more than just defense against external compromise that determines the security of users and data. The strategy also charts out new horizons in its acknowledgement of the need to address software product liability while protecting open-source developers.

But in its discussion of a liability regime, and throughout, the strategy often hews close to safe harbors, steering away from the specific actions and policies that would implement the thornier parts of its vision. The document’s focus on the market, for instance, is weakened by the absence of efforts to trace the source of market failings. Missing too are efforts to further unpack barriers to federal information-technology modernization or the complex web of cyber authorities that have left security requirements fragmented and inconsistent across sectors.  The document also does little to integrate the international perspective across its discussion of threats or technologies, leaving the topic largely in a single, final pillar (the strategy is organized into five such pillars).

This was a singular opportunity to better address the global business environment in which technology vendors and consumers operate, and the geopolitical significance attached to questions of technology design and security. One need only look at the rapid expansion of activity in the Committee on Foreign Investment in the United States or the recent flurry of debate around TikTok to see the deeply international nature of the market in which the strategy seeks to drive “security and resilience.” The isolation of international issues ignores the reality of global US security partnerships and insufficiently addresses defense cooperation in cyberspace with both foreign states and private companies.

The Office of the National Cyber Director was handed a mammoth task in drafting this administration’s NCS. The young office could easily have foundered, beset by the interagency demons of the deep. Instead, it seems this captain and crew chose to remain in sight of land while charting in florid prose what could be in these grand adventures. The result is an important framework with some novel and useful policy activities, but also with questions that the cyber policy community must work to answer in the years to come. Important ideas, such as an affirmative statement about what the balance of responsibility for security should look like across the technology ecosystem, are here established in principle—flags left to be carried forward by others. In light of the fraught political winds the drafting team navigated, the result is commendable, but a frank recognition of how much work remains is also important. This text may serve to fire the imaginations of a generation of sailors yet to leave port, but we must ensure they do indeed set sail for distant shores and capture some of the promise presented here.

Authors and contributors

Maia Hamin is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She works on the Initiative’s Systems Security portfolio, which focuses on policy for open-source software, cloud, and other technologies with important systemic security effects.

Trey Herr is the director of the Atlantic Council’s Cyber Statecraft Initiative. His team works on cybersecurity and geopolitics including cloud computing, the security of the internet, supply chain policy, cyber effects on the battlefield, and growing a more capable cybersecurity policy workforce.

Danielle Jablanski is a nonresident fellow at the Cyber Statecraft Initiative and an operational technology (OT) cybersecurity strategist at Nozomi Networks, responsible for researching global cybersecurity topics and promoting OT and industrial control systems (ICS) cybersecurity awareness throughout the industry. Jablanski serves as a staff and advisory board member of the nonprofit organization Building Cyber Security, leading cyber-physical standards development, education, certifications, and labeling authority to advance physical security, safety, and privacy in the public and private sectors. Since January 2022, Jablanski has also served as the president of the North Texas Section of the International Society of Automation, organizing monthly member meetings, training, and community engagements.

Amelie Koran is a nonresident senior fellow at the Cyber Statecraft Initiative and the current director of external technology partnerships for Electronic Arts, Inc. Koran has a wide and varied background of nearly thirty years of professional experience in technology and leadership in the public and private sectors. During her career, she has supported work across various government agencies and programs including the US Department of the Interior, Treasury Department, and the Office of the Inspector General in the Department of Health and Human Services. In the private sector, she has held various roles including those at the Walt Disney Company, Splunk, Constellation Energy (now Exelon), Mandiant, and Xerox.  

Will Loomis is an associate director with the Cyber Statecraft Initiative. In this role, he manages a wide range of projects at the nexus of geopolitics and national security with cyberspace.

Jeff Moss is a nonresident senior fellow with the Cyber Statecraft Initiative. He is also the founder and creator of both the Black Hat Briefings and DEF CON, two of the most influential information security conferences in the world, attracting over ten thousand people from around the world to learn the latest in security technology from those researchers who create it. DEF CON just had its thirtieth anniversary.

Katie Nickels is the director of intelligence for Red Canary as well as a SANS certified instructor for FOR578: Cyber Threat Intelligence and a nonresident senior fellow for the Cyber Statecraft Initiative. She has worked on cyber threat intelligence (CTI), network defense, and incident response for over a decade for the US Department of Defense, MITRE, Raytheon, and ManTech.

Marc Rogers is currently CSO for Qnetsecurity. He formerly worked at Okta, Cloudflare, Lookout, and Vectra. Rogers is a well-known security researcher (Tesla Model S, TouchID, Google Glass), senior advisor to IST, a member of the Ransomware Taskforce, and co-founder of the CTI League.

Emma Schroeder is an associate director with the Cyber Statecraft Initiative. Her focus in this role is on developing statecraft and strategy for cyberspace that is useful for both policymakers and practitioners.

Stewart Scott is an associate director with the Cyber Statecraft Initiative. He works on the Initiative’s systems security portfolio, which focuses on software supply chain risk management and open source software security policy.

Chris Wysopal is the co-founder and CTO of Veracode, an application security technology provider for software developers. He was one of the original software vulnerability researchers in the 1990s. He has testified before Congress on the topic of government cybersecurity.

Makings of the Market: Seven perspectives on offensive cyber capability proliferation https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/makings-of-the-market-seven-perspectives-on-offensive-cyber-capability-proliferation/ Wed, 01 Mar 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=614128 The marketplace for offensive cyber capabilities continues to grow globally. Their proliferation poses an expanding set of risks to national security and human rights, yet these capabilities also have legitimate uses in state security and defense. To dive deeper on this topic, we asked seven experts to offer their perspectives.

The marketplace for offensive cyber capabilities (OCC)—the combination of tools; vulnerabilities; and skills, including technical, organizational, and individual capacities used to conduct offensive cyber operations—continues to grow globally. These capabilities, once developed primarily by a small handful of states, are now available for purchase from this international private market, both legal and illegal, to a widening array of both state and nonstate actors. These capabilities, and their proliferation, pose an expanding set of risks to national security and human rights around the globe.

However, these capabilities also have legitimate use in state security and defense—the boundaries of which are ill-defined. Many states have clear incentives to participate in this market and to acquire these capabilities, and more types of actors are able to find financial opportunity as this market grows. Regulation, transparency, and reshaping of this market are necessary to counter the threats this unbounded proliferation poses, and states, independently and in cooperation, have the impetus and the opportunity to do so.

To dive deeper on this topic, we asked seven experts to offer their perspectives on these threats and how policymakers can help counter them: 

Briefly, what are the principal equities/interests in the proliferation of cyber capabilities?

“There are five main players interested in the proliferation of cyber capabilities: capability vendors, governments, middlemen and resellers, large technology companies, and civil society organizations.  

Capability vendors (i.e., zero-day brokers, Access-as-a-Service firms, spyware vendors etc.) sell capabilities to governments, occasionally through middlemen or resellers (especially if they do not have pre-existing relationships with people in government technology acquisition programs). These capabilities usually involve abusing platforms and services offered by tech companies—like breaking into phones, exploiting chat platforms, or hosting malware on cloud services. Some of the operations using these capabilities target legitimate national security threats, but others will target civil society organizations, especially if the government has a wide definition of national security and little outside accountability. The privatization of this industry also means that governments who previously could not afford to build spying capabilities at home can now do so, cheaply.  

Because all players are operating in a space full of secrecy and information asymmetry, each part of the system can and will be abused. Some capability vendors sell to governments they shouldn’t sell to, some middlemen will repackage and resell vulnerabilities they’ve already sold to others, and some governments will abuse these tools to target vulnerable populations or engage in “spyware diplomacy”—allowing their domestic spyware companies to sell to a foreign government in order to curry diplomatic favor. Western governments, large technology firms, and civil society have overlapping interests in this space: curbing the abuse caused by its inherent secrecy and thereby seeing fewer abuses of human rights, fewer countries engaging in cyber operations, and fewer actors abusing technology services.”

Winnona DeSombre-Bernsen, non-resident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

What benefits and risks do companies like Zerodium, along with similar middlemen, pose as their role grows larger in the proliferation of offensive cyber capabilities?

“Zerodium and other middlemen operate as market makers, buying and selling the same product in a marketplace. Market making is not inherently bad—the problem arises when they connect vendors to customers both internationally and domestically without providing any transparency to the people they’re buying from or selling to.  While these firms enable ways to capture supply from sources that wouldn’t be able to reach buyers directly or would be averse to a direct relationship, they also result in a murky supply chain. There is a lack of understanding around vulnerability sourcing, who the talent is, and who else they’re selling things to. Because of that, governments are unable to drive the direction of the supply chain for future assurance.  

Zerodium is able to operate this way because government customers appreciate the lack of transparency: historically, independent exploits have been written by seedy individuals, and the less government has to interact with them, the better. However, this is no longer the case. Exploits are now available from reputable individuals and companies. If government customers continue to want this ambiguity, they will continue to enable brokers like Zerodium to operate outside of the best interests of the US market. Increased transparency from all parties will make sure offensive cyber capabilities end up in the proper hands.” 

Sophia D’Antoine, founder and managing partner, Margin Research

If there is a legitimate state interest in shaping the flow of offensive cyber capabilities to friendly states, how does this activity differ from conventional arms sales? Is the US government signaling differently in the two spaces?

“The differences between conventional arms and offensive cyber capabilities are immense. Deniable ambiguity muddies every step of the way in attempting to meaningfully curtail the sale of offensive cyber capabilities. First, offensive cyber capabilities are often multi-role by nature; they are tools of network breaching, surveillance, and potential attack depending on how they are used. Second, their footprint is substantially smaller than their physical counterparts, which makes interdiction—or threat thereof—challenging to impossible. Third, offensive cyber capabilities require relatively little except for high quality personnel to produce reliable outputs. While experts may be in relatively short supply, manufacturing and supply chains are much thinner and therefore harder to subject to scrutiny, transparency, and enforcement.  

Only a great deal of collaborative international intent and investment can even remotely make a dent in shaping the flow of offensive cyber capabilities. Efforts will need to include incentivizing the positive actors to continue participating responsibly, disincentivizing sales to less desirable users, creating a culture of due diligence on sale and use, exerting diplomatic pressure on “flexible” nations willing to host unscrupulous sellers or creating a pipeline of expat talent, and a stronger accounting of key human talent in this space and their doings. Considering the quantity of actors benefiting from the existing ambiguities, it is not clear to me that the motivation even exists to support a shift like that, let alone to invest in it strategically.”  
 

Dr. Daniel Moore, cyber warfare researcher, author of “Offensive Cyber Operations”

There has been a lot of focus on Israel and NSO Group, but there are plenty of other countries home to similar activities. What kind of effects might Israel changing the character of its regulation of these firms have on where similar companies choose to do business?

“Despite receiving most of the attention, Israel is far from the only nation with a bustling digital surveillance industry. Indeed, over the last decade, the Israeli government has implemented additional controls on the export of hacking tools, which have caused some local companies to consider moving abroad. The most common destination for this relocation effort so far has been Cyprus, but there is also some expansion in the Middle East and Asia – especially into the United Arab Emirates and Singapore.

As Western governments continue to move towards tighter controls on the sale and development of hacking tools, they will likely face internal pressures from their own defense and intelligence communities which may effectively temper rapid change. The sale of military-related products has long been seen as a key tool in the nation state diplomatic toolbox for building and maintaining relations between foreign partners. 

Assuming some level of significant regulatory progress in the future, however, I expect to see more spyware companies move into tax haven territories that offer greater corporate secrecy. This is already beginning to occur. While the shift so far is limited and only anecdotal, it may lead to a situation where these companies are harder to identify, track and regulate.”  

Christopher Bing, media fellow, Alperovitch Institute

Besides jurisdiction and the fact that many states want some of these companies to operate, what are policymakers’ biggest challenges in imposing penalties and positively shaping the behavior of companies across the marketplace for offensive cyber capabilities?

“The challenges are varied and speak to the core of transferring concepts from the physical to the digital world. The nature of the asset, i.e., data—which can be easily transmitted and transformed—makes transfers difficult to detect or trace across national boundaries. These traits coupled with the complex and global nature of the ecosystem comprised of varying cultures and legal jurisdictions create an intricate mix. Beyond these foundational aspects there is then: 

  • The strategic national advantage and agency that offensive cyber capabilities provide. 
  • Pace of policy response historically against dynamic, fast changing and modular ecosystems reliant on technical definitions at a trans-government level.  
  • An assumption that only companies and not individuals with no legal entity are capable of being material market or capability shapers and makers. 
  • Lack of transparency, insight, and monitorability of this global ecosystem when compared to physical equivalents such as small arms, chemical and radiological weapons etc. 
  • Lack of evidence that an ecosystem which in part has its roots in counterculture, creativity, and anti-authoritarianism can be sufficiently shaped and controlled globally to achieve policy aims. 
  • Ways in which software can be broken down into component parts distributed across many suppliers so as to not provide described functionality as written in legislation and yet reassembled elsewhere in the destination country to provide said functionality. 
  • Existence of alternative financial systems which are resilient to Western government-imposed sanctions in situations of non-compliance or disagreement. 
  • Existence of vast and growing amount of capability as open source which can be integrated to provide capability further lowering bar of entry. 

These examples highlight the complexities and competing forces in a market that is only now starting to be contested. Any one of these could be material in its own right, but when combined, they highlight the enormity and complexity of the challenge to policymakers, especially when we recognize this list isn’t comprehensive.

However, this does not mean we should not try and learn from previous lessons as we look to address the challenge.” 

Ollie Whitehouse, founder, BinaryFirefly

For governments and corporations, there is generally more public awareness of this proliferation and its impacts but so far that attention has translated to only limited action from both groups. What role should different kinds of companies play in raising awareness, shaping, and providing appropriate incentives or disincentives to this market for offensive cyber capabilities?

“Microsoft recognizes the urgency of the threat posed by cyber mercenaries and the proliferation of offensive cyber capabilities and believes that progress can only happen through strong multistakeholder partnerships. Therefore, we welcome the growing number of governments that are taking action. The charges brought in the United States against former US intelligence and military personnel accused of being cyber mercenaries are one such example. The European Parliament’s investigation of spyware use in Europe is another. These developments follow years of work by non-governmental organizations (NGOs), which tirelessly support and draw attention to the victims of cyber mercenaries—innocent citizens around the world.

Similarly, industry recognizes its own role in addressing this issue, but acknowledges that more needs to be done. The volume of abuse connected with this market is increasing exponentially and indeed, it seems likely that the current public revelations may only be the tip of the iceberg. Companies have a key role to play and should focus efforts around:  

  1. Taking steps to counter cyber mercenaries’ use of products and services to harm people 
  2. Identifying ways to actively counter the cyber mercenary market 
  3. Investing in cybersecurity awareness of customers, users and the general public 
  4. Protecting customers and users by maintaining the integrity and security of products and services and  
  5. Developing processes for handling valid legal requests for information.

Some transformative business practices include adhering to established corporate responsibility principles grounded in the protection of human rights and adopting policies that ensure private sector transparency.” 

Monica Ruiz, program manager, Digital Diplomacy, Microsoft

How do you expect the clients present in the market for offensive cyber capabilities to change over the next 3 years?

“The market for offensive cyber capabilities has already demonstrated its ability to grow to meet ever-expanding demand. The affordability of these capabilities, relative to the cost of building them domestically, gives governments previously unable to procure surveillance capabilities an avenue to do so. The PEGA committee inquiry particularly calls out governments like Hungary and Greece, which do not have large cyber operations capabilities but were able to purchase spyware for political suppression, among other uses.

Even in cases where governments have attempted to crack down on companies operating within their countries, like Israel, the talent pool shifts to other states like Cyprus, North Macedonia, and Turkey to circumvent regulation. Growth is thus driven by demand and not limited by any highly effective regulatory scheme. The future of real governance over this market is dependent on governments, technology companies, and civil society partners enacting scalable and transparent policies for both vendors and clients. Done right, the international community can still effectively shape this market to greatly reduce widespread human rights abuses and national security harms.”  

Jen Roberts, program assistant, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Tech innovation helps Ukraine even the odds against Russia’s military might https://www.atlanticcouncil.org/blogs/ukrainealert/tech-innovation-helps-ukraine-even-the-odds-against-russias-military-might/ Tue, 28 Feb 2023 22:50:23 +0000 https://www.atlanticcouncil.org/?p=618100 Over the past year, Ukrainians have demonstrated their ability to defeat Russia using a combination of raw courage and innovative military tech, writes Ukraine's Digital Transformation Minister Mykhailo Fedorov.

For more than a year, Ukraine has been fighting for its life against a military superpower that enjoys overwhelming advantages in terms of funding, weapons, and manpower. One of the few areas where Ukraine has managed to stay consistently ahead of Russia is in the use of innovative military technologies.

Today’s Ukraine is often described as a testing ground for new military technologies, but it is important to stress that Ukrainians are active participants in this process who are in many instances leading the way with new innovations. The scale of Russia’s invasion and the intensity of the fighting mean that concepts can often go from the drawing board to the battlefield in a matter of months or sometimes even days. Luckily, Ukraine has the tech talent and flexibility to make the most of these conditions.

With the war now entering its second year, it is clear that military tech offers the best solutions to the threats created by Russia’s invasion. After all, success in modern warfare depends primarily on data and technology, not on the number of 1960s tanks you can deploy or your willingness to use infantry as cannon fodder.

Russian preparations for the current full-scale invasion of Ukraine have been underway for much of the past two decades and have focused on traditional military thinking with an emphasis on armor, artillery, and air power. In contrast, the rapidly modernizing Ukrainian military has achieved a technological leap in less than twelve months. Since the invasion began, Ukraine has demonstrated a readiness to innovate that the more conservative Russian military simply cannot match.


Modern weapons supplied by Ukraine’s international partners have played a crucial role in the Ukrainian military’s battlefield victories during the first year of the war. Likewise, Western countries have also supported Ukraine with a range of tech solutions and assistance. At the same time, Ukrainians have repeatedly demonstrated their ability to develop and adapt new technologies suited to the specific circumstances of Russia’s ongoing invasion. Ukraine has used everything from drones and satellite imagery to artificial intelligence and situational awareness tools in order to inflict maximum damage on Russian forces while preserving the lives of Ukrainian service personnel and civilians.

Drones deserve special attention as the greatest game-changers of Russia’s war in Ukraine. Thanks to the widespread and skillful use of air reconnaissance drones, the Ukrainian military has been able to monitor vast frontline areas and coordinate artillery. Meanwhile, strike drones have made it possible to hit enemy positions directly.

The critical role of drones on the battlefield has helped fuel a wartime boom in domestic production. Over the past six months, the number of Ukrainian companies producing UAVs has increased more than fivefold. This expansion will continue. The full-scale Russian invasion of Ukraine is fast evolving into the world’s first war of robots. In order to win, Ukraine needs large quantities of drones in every conceivable category.

This helps to explain the thinking behind the decision to launch the Army of Drones initiative. This joint project within the framework of the UNITED24 fundraising platform involves the General Staff of the Ukrainian Armed Forces, the State Special Communications Service, and the Ministry of Digital Transformation. Within the space of six months, the Army of Drones initiative resulted in the acquisition of over 1,700 drones worth tens of millions of dollars. This was possible thanks to donations from individuals and businesses in 76 countries.

Ukraine is currently developing its own new types of drones to meet the challenges of the Russian invasion. For example, Ukraine is producing new kinds of naval drone to help the country guard against frequent missile attacks launched from Russian warships. Ukrainian tech innovators are making significant progress in the development of maritime drones that cost hundreds of thousands of dollars and can potentially target and deter or disable warships costing many millions.

Ukrainian IT specialists are creating software products to enhance the wartime performance of the country’s armed forces. One good example is Delta, a comprehensive situational awareness system developed by the Innovation Center within Ukraine’s Defense Ministry. This tool could be best described as “Google maps for the military.” It provides real-time views of the battlefield in line with NATO standards by integrating data from a variety of sources including aerial reconnaissance, satellite images, and drone footage.

Such systems allow the Ukrainian military to become increasingly data-driven. This enables Ukrainian commanders to adapt rapidly to circumstances and change tactics as required. The system saves lives and ammunition while highlighting potential opportunities for Ukraine to exploit. This approach has already proven its effectiveness in the defense of Kyiv and during the successful counteroffensives to liberate Kharkiv Oblast and Kherson.

Ukraine has also launched a special chatbot that allows members of the public to report on the movements of enemy troops and military hardware. Integrated within the widely used Diia app, this tool has attracted over 460,000 Ukrainian users. The reports they provide have helped to destroy dozens of Russian military positions along with tanks and artillery.

In addition to developing its own military technologies, Ukraine has also proven extremely adept at taking existing tech solutions and adapting them to wartime conditions. One prominent example is Starlink, which has changed the course of the war and become part of Ukraine’s critical infrastructure. Satellite communication is one of Ukraine’s competitive advantages, providing connections on the frontlines and throughout liberated regions of the country while also functioning during blackouts. Since the start of the Russian invasion, Ukraine has received over 30,000 Starlink terminals.

Ukraine’s effective use of military technologies has led some observers to suggest that the country could become a “second Israel.” This is a flattering comparison, but in reality, Ukraine has arguably even greater potential. Within the next few years, Ukraine is on track to become a nation with top tier military tech solutions.

Crucial decisions setting Ukraine on this trajectory have already been made. In 2023, efforts will focus on the development of a military tech ecosystem with a vibrant startup sector alongside a strong research and development component. There are already clear indications of progress, such as the recent creation of strike drone battalions within the Ukrainian Armed Forces.

The war unleashed by Russia in February 2022 has now entered its second year. Putin had expected an easy victory. Instead, his faltering invasion has highlighted Ukraine’s incredible bravery while also showcasing the country’s technological sophistication. Ukrainians have demonstrated their ability to defeat one of the world’s mightiest armies using a combination of raw courage and modern innovation. This remarkable success offers lessons for military strategy and security policy that will be studied for decades to come.

Mykhailo Fedorov is Ukraine’s Deputy Prime Minister and Minister of Digital Transformation.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

A parallel terrain: Public-private defense of the Ukrainian information environment https://www.atlanticcouncil.org/in-depth-research-reports/report/a-parallel-terrain-public-private-defense-of-the-ukrainian-information-environment/ Mon, 27 Feb 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=615692 The report analyzes Russia’s continuous assaults against the Ukrainian information environment, and examines how Russian offensives and Ukrainian defense both move through this largely privately owned and operated environment. The report highlights key questions that must emerge around the growing role that private companies play in conflict.


Executive summary

In the year since the Russian invasion of Ukraine, the conventional assault and advances into Ukrainian territory have been paralleled by a simultaneous invasion of the Ukrainian information environment. This environment, composed of cyber infrastructure, both digital and physical, and the data, networks, and ideas that flow through and across it, is more than a domain through which the combatants engage or a set of tools by which combatants interact—it is a parallel territory that Russia is intent on severing from the global environment and claiming for itself.

Russian assaults on the Ukrainian information environment are conducted against, and through, largely privately owned infrastructure, and Ukrainian defense in this space is likewise bound up in cooperative efforts with those infrastructure owners and other technology companies providing aid and assistance. The role of private companies in this conflict seems likely to grow, along with the scale, complexity, and criticality of the information infrastructure they operate.

Examining and mitigating the risks related to the involvement of private technology companies in the war in Ukraine is crucial, and, looking forward, the United States government must also examine the same questions with regard to its own security and defense:

  1. What is the complete incentive structure behind a company’s decision to provide products or services to a state at war?
  2. How dependent are states on the privately held portions of the information environment, including infrastructure, tools, knowledge, data, skills, and more, for their own national security and defense?
  3. How can the public and private sectors work together better as partners to understand and prepare these areas of reliance during peace and across the continuum of conflict in a sustained, rather than ad hoc, nature?

Russia’s war against Ukraine is not over and similar aggressions are likely to occur in new contexts and with new actors in the future. By learning these lessons now and strengthening the government’s ability to work cooperatively with the private sector in and through the information space, the United States will be more effective and resilient against future threats.

Introduction

Russia’s invasion of Ukraine in 2022 held none of the illusory cover of its 2014 operation; instead of “little green men” unclaimed by Moscow, Putin built up his forces on Ukraine’s border for the entire international community to see. His ambitions were clear: To remove and replace the elected government of Ukraine with a figurehead who would pull the country back under Russia’s hold, whether through literal absorption of the state or by subsuming the entire Ukrainian population under Russia’s political and information control. In the year since the Russian invasion, Ukraine’s defense has held back the Russian war machine with far greater strength than many thought possible in the early months of 2022. President Zelenskyy, the Ukrainian government, and the Ukrainian people have repeatedly repelled Russian attempts to topple the state, buttressed in part by the outpouring of assistance from not just allied states, but also local and transnational private sector companies.

Amidst the largest conventional land war in Europe since the fall of the Third Reich, both Russia and Ukraine have directed considerable effort toward the conflict’s information environment, defined as the physical and digital infrastructure over and through which information moves, the tools used to interact with that information, and information itself. This is not only a domain through which combatants engage, but a parallel territory that the Kremlin seeks to contest and claim. Russian efforts in this realm, to destroy or replace Ukraine’s underpinning infrastructure and inhibit the accessibility and reach of infrastructure and tools within the environment, are countered by a Ukrainian defense that prioritizes openness and accessibility.

The information environment, and all the components therein, is not a state- or military-dominated environment; it is largely owned, operated, and populated by private organizations and individuals around the globe. The Ukrainian information environment, referring to Ukrainian infrastructure operators, service providers, and users, is linked to and part of a global environment of state and non-state actors where the infrastructure and the terrain are largely private. Russian operations within the Ukrainian information environment are conducted against, and through, this privately owned infrastructure, and the Ukrainian defense is likewise bound up in cooperative efforts with those infrastructure owners and other technology companies that are providing aid and assistance. These efforts have contributed materially, and in some cases uniquely, to Ukraine’s defense.

The centrality of this environment to the conduct of this war raises important questions about the degree to which states and societies are dependent on information infrastructure and functionalities owned and operated by private actors, and especially transnational private actors. Although private sector involvement in the war in Ukraine has generally been positive, the fact that the conduct of war and other responsibilities in the realm of statehood are reliant on private actors leads to new challenges for these companies, for the Ukrainian government, and for the United States and allies.

The United States government must improve its understanding of, and facility for, joint public-private action to contest over and through the information environment. The recommendations in this report are intended to facilitate the ability of US technology companies to send necessary aid to Ukraine, ensure that the US government has a complete picture of US private-sector involvement in the war in Ukraine, and contribute more effectively to the resilience of the Ukrainian information environment. First, the US government should issue a directive providing assurance and clarification as to the legality of private sector cyber, information, capacity building, and technical aid to Ukraine. Second, a task force pulling from agencies and offices across government should coordinate to track past, current, and future aid from the private sector in these areas to create a better map of US collaboration with Ukraine across the public and private sectors. Third, the US government should increase its facilitation of private technology aid by providing logistical and financial support.

These recommendations, focused on Ukraine’s defense, are born of and provoke larger questions that will only become more important to tackle. The information environment and attempts to control it have long been a facet of conflict, but the centrality of privately owned and operated technology—and the primacy of some private sector security capabilities in relation to all but a handful of states—pose increasingly novel challenges to the United States and allied policymaking communities. Especially in future conflicts, the risks associated with private sector action in defense of, or directly against, a combatant could be significantly greater and multifaceted, rendering existing cooperative models insufficient.

The Russian information offensive

The Russian Federation Ministry of Foreign Affairs defines information space—of which cyberspace is a part—as “the sphere of activity connected with the formation, creation, conversion, transfer, use, and storage of information and which has an effect on individual and social consciousness, the information infrastructure, and information itself.”1 Isolating the Ukrainian information space is key to both the short- and long-term plans of the Russian government. In the short term, the Kremlin pursues efforts to control both the flow and content of communications across the occupied areas.2 In the longer term, occupation of the information environment represents an integral step in Russian plans to occupy and claim control over the Ukrainian population.

In distinct opposition to the global nature of the information environment, over the past decade or so, the Kremlin has produced successive legislation “to impose ‘sovereignty’ over the infrastructure, content, and data traversing Russia’s ‘information space,’” creating a sectioned-off portion of the internet now known as RuNet.3 Within this space, the Russian government has greater control over what information Russian citizens see and a greater ability to monitor what Russian citizens do online.4 This exclusionary interpretation is an exercise in regime security against what the Kremlin perceives as constant Western information warfare against it.5 As Gavin Wilde, senior fellow with the Carnegie Endowment for International Peace, writes, the Russian government views the information environment “as an ecosystem to be decisively dominated.”6

To the Kremlin, domination of the information environment in Ukraine is an essential step toward pulling the nation into its fold and under its control. Just as Putin views information domination as critical to his regime’s exercise of power within Russia, in Ukraine, Russian forces systematically conduct offensives against the Ukrainian information environment in an attempt to create a similar model of influence and control that would further enable physical domination. This strategy is evident across the Kremlin’s efforts to weaken the Ukrainian state for the last decade at least. In the 2014 and 2022 invasions, occupied, annexed, and newly “independent” regions of Ukraine were variously cut off from the wider information space and pulled into the restricted Russian information space.  

The Crimean precedent – 2014 

The Russian invasion of Ukraine did not begin in 2022, but in 2014. Examining this earlier Russian incursion illustrates the pattern of Russian offensive behavior in and through the information environment going back nearly a decade—a combination of physical, cyber, financial, and informational maneuvers that largely target or move through private information infrastructure. In 2014, although obfuscated behind a carefully constructed veil of legitimacy, Russian forces specifically targeted Ukrainian information infrastructure to separate the Crimean population from the Ukrainian information environment, and thereby the global information environment, and filled that vacuum with Russian infrastructure and information. 

The Russian invasion of eastern Ukraine in 2014 was a direct response to the year-long Euromaidan Revolution, which took place across Ukraine in protest of then-President Viktor Yanukovych’s decision to spurn closer relations with the European Union and ignore growing calls to counter Russian influence and corruption within the Ukrainian government. These protests were organized, mobilized, and sustained partially through coordination, information exchange, and message amplification over social media sites like Facebook, Twitter, YouTube, and Ustream—as well as traditional media.7 In February 2014, after Yanukovych fled to Russia, the Ukrainian parliament established a new acting government and announced that elections for a new president would be held in May. Tensions immediately heightened, as Russian forces began operating in Crimea with the approval of the Federal Assembly of Russia at the request of “President” Yanukovych, although Putin denied that they were anything other than “local self-defense forces.”8 On March 21, Putin signed the annexation of Crimea.9

During the February 2014 invasion of Crimea, the seizure and co-option of Ukrainian physical information infrastructure was a priority. Reportedly, among the first targets of Russian special forces was the Simferopol Internet Exchange Point (IXP), a network facility that enables internet traffic exchange.10 Ukraine’s state-owned telecommunications company Ukrtelecom reported that armed men seized its offices in Crimea and tampered with fiber-optic internet and telephone cables.11 Following the raid, the company lost the “technical capacity to provide connection between the peninsula and the rest of Ukraine and probably across the peninsula, too.”12 Around the same time, the head of the Security Service of Ukraine (SBU), Valentyn Nalivaichenko, reported that the mobile phones of Ukrainian parliament members, including his own, were blocked from connecting through Ukrtelecom networks in Crimea.13

Over the next three years, and through the "progressive centralization of routing paths and monopolization of Internet Service market in Crimea … the topology of Crimean networks has evolved to a singular state where paths bound to the peninsula converge to two ISPs (Rostelecom and Fiord)," owned and operated by Russia.14 Russian forces manipulated the Border Gateway Protocol (BGP)—the system through which networks exchange routing information and which determines the paths user traffic takes from ISPs to the wider internet—modifying routes to force Crimean internet traffic through Russian systems, "drawing a kind of 'digital frontline' consistent with the military one."15 Residents of Crimea found their choices increasingly limited until their internet service could route only through Russia rather than Ukraine, subjecting them to the same censorship and internet controls imposed within Russia. The Russian Federal Security Service (FSB) monitored communications from residents of Crimea, both within the peninsula and with people in Ukraine and beyond.16 Collaboration between the FSB and ISPs operating in Crimea through Russian servers appears to be a crucial piece of this wider monitoring effort. This claim was partially confirmed by a 2018 Russian decree that forbade internet providers from publicly sharing any information regarding their cooperation with "the authorized state bodies carrying out search and investigative activities to ensure the security of the Russian Federation."17
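Rerouting of this kind leaves traces in public routing data, which is how researchers reconstructed the convergence of Crimean paths onto Russian transit networks. The sketch below is a minimal illustration of the underlying idea, not the researchers' actual methodology: given observed BGP AS paths for a set of prefixes, flag those whose paths transit a given set of upstream networks. All prefixes and AS numbers are invented placeholders.

```python
# Minimal sketch: flag prefixes whose observed BGP AS paths transit specific upstream
# networks. Prefixes and AS numbers below are illustrative placeholders, not real routing data.

# Hypothetical AS paths as they might be collected from a public route collector,
# keyed by announced prefix; each path lists autonomous systems from collector to origin.
OBSERVED_PATHS = {
    "203.0.113.0/24": [64500, 64510, 64520],
    "198.51.100.0/24": [64500, 64777, 64521],
}

# Placeholder set of transit ASNs of interest (e.g., particular upstream providers).
TRANSIT_OF_INTEREST = {64777}

def reroutes(paths: dict[str, list[int]], transit: set[int]) -> dict[str, bool]:
    """Return, per prefix, whether any AS in its observed path belongs to the transit set."""
    return {prefix: any(asn in transit for asn in path) for prefix, path in paths.items()}

if __name__ == "__main__":
    for prefix, flagged in reroutes(OBSERVED_PATHS, TRANSIT_OF_INTEREST).items():
        print(f"{prefix}: {'routes via transit of interest' if flagged else 'no match'}")
```

Longitudinal studies such as the one cited above combine years of such routing snapshots with traceroute measurements; this toy check only conveys the basic test being applied.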

From March to June 2014, Russian state-owned telecom company Rostelecom built the Kerch Strait cable, measuring 46 kilometers (about 28.5 miles) and costing somewhere between $11 million and $25 million, to connect the Crimean internet with the Russian RuNet.18 Within a short period, Rostelecom, operating through a local agent in Crimea called Miranda Media, became the main transit network for several Crimean internet service providers (ISPs), including KCT, ACS-Group, CrimeaCom, and CRELCOM.19 The transition of customers from the Ukrainian company Datagroup to Russian ISPs was slower, but the number of Datagroup customers in Crimea nonetheless decreased sharply throughout 2014. According to one ISP interviewed by Romain Fontugne, Ksenia Ermoshina, and Emile Aben, "the Kerch Strait cable was used first of all for voice communication … The traffic capacity of this cable was rather weak for commercial communications."20 But by the end of 2017, remnant usage of Ukrainian ISPs had virtually disappeared, following the completion of a second, higher-capacity cable through the Kerch Strait and a series of restrictions placed on Russian social media platforms, news outlets, and a major search engine by Ukrainian President Petro Poroshenko.21 The combination of the new restrictions and the improved service of Russian ISPs encouraged more Crimeans to move away from Ukrainian ISPs. 

Russia’s efforts to control the information environment within Crimea, and the Russian government’s ability to monitor communications and restrict access to non-Russian approved servers, severely curtailed freedom of expression and belief—earning the region zero out of four in this category from Freedom House.22 Through physical, and formerly private, information infrastructure, Russia was able to largely take control of the information environment within Crimea. 

A parallel occupation – 2022 

Digital information infrastructure 

Just as in 2014, one of the first priorities of invading Russian forces in 2022 was the assault on key Ukrainian information infrastructure, including digital infrastructure. Before, during, and following the invasion, Russian and Russian-aligned forces targeted Ukrainian digital infrastructure through cyber operations that ranged in type, target, and sophistication. Through some combination of Ukrainian preparedness, partner intervention, and Russian planning shortfalls, among other factors, large-scale cyber operations disrupting Ukrainian critical infrastructure—such as the BlackEnergy attack on the power grid in 2015 and NotPetya in 2017—did not materialize.23 This could be because such cyber operations require significant time and resources, and similar ends can be achieved more cheaply through direct, physical means. Russian cyber operators, however, have not been idle.  

Preceding the physical invasion, there was a spate of activity attributed to both Russian and Russian-aligned organizations targeting a combination of state and private organizations.24 From January 13 to 14, for example, hackers briefly took control of seventy Ukrainian government websites, including those of the Ministries of Defense and Foreign Affairs, adding threatening messages to the top of these official sites.25 The following day, January 15, Microsoft's Threat Intelligence Center reported the discovery of wiper malware, disguised as ransomware, in dozens of Ukrainian government systems, including agencies that "provide critical executive branch or emergency response function" and an information technology firm that services those agencies.26 A month later, on February 15, Russian hackers targeted several websites with distributed denial of service (DDoS) attacks, forcing Ukrainian defense ministry and armed forces websites, as well as those of PrivatBank and Oschadbank, offline.27 Around the same time, according to Microsoft's special report on Ukraine, "likely" Russian actors were discovered in the networks of unidentified critical infrastructure in Odessa and Sumy.28 The day before the invasion, cybersecurity companies ESET and Symantec reported that a new destructive wiper was spreading across Ukrainian, Latvian, and Lithuanian networks, as a second round of DDoS attacks again took down a spate of government and financial institution websites.29 This activity centered on information—defacements sending a clear threat to the Ukrainian government and population, DDoS attacks impairing accurate communication, and wiper malware degrading Ukrainian data—and on gaining access to Ukrainian data for Russia. Although many of these operations targeted Ukrainian government networks, the attacks moved through or against privately operated infrastructure and, notably, the first public notification and detailing of several of these operations came from transnational technology companies.  

After February 24, Russian cyber activity continued, and its targets included a number of private information infrastructure operators. A March hack of Ukrtelecom—Ukraine's largest landline operator, which also provides internet and mobile services to civilians and to the Ukrainian government and military—caused the company's network to collapse to just 13 percent capacity, the most severe disruption in service the firm had recorded since the invasion began.30 Another such operation targeted Triolan—a Ukrainian telecommunications provider—on February 24, in tandem with the physical offensive, and a second time on March 9. These incursions on the Triolan network took down key nodes and caused widespread service outages. Following the March 9 attack, the company was able to restore service, but its efforts were complicated by the need to physically access equipment located in active conflict zones.31 These attacks against Ukraine-based information infrastructure companies caused service outages during and after the physical invasion, restricting communications among Ukrainians and impeding the population's access to current and truthful information. 

This unacceptable cyberattack is yet another example of Russia’s continued pattern of irresponsible behaviour in cyberspace, which also formed an integral part of its illegal and unjustified invasion of Ukraine.1

Council of the European Union

These types of operations, however, were not restricted to Ukraine-based information infrastructure. A significant opening salvo in Russia's invasion was a cyber operation directed against ViaSat, a private, US-based satellite internet company that provides services to users throughout the world, including the Ukrainian military.32 Instead of targeting the satellites in orbit, Russia targeted the modems in ViaSat's KA-SAT satellite broadband network that connected users with the internet.33 Specifically, Russia exploited a "misconfiguration in a VPN [virtual private network] appliance to gain remote access to the trusted management segment of the KA-SAT network."34 From there, the attackers were able to move laterally through the network to the segment used to manage and operate the broader system.35 They then "overwrote key data in flash memory on the modems," making it impossible for the modems to access the broader network.36 Overall, the effects of the hack were short-lived, with ViaSat reporting the restoration of connectivity within a few days after shipping approximately 30,000 new modems to affected customers.37

SentinelOne, a cybersecurity firm, identified the malware used to wipe the modems and routers of the information they needed to operate.38 The firm assessed "with medium-confidence" that AcidRain, the malware used in the attack, had "developmental similarities" with an older malware, VPNFilter, which the Federal Bureau of Investigation and the US Department of Justice have previously linked to the Russian government.39 The United States, United Kingdom, and European Union all subsequently attributed the ViaSat hack to Russian state-backed actors.40
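The destructive step in this chain was the overwrite of data in modem flash memory. A standard defensive counterpart to that technique—not ViaSat's actual recovery tooling, which in this case involved shipping replacement hardware—is verifying firmware images against known-good cryptographic digests before trusting a device. The sketch below illustrates that check; the file name and digest are hypothetical placeholders.

```python
# Minimal sketch of firmware-image verification against a known-good hash, the kind of
# integrity check that detects destructive overwrites like those described above.
# Generic illustration only; file names and digests are placeholders, not vendor data.
import hashlib
from pathlib import Path

# Hypothetical table of approved firmware builds and their SHA-256 digests.
KNOWN_GOOD = {
    "modem-fw-3.2.1.bin": "0" * 64,  # placeholder digest, not a real firmware hash
}

def firmware_is_intact(image_path: Path, expected_sha256: str) -> bool:
    """Hash the on-disk firmware image and compare it with the expected digest."""
    digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
    return digest == expected_sha256

if __name__ == "__main__":
    image = Path("modem-fw-3.2.1.bin")  # placeholder path
    expected = KNOWN_GOOD[image.name]
    if image.exists() and firmware_is_intact(image, expected):
        print("Firmware image matches the known-good digest.")
    else:
        print("Firmware image missing or modified; reflash from a trusted source.")
```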

The effectiveness of the operation is debated, although the logic of the attack is straightforward: Russia wanted to constrain, or preferably eliminate, an important channel of communication for the Ukrainian military during the initial stages of the invasion. Traditional, land-based radios, on which the Ukrainian military relies for most of its communications, work only over a limited geographic range, making it more difficult to employ advanced, long-range weapons systems.41 Landline and conventional telephony could be expected to suffer outages during the opening phases of the war and to struggle to keep up with rapidly moving forces.

Initially, it was widely reported that the Russian strike on ViaSat was effective. On March 15, a senior Ukrainian cybersecurity official, Viktor Zhora, was quoted saying that the attack on ViaSat caused "a really huge loss in communications in the very beginning of the war."42 When asked follow-up questions about his quote, Zhora said at the time that he was unable to elaborate, leading journalists and industry experts to believe that the attack had impacted the Ukrainian military's ability to communicate.43 Several months later, however, on September 26, Zhora revised his initial comments, stating that the hack would have impacted military communications only if satellite communications had been the Ukrainian military's principal medium of communication. The Ukrainian military, he said, instead relies on landlines for communication, with satellites as a backup. He went on to say that "in the case land lines were destroyed, that could be a serious issue in the first hours of war."44 The tension, and potential contradiction, in Zhora's comments underlines the inherent complications of analyzing cyber operations during war: long-term consequences can be difficult to infer from short-term effects, and countries actively seek to control the narratives surrounding conflict.  

The effectiveness of the ViaSat hack comes down to how the Ukrainian military communicates, and how adaptable it was in the early hours of the invasion. It is nonetheless apparent how such a hack could impact military effectiveness. If Russia, or any other belligerent, were able to disrupt satellite communications while simultaneously jamming or destroying landlines, forces on the front lines would be, at best, poorly connected with their superiors. In such a scenario, an army would be cut off from commanders in other locations and unable to report back or receive new directives; it would be stranded until communications could be restored.  

The ViaSat hack had a military objective: to disrupt Ukrainian military access to satellite communications. But the effects were not limited to this objective. The operation had spillover effects that rippled across Europe. In Germany, nearly 6,000 wind turbines lost remote monitoring and control, with roughly 2,000 of those turbines remaining disconnected for nearly a month after the initial hack.45 In France, modems used by emergency services vehicles, including firetrucks and ambulances, were also affected.46

ViaSat is not a purely military target; it is a civilian firm that counts the Ukrainian military as a customer. The targeting of civilian infrastructure with dual civilian and military use has occurred throughout history and has been a center of debate in international law, especially when there are cross-border spillover effects in non-combatant countries. The principle of proportionality under international humanitarian law requires an attacker to target only military objects—defined as objects "whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage"—and to do so in a manner proportional to the military gain foreseen by the operation.47 In practice, however, the attacker determines whether it deems a target to be a military object and a beneficial target and, therefore, what is legitimate. Konstantin Vorontsov, the head of the Russian delegation to the United Nations, attempted to justify Russian actions in October 2022 by saying that the use of civilian space infrastructure to aid the Ukrainian war effort may be a violation of the Outer Space Treaty, thereby rendering this infrastructure a legitimate military target.48 Operations like that against ViaSat are likely to become the norm in modern warfare. As Mauro Vignati, adviser on new digital technologies of warfare at the Red Cross, said in November 2022, insofar as private companies own and operate the information infrastructure of the domain, including infrastructure acting as military assets, "when war start[s], those companies, they are inside the battlefield."49

Physical information infrastructure 

In February 2022, as Russian forces moved to seize airfields and key physical assets in Ukraine, they simultaneously assaulted the physical information infrastructure operating within and beneath the Ukrainian information environment. Russian forces targeted this largely privately operated infrastructure by taking control of assets where possible and destroying them where not, including through a series of air strikes on Ukrainian servers, cables, and cell phone towers.50 As of June 2022, about 15 percent of Ukrainian information infrastructure had been damaged or destroyed; by July, 12.2 percent of homes had lost access to mobile communication services, 11 percent of base stations for mobile operators were out of service, and approximately 20 percent of the country's telecommunications infrastructure was damaged or destroyed.51 By August, "the number of users connecting to the Internet in Ukraine [had] shrunk by at least 16 percent nationwide."52

In some areas of Ukraine, digital blackouts were enforced by Russian troops to cut the local population off from the highly contested information space. In Mariupol, the last cell tower connecting the city with the outside world was tirelessly tended by two Kyivstar engineers, who kept it alive with backup generators that they manually refilled with gasoline. Once the Russians entered the city, however, the Ukrainian soldiers who had been protecting the cell tower location left to engage with the enemy, leaving the Kyivstar engineers alone to tend to their charge. For three days the engineers withstood the bombing of the city until March 21, when Russian troops disconnected the tower and it went silent.53

Russian forces coerced occupied Ukrainian territories onto Russian ISPs—once again through Rostelecom's local agent Miranda Media—and onto Russian mobile service providers.54 Information infrastructure in Ukraine is made up of overlapping networks of mobile service providers and ISPs, a legacy of the country's complicated post-Soviet modernization, and this complexity may have been a boon for its resilience. Russian forces, observed digital-rights researcher Samuel Woodhams, "couldn't go into one office and take down a whole region … There were hundreds of these offices and the actual hardware was quite geographically separated."55 Across eastern and southern Ukraine, including Kherson, Melitopol, and Mariupol, the Russians aimed to subjugate the physical territory, its population, and the Ukrainian information space. In Kherson, Russian forces entered the offices of a Ukrainian ISP and, at gunpoint, forced staff to transfer control to them.56
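Woodhams's point about geographic dispersion can be made concrete with a toy connectivity comparison, sketched below. In a centralized, hub-and-spoke network, removing the hub isolates every remaining site; in a partially meshed network, removing any single node still leaves the rest connected. The topologies in the sketch are invented for illustration and do not model Ukraine's actual networks.

```python
# Toy illustration of why a fragmented, meshed ISP topology is harder to disable than a
# centralized one: remove a single facility and count how many remaining sites can still
# reach each other. Topologies are invented, not real network data.
from collections import deque

def reachable(adjacency: dict[str, set[str]], start: str) -> set[str]:
    """Breadth-first search over an undirected adjacency map."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, set()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

def without(adjacency: dict[str, set[str]], removed: str) -> dict[str, set[str]]:
    """Return a copy of the network with one node (and its links) removed."""
    return {n: {m for m in edges if m != removed}
            for n, edges in adjacency.items() if n != removed}

# Hypothetical topologies: one hub-and-spoke, one partially meshed ring.
HUB_AND_SPOKE = {"hub": {"a", "b", "c", "d"},
                 "a": {"hub"}, "b": {"hub"}, "c": {"hub"}, "d": {"hub"}}
MESHED = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}

if __name__ == "__main__":
    for name, net, target in [("hub-and-spoke", HUB_AND_SPOKE, "hub"),
                              ("meshed", MESHED, "a")]:
        damaged = without(net, target)
        start = next(iter(damaged))
        print(f"{name}: {len(reachable(damaged, start))} of {len(damaged)} "
              f"remaining sites still connected after losing '{target}'")
```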

Russian bombardment of telecommunications antennas in Kyiv (Attribution: Mvs.gov.ua)

Routing the internet and communications access of occupied territories through Russia meant that Moscow could suppress communications to and from these occupied areas, especially through social media and Ukrainian news sites, sever access to essential services in Ukraine, and flood the populations with its own propaganda, as was proved in Crimea in 2014. Moving forward, Russia could use this dependency to “disconnect, throttle, or restrict access to the internet” in occupied territories, cutting off the occupied population from the Ukrainian government and the wider Ukrainian and international community.57

The Kremlin's primary purpose in the invasion of Ukraine was, and is, to remove the Ukrainian government and, likely, to install a pro-Russian puppet government, bringing an end to an independent Ukraine.58 Isolating the information environment of occupied populations, in concert with anti-Ukrainian-government disinformation—such as the repeated false allegations that President Zelenskyy had fled the country and abandoned the Ukrainian people59—was therefore a means to sway the allegiances, or at least dilute the active resistance, of the Ukrainian people.60 Without connectivity to alternative outlets, the occupying Russians could promote false and largely uncontested claims about the progress of the war. In early May 2022, for example, when Kherson lost connectivity for three days, the deputy of the Kherson Regional Council, Serhiy Khlan, reported that the Russians "began to spread propaganda that they were in fact winning and had captured almost all of Mykolaiv."61 

Russia used its assault on the information environment to undermine the legitimacy of the Ukrainian government and its ability to fulfill its duties to the Ukrainian people. Whether through complete connectivity blackouts or through the restrictions imposed by Russian networks, the Russians blocked communications from the Ukrainian government to occupied populations—not least President Zelenskyy's June 13, 2022, address, intended above all for those very populations, in which he promised to liberate all occupied Ukrainian land and reassured those populations that they had not been forgotten. Zelenskyy acknowledged the Russian barrier between himself and Ukrainians in occupied territories, saying, "They are trying to make people not just know nothing about Ukraine… They are trying to make them stop even thinking about returning to normal life, forcing them to reconcile."62

Isolating occupied populations from the Ukrainian information space is intended, in large part, said Stas Prybytko, the head of mobile broadband development within the Ukrainian Ministry of Digital Transformation, to "block them from communicating with their families in other cities and keep them from receiving truthful information."63 Throughout 2022, much of what the international community knew about the war came—through Twitter, TikTok, Telegram, and more—from Ukrainians themselves. From videos of indiscriminate Russian shelling of civilian neighborhoods to recordings tracking Russian troop movements, Ukrainians used their personal devices to capture and communicate the progress of the war directly to living rooms, board rooms, and government offices around the world.64 The power of this distributed information collection and open-source intelligence relies upon mobile and internet access. The accounts shared after Ukrainian towns and cities were liberated from Russian occupation lay bare just how much suffering, arrest, torture, and murder was kept hidden from international view by the purposeful isolation of the information environment and the constant surveillance of Ukrainians' personal devices.65 The war in Ukraine has highlighted the growing impact of distributed open-source intelligence, gathered during the conduct of war by civilians in Ukraine and by the wider open-source research community through various social media and messaging platforms.66 

Russian operations against digital infrastructure companies, especially transnational ones, can mostly be categorized as disruption, degradation, and information gathering, with Russian or Russian-aligned hackers moving in and through the Ukrainian information environment. The attacks against Ukrainian physical infrastructure, however, are of a slightly different character. Invading forces employed physically mediated cyberattacks—a method defined by Herb Lin as "attacks that compromise cyber functionality through the use of or the threat of physical force"—to pursue the complete destruction, or the seizure and occupation, of this infrastructure.67 Both approaches begin with the same purpose: to create a vacuum of information between the Ukrainian government, the Ukrainian people, and the global population, effectively severing the connection between the Ukrainian information environment and the global environment. But the seizure of this infrastructure goes a step further: to occupy the Ukrainian information environment and pull its infrastructure and its people into an isolated, controlled Russian information space. 

Reclaiming the Ukrainian information environment 

Preparation of the environment 

The Russian assault on the Ukrainian information environment is far from unanswered. Russian efforts have been countered by the Ukrainian government in concert with allied states and with technology companies located both within and outside Ukraine. Russia's aim to pull occupied Ukrainian territory onto Russian networks, where it can be controlled and monitored, has been well understood, and Ukraine has been hardening its information infrastructure since the initial 2014 invasion. Ukraine released its Cyber Security Strategy in 2016, which laid out the government's priorities in this space, including defense against the range of active cyber threats it faces, with an emphasis on the "cyber protection of information infrastructure."68 The government initially focused on centralizing its networks in Kyiv to make it more difficult "for Russian hackers to penetrate computers that store critical data and provide services such as pension benefits, or to use formerly government-run networks in the occupied territories to launch cyberattacks on Kyiv."69

As part of its digitalization and security efforts, the Ukrainian government also sought out new partners, both public and private, to build and bolster its threat detection and response capabilities. Before and since the 2022 invasion, the Ukrainian government has worked with partner governments and an array of technology companies around the world to create resilience through increased connectivity and digitalization. 

Bolstering Ukrainian connectivity 

Since the 2014 invasion and annexation of Crimea, telecommunications operators serving Ukraine have developed plans to prepare for future Russian aggression. Lifecell, the third-largest Ukrainian mobile telephone operator, prepared its network for an anticipated Russian attack: the company shifted its office archives, documentation, and critical network equipment from eastern to western Ukraine, where they would be better insulated from violence, added network redundancy, and increased the coordination and response capabilities of its staff.70 Similarly, Kyivstar and Vodafone Ukraine increased their network bandwidth to withstand extreme demand. In October 2021, these three companies initiated an infrastructure-sharing agreement to expand LTE (Long Term Evolution) networks into rural Ukraine and, in cooperation with the Ukrainian government, expanded the 4G telecommunications network to bring "mobile network coverage to an estimated 91.6 per cent of the population."71 

The expansion and improvement of Ukrainian telecommunications continued through international partnerships as well. Datagroup, for example, announced a $20 million partnership in 2021 with Cisco, a US-based digital communications company, to modernize and expand the bandwidth of its extensive networks.72 Since the February 2022 invasion, Cisco has also worked with the French government to provide over $5 million of secure, wireless networking equipment and software, including firewalls, for free to the Ukrainian government.73

This network expansion is an integral part of the Ukrainian government's digitalization plans for the country, championed by President Zelenskyy. Rather than the invasion putting an end to these efforts, Deputy Prime Minister and Minister for Digital Transformation Mykhailo Fedorov claimed that during the war "digitalization became the foundation of all our life. The economy continues to work … due to digitalization."74 The digital provision of government services has created an alternate pathway for Ukrainians to engage in the economy and with their government. The flagship government initiative Diia, launched in February 2020, is a digital portal through which its 21.7 million Ukrainian users can access legal identification, make social services payments, register a business, and even register property damage from Russian missile strikes.75 The Russian advance, and the physical destruction and displacement it has caused, means that the ability to provide government services through alternate and resilient means is more essential than ever, placing an additional premium on defending Ukrainian information infrastructure. 

Backing up a government 

As Russian forces built up along Ukraine’s borders, Ukrainian network centralization may have increased risk, despite the country’s improved defense capabilities. In preparation for the cyber and physical attacks against the country’s information infrastructure, Fedorov moved to amend Ukrainian data protection laws to allow the government to store and process data in the cloud and worked closely with several technology companies, including Microsoft, Amazon Web Services, and Google, to effect the transfer of critical government data to infrastructure hosted outside the country.76 Cloud computing describes “a collection of technologies and organizational processes which enable ubiquitous, on-demand access to a shared pool of configurable computing resources.”77 Cloud computing is dominated by the four hyperscalers—Amazon, Microsoft, Google, and Alibaba—that provide computing and storage at enterprise scale and are responsible for the operation and security of data centers all around the world, any of which could host customer data according to local laws and regulations.78 
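The data migration described above rests on a basic property of cloud storage: the same object can be replicated across data centers in several jurisdictions, so the loss of any one facility, or of the territory hosting it, does not destroy the record. The sketch below illustrates that replication pattern in a provider-agnostic way; it assumes hypothetical storage endpoints rather than any specific vendor's API, which would handle this through managed replication features.

```python
# Provider-agnostic sketch of replicating a record to several storage locations.
# The Protocol, region names, and in-memory stores below are hypothetical stand-ins;
# real deployments would use a specific cloud provider's SDK and managed replication.
from typing import Protocol

class ObjectStore(Protocol):
    region: str
    def put(self, key: str, data: bytes) -> None: ...

class InMemoryStore:
    """Stand-in for a remote object store in a given region."""
    def __init__(self, region: str) -> None:
        self.region = region
        self._objects: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

def replicate(stores: list[ObjectStore], key: str, data: bytes) -> list[str]:
    """Write the same object to every configured region; return the regions written."""
    written = []
    for store in stores:
        store.put(key, data)
        written.append(store.region)
    return written

if __name__ == "__main__":
    stores = [InMemoryStore(r) for r in ("eu-west", "eu-central", "us-east")]  # placeholder regions
    regions = replicate(stores, "registry/records-2022-02.bin", b"...snapshot bytes...")
    print(f"Replicated to {len(regions)} regions: {', '.join(regions)}")
```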

According to its April 2022 Ukraine war report, Microsoft "committed at no charge a total of $107 million of technology services to support this effort" and renewed the relationship in November, promising to ensure that "government agencies, critical infrastructure and other sectors in Ukraine can continue to run their digital infrastructure and serve citizens through the Microsoft Cloud," at a value of about $100 million.79 Amazon and Google have also committed to supporting cloud services for the Ukrainian government, for select companies, and for humanitarian organizations focused on aiding Ukraine.80 Validating the Ukrainian government's concerns, Russian missile attacks targeted the government's main data center in Kyiv soon after the invasion, partially destroying the facility, and cyberattacks aggressively tested Ukrainian networks.81

Unlike other lines of aid provided by the international community to strengthen the defense of the Ukrainian information environment, cloud services are provided only by the private sector.82 While this aid has had a transformative effect on Ukrainian defense, that transformative quality has also raised concerns. Microsoft, in its special report on Ukraine, several times cites its cloud services as one of the determining factors that limited the effect of Russian cyber and kinetic attacks on Ukrainian government data centers, and details how its services in particular were instrumental in this defense.83 In the same report, Microsoft claims to be most worried about those states and organizations that do not use cloud services, and provides corroborating data.84 Microsoft and other technology companies offering their services at a reduced rate, or for free, are acting—at least in part—out of a belief in the rightness of the Ukrainian cause. They are, however, still private companies with responsibilities to shareholders or board members, and they still must seek profit. Providing services, especially foundational information infrastructure like cloud services, is likely to create long-term business relationships with the Ukrainian government and potentially with other governments and clients who see the effectiveness of those services illustrated through the defense of Ukraine. 

Mounting an elastic defense  

Working for wireless 

Alongside and parallel to the Ukrainian efforts to defend and reclaim occupied physical territory is the fight for Ukrainian connectivity. Ukrainian telecommunications companies have been integral to preserving connectivity to the extent possible. In March 2022, Ukrainian telecom operators Kyivstar, Vodafone Ukraine, and Lifecell made the decision to provide free national roaming across their mobile networks, creating redundancy and resilience in the mobile network to combat frequent service outages.85 The free mobile service provided by these companies is valued at more than UAH 980 million (USD 26.8 million).86 In addition, Kyivstar in July 2022 committed UAH 300 million (about USD 8.2 million) to the modernization of Ukraine's information infrastructure in cooperation with the Ukrainian Ministry of Digital Transformation.87 The statements that accompanied the commitments from Kyivstar and Lifecell—both headquartered in Ukraine—emphasized each company's dedication to Ukrainian defense and its role in it, regardless of the short-term financial impact.88 These are Ukrainian companies with Ukrainian infrastructure and Ukrainian customers, and their fate is tied inextricably to the outcome of this war. 

As Russian forces advanced and attempted to seize control of information infrastructure, in at least one instance, Ukrainian internet and mobile service employees sabotaged their own equipment first. Facing threats of imprisonment and death from occupying Russians, employees in several Ukrtelecom facilities withstood pressure to share technical network details and instead deleted key files from the systems. According to Ukrtelecom Chief Executive Officer Yuriy Kurmaz, “The Russians tried to connect their control boards and some equipment to our networks, but they were not able to reconfigure it because we completely destroyed the software.”89 Without functional infrastructure, Russian forces struggled to pull those areas onto Russian networks.  

The destruction of telecommunications infrastructure has left these and many other areas along the war front without reliable information infrastructure, either wireless or wired. While the Ukrainian government and a bevy of local and international private sector companies battle for control of on-the-ground internet and communications infrastructure, they have also pursued new pathways to connectivity.

Searching for satellite 

Two days after the invasion, Deputy Prime Minister Fedorov tweeted at Elon Musk, the Chief Executive Officer of SpaceX, that “while you try to colonize Mars — Russia try [sic] to occupy Ukraine! While your rockets successfully land from space — Russian rockets attack Ukrainian civil people! We ask you to provide Ukraine with Starlink stations and to address sane Russians to stand.”90 Just another two days later, Fedorov confirmed the arrival of the first shipment of Starlink stations.91  

Starlink, a network of low-orbit satellites working in constellations operated by SpaceX, relies on satellite receivers no larger than a backpack that are easily installed and transported. Because Russian targeting of cellular towers made communications coverage unreliable, says Fedorov, the government “made a decision to use satellite communication for such emergencies” from American companies like SpaceX.92 Starlink has proven more resilient than any other alternative throughout the war. Due to the low orbit of Starlink satellites, they can broadcast to their receivers at relatively higher power than satellites in higher orbits. There has been little reporting on successful Russian efforts to jam Starlink transmissions, and the Starlink base stations—the physical, earthbound infrastructure that communicates directly with the satellites—are located on NATO territory, ensuring any direct attack on them would be a significant escalation in the war.93
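Part of the advantage described here follows directly from orbital altitude: free-space path loss grows with the square of distance, so a signal from a satellite at roughly 550 kilometers loses far less strength on its way to a receiver than one from a geostationary satellite at about 35,786 kilometers. The back-of-the-envelope calculation below uses the standard free-space path loss formula with illustrative figures; it is not SpaceX's actual link budget.

```python
# Back-of-the-envelope comparison (not SpaceX's actual link budget) of free-space path
# loss for a low-earth-orbit satellite versus a geostationary one, using the standard
# formula FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45.
import math

def fspl_db(distance_km: float, frequency_ghz: float) -> float:
    """Free-space path loss in decibels for a given slant distance and frequency."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_ghz) + 92.45

if __name__ == "__main__":
    freq = 12.0          # GHz, a representative Ku-band downlink frequency
    leo_km = 550.0       # approximate Starlink orbital altitude, used as slant distance
    geo_km = 35_786.0    # geostationary orbital altitude
    leo, geo = fspl_db(leo_km, freq), fspl_db(geo_km, freq)
    print(f"LEO path loss: {leo:.1f} dB, GEO path loss: {geo:.1f} dB")
    print(f"Difference: {geo - leo:.1f} dB "
          f"(~{10 ** ((geo - leo) / 10):,.0f}x weaker received power from GEO)")
```

The roughly 36 dB difference in this illustrative case corresponds to a several-thousand-fold difference in received power, which is one reason small, low-gain terminals can work with a low-orbit constellation.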

Starlink has been employed across sectors almost since the war began. President Zelenskyy has used the devices himself when delivering addresses to the Ukrainian people, as well as to foreign governments and populations.94 Fedorov has said that sustained missile strikes against energy and communication infrastructure have been effectively countered through the deployment of Starlink devices that can restore connection where it is most needed. He even called the system “an essential part of critical infrastructure.”95   

Starlink has also found direct military applications. The portability of these devices means that Ukrainian troops can often, though not always, stay connected to command elements and peer units while deployed.96 Ukrainian soldiers have also used internet connections to coordinate attacks on Russian targets with artillery-battery commanders.97 The Aerorozvidka, a specialist air reconnaissance unit within the Ukrainian military that conducts hundreds of information gathering missions every day, has used Starlink devices in areas of Ukraine without functional communications infrastructure to “monitor and coordinate unmanned aerial vehicles, enabling soldiers to fire anti-tank weapons with targeted precision.”98 Reports have also suggested that a Starlink device was integrated into an unmanned surface vehicle discovered near Sevastopol, potentially used by the Ukrainian military for reconnaissance or even to carry and deliver munitions.99 According to one Ukrainian soldier, “Starlink is our oxygen,” and were it to disappear, “our army would collapse into chaos.”100

The initial package of Starlink devices included 3,667 terminals donated by SpaceX and 1,333 terminals purchased by the United States Agency for International Development (USAID).101 SpaceX initially offered free Starlink service for all the devices, an offer that Musk later walked back and then reinstated. CNN obtained a letter sent by SpaceX to the Pentagon in September 2022 stating that the company would be unable to continue funding Starlink service in Ukraine. The letter requested that the Pentagon pay what would amount to "more than $120 million for the rest of the year and could cost close to $400 million for the next 12 months." It also clarified that the vast majority of the 20,000 Starlink devices sent to Ukraine were financed, at least in part, by outside funders such as the US, UK, and Polish governments.102

After the letter was sent, but before it became public, Musk got into a Twitter spat with Ukrainian diplomat Andrij Melnyk after Musk tweeted, on October 3, proposed terms of peace between Russia and Ukraine. Musk's proposal included Ukraine renouncing its claims to Crimea, ensuring water supply to Crimea, and pledging to remain neutral. The plan was rejected by respondents to the public poll Musk included in the tweet, and Melnyk replied, tagging Musk: "Fuck off is my very diplomatic reply to you @elonmusk."103 After CNN published the SpaceX letter to the Pentagon, Musk at first seemed to double down on his decision to reduce SpaceX funding. On October 14, he responded to a tweet summarizing the incident by justifying a possible reduction in SpaceX assistance: "We're just following his [Melnyk's] recommendation," even though the letter had been sent before the Twitter exchange. Musk then tweeted the following day, "The hell with it … even though Starlink is still losing money & other companies are getting billions of taxpayer $, we'll just keep funding Ukraine govt for free."104 Two days later, in response to a Politico tweet reporting that the Pentagon was considering covering the Starlink service costs, Musk stated that "SpaceX has already withdrawn its request for funding."105 Musk's characterization of SpaceX's contribution to the war effort has sparked confusion and reprimand, with his public remarks often implying that his company is footing the entire bill when, in fact, tens of millions of dollars' worth of terminals and service are being covered by several governments every month.  

The Starlink saga, however, was not over. Several weeks later, in late October, 1,300 Starlink terminals in Ukraine—purchased in March 2022 by a British company for use in Ukrainian combat-related operations—were disconnected, allegedly due to lack of funding, causing a communications outage for the Ukrainian military.106 Although service was restored, the entire episode eroded confidence in SpaceX as a guarantor of flexible connectivity in Ukraine. In November 2022, Fedorov noted that while Ukraine has no intention of breaking off its relationship with Starlink, the government is exploring work with other satellite communications operators.107 Starlink is not the only satellite communications network of its kind, but its competitors have not yet reached the same level of operation. Satellite communications company OneWeb, based in London with ties to the British military, is only now launching its satellite constellation, after the Russian invasion of Ukraine required the company to change its launch partner from Roscosmos to SpaceX.108 The US Space Development Agency, within the United States Space Force, will launch the first low-earth-orbit satellites of the new National Defense Space Architecture in March 2023. Other, more traditional satellite companies cannot provide the same flexibility as Starlink's small, transportable receivers.

UA Support Forces use Starlink (Attribution: Mil.gov.ua)

With the market effectively cornered for the moment, SpaceX can dictate the terms, including the physical bounds, of Starlink's operations, thereby wielding immense influence on the battlefield. Starlink devices used by advancing Ukrainian forces near the front, for example, have reportedly suffered inconsistent reliability.109 Indeed, CNN reported on February 9 that this bounding was a deliberate attempt to separate the devices from direct military use; as SpaceX President Gwynne Shotwell explained, "our intent was never to have them use it for offensive purposes."110 The bounding decision, similar to the rationale behind the company's refusal to activate Starlink service in Crimea, was likely made to contain escalation, especially escalation by means of SpaceX devices.111

But SpaceX is not the only technology company bounding the area of operation of its products to avoid playing—or being perceived to play—a role in potential escalation. On March 16, 2022, Minister Fedorov tweeted at DJI, a Chinese drone producer, "@DJIGlobal are you sure you want to be a partner in these murders? Block your products that are helping russia to kill the Ukrainians!"112 DJI responded directly to the tweet the same day, saying, "If the Ukrainian government formally requests that DJI set up geofencing throughout Ukraine, we will arrange it," but pointed out that such geofencing would inhibit all users of its products in Ukraine, not just Russians.113
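At its core, geofencing of the kind DJI describes reduces to testing whether a device's reported position falls inside a defined zone and limiting its operation if so. The sketch below shows that core test with an invented rectangular zone and a standard ray-casting point-in-polygon check; production geofencing systems (DJI's or anyone else's) involve far more, including altitude limits, buffer zones, and authenticated zone updates.

```python
# Minimal sketch of a geofence check: is a reported GPS position inside a restricted
# polygon? Coordinates and the zone are invented placeholders for illustration only.
def point_in_polygon(lat: float, lon: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: count how many polygon edges a ray from the point crosses."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        crosses = (lon1 > lon) != (lon2 > lon)
        if crosses and lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
            inside = not inside
    return inside

# Invented rectangular no-fly zone, given as latitude/longitude vertices.
RESTRICTED_ZONE = [(50.0, 30.0), (50.0, 31.0), (51.0, 31.0), (51.0, 30.0)]

if __name__ == "__main__":
    for position in [(50.5, 30.5), (49.5, 30.5)]:  # one point inside, one outside
        blocked = point_in_polygon(*position, RESTRICTED_ZONE)
        print(f"{position}: {'flight restricted' if blocked else 'flight permitted'}")
```

As DJI's reply notes, a check like this cannot distinguish between users: once a zone is enforced, every device in it is constrained, which is why geofencing an entire country affects defenders and attackers alike.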

While Russia continues to bombard the Ukrainian electrical grid, Starlink terminals have grown more expensive for new Ukrainian consumers, increasing from $385 earlier this year to $700, although it is unclear whether this price increase also affected government purchasers.114 According to Andrew Cavalier, a technology industry analyst with ABI Research, the indispensability of the devices gives "Musk and Starlink a major head start [against its competitors] that its use in the Russia–Ukraine war will only consolidate."115 Indeed, SpaceX was valued at $127 billion in May 2022, and the company raised $2 billion in the first seven months of 2022.116 For SpaceX, the war in Ukraine has been an impressive showcase of Starlink's capabilities and has proven the worth of its services to future customers. The company recently launched a new initiative, Starshield, intended to leverage "SpaceX's Starlink technology and launch capability to support national security efforts. While Starlink is designed for consumer and commercial use, Starshield is designed for government use."117 It is clear that SpaceX intends to capitalize on the very public success of its Starlink network in Ukraine.

Reclaiming Territory 

The Russian assault is not over, but Ukraine has reclaimed “54 percent of the land Russia has captured since the beginning of the war” and the front line has remained relatively stable since November 2022.118 Videos and reports from reclaimed territory show the exultation of the liberated population. As Ukrainian military forces reclaim formerly occupied areas, the parallel reclamation of the information environment, by or with Ukrainian and transnational information infrastructure operators, follows quickly. 

In newly liberated areas, Starlink terminals are often the first tool for establishing connectivity. In Kherson, the first regional capital to fall to the Russian invasion, reclaimed by Ukrainian troops on November 11, 2022, residents lined up in public spaces to connect to the internet through Starlink.119 The Ministry of Digital Transformation provided Starlink devices to the largest service providers, Vodafone and Kyivstar, to facilitate communication while their engineers repaired the infrastructure needed to reestablish mobile and internet service.120 A week after Kherson was recaptured, five Kyivstar base stations were operational and Vodafone had reestablished coverage over most of the city.121

Due to the importance of reclaiming the information space, operators are working just behind Ukrainian soldiers to reconnect populations in reclaimed territories to the Ukrainian and global information environment as quickly as possible, which means working in very dangerous conditions. In the Sumy region, a Ukrtelecom vehicle pulling up to a television tower drove over a land mine, injuring three of the passengers and killing the driver.122 Stanislav Prybytko, the head of the mobile broadband department in the Ukrainian Ministry of Digital Transformation, says, "It's still very dangerous to do this work, but we can't wait to do this, because there are a lot of citizens in liberated villages who urgently need to connect."123 Prybytko and his eleven-person team have been central to the Ukrainian effort to stitch Ukrainian connectivity back together. The team works across a public-private collaboration, coordinating with various government officials and mobile service providers to repair critical nodes in the network and to reestablish communications and connectivity.124 According to Ukrainian government figures, 80 percent of liberated settlements have at least partially restored internet connection, and more than 1,400 base stations have been rebuilt by Ukrainian mobile operators since April 2022.125

Key Takeaways 

The information environment is a key domain through which this war is being contested. The Russian government has demonstrated for over a decade the importance it places on control of the information environment, both domestically and as part of campaigns to expand the Russian sphere of influence abroad. Yet, despite this Russian focus, the Ukrainian government has demonstrated incredible resilience against physical assaults, cyberattacks, and disinformation campaigns against and within the Ukrainian information environment and has committed to further interlacing government services and digital platforms.  

The centrality of this environment to the conduct of this war means that private actors are necessarily enmeshed in the conflict. As providers of products and services used for Ukrainian defense, these companies are an important part of the buttressing structure of that defense. The centrality of private companies in the conduct of the war in Ukraine brings to light new and increasingly important questions about what it means for companies to act as information infrastructure during wartime, including:  

  1. What is the complete incentive structure behind a company’s decision to provide products or services to a state at war? 
  2. How dependent are states on the privately held portions of the information environment, including infrastructure, tools, knowledge, data, skills, and more, for their own national security and defense?  
  3. How can the public and private sectors work together better as partners to understand and prepare these areas of reliance during peace and across the continuum of conflict in a sustained, rather than ad hoc, nature? 

Incentives 

The war in Ukraine spurred an exceptional degree of cooperation and aid from private companies within Ukraine and around the globe. Much of the public messaging around the private sector's assistance to Ukrainian defense centers on the conviction of company leadership and staff that they were compelled by a responsibility to act. This is certainly one factor in their decisions. But the depth of private actor involvement in this conflict demands a more nuanced understanding of the full picture of incentives and disincentives that drive a company's decision to enter into new, or expand upon existing, business relationships with and in a country at war. What risks, for example, do companies assume in a war in which Russia has already demonstrated its conviction that private companies are viable military targets? The ViaSat hack was a reminder of the uncertainty that surrounds the designation of dual-use technology, and the impact that such designations have in practice. What role did public recognition play in companies' decisions to provide products and services, and how might this recognition influence future earnings potential? For example, while their remarks differed in tone, both Elon Musk on Twitter and Microsoft in its special report on Ukraine publicly claimed partial credit for the defense of Ukraine.  

As the war continues into its second year, these questions are important to maintaining Ukraine’s cooperation with these entities. With a better understanding of existing and potential incentives, the companies, the United States, and its allies can make the decision to responsibly aid Ukraine much easier.  

Dependencies 

Private companies play an important role in armed conflict, operating much of the infrastructure that supports the information environment through which both state and non-state actors compete for control. The war in Ukraine has illustrated the willingness of private actors, from Ukrainian telecommunications companies to transnational cloud and satellite companies, to participate as partners in the defense of Ukraine. State dependence on privately held physical infrastructure is not unique to the information environment, but state dependence on infrastructure that is headquartered and operated extraterritorially is a particular feature. 

Prior to and throughout the war, the Ukrainian government has coordinated successfully with local telecommunication companies to expand, preserve, and restore mobile, radio, and internet connectivity to its population. This connectivity preserved what Russia was attempting to dismantle—a free and open Ukrainian information environment through which the Ukrainian government and population can communicate and coordinate. The Ukrainian government has relied on these companies to provide service and connectivity, working alongside them before and during the war to improve infrastructure and to communicate priorities. These companies are truly engaging as partners in Ukrainian defense, especially because this information infrastructure is not just a medium through which Russia launches attacks but an environment that Russia is attempting to seize control of. This dependence has not been unidirectional—the companies themselves are inextricably linked to this conflict through their infrastructure, employees, and customers in Ukraine. Each is dependent to some degree on the other and during times of crisis, their incentives create a dynamic of mutual need. 

The Ukrainian government has also relied on a variety of transnational companies through the provision of technology products or services and information infrastructure. As examined in this report, two areas where the involvement of these companies has been especially impactful are cloud services and satellite internet services. Cloud services have preserved data integrity and security by moving information to data centers distributed around the world, outside of Ukrainian territory and under the cyber protection of the cloud service companies. Satellite services have enabled flexible and resilient connectivity, once again located and run primarily outside of Ukraine. These companies can provide essential services within the information environment and the physical environment of Ukraine, but they are not fundamentally reliant on the integrity of the country. This dynamic is heightened by the fact that cloud service providers like Microsoft, Amazon Web Services, and Google, and satellite internet service providers like SpaceX's Starlink, operate within a market with global reach and very few competitors. While these companies and others have made the laudable decision to contribute to Ukrainian defense, the fact is that had they not, there are few, if any, other companies with comparable capabilities and infrastructure at scale. Additionally, there is very little Ukraine or even the US government could have done to directly provide the same capabilities and infrastructure.  

Coordination 

Built into the discussions around dependency and incentives is the need for governments and the private companies that own and operate information infrastructure to coordinate with each other from a more extensive foundation. While coordination with Ukrainian companies and some transnational companies emerged from sustained effort, many instances of private sector involvement were forged on an ad hoc basis and therefore could not be planned for in advance. The ad hoc approach can produce rapid results, as seen in Minister Fedorov's tweet at Elon Musk and the receipt of Starlink devices just days later. While this approach has been wielded by the Ukrainian government, and the Ministry of Digital Transformation in particular, to great effect, the very same example illustrates the complexity of transforming ad hoc aid into sustainable partnerships. Sustainability is especially important when states face threats short of open war, across the continuum of insecurity and conflict where many of these capabilities and infrastructures will continue to be relied upon. Security and defense in the information environment require states to work in coordination with a diverse range of local and transnational private actors. 

Recommendations 

Key recommendations from this paper ask the US government, in coordination with the Ukrainian government, to better understand the incentives that surround private sector involvement, to delineate states’ dependency on private information infrastructure, and to improve long-term public-private coordination through three pathways: 

  • Define support parameters. Clarify how private technology companies can and should provide aid.
  • Track support. Create a living database to track the patterns of technological aid to Ukraine from US private companies.
  • Facilitate support requests. Add to the resilience of the Ukrainian information environment by facilitating US private aid.

Define support parameters 

Private information infrastructure companies will continue to play a key role in this war. However, there are a number of unresolved questions regarding the decisions these companies are making about whether, and how, to provide support to the Ukrainian government to sustain its defense. A significant barrier may be the lack of clarity about the risks of partnership in wartime, which may disincentivize action or alter existing partnerships. Recent SpaceX statements surrounding the bounding of Starlink use are an example, at least in part, of just such a risk calculus in action. The US government and its allies should release a public directive clarifying how companies can ensure that their involvement is in line with US and international law—especially for dual-use technologies. Reaffirming, with consistent guidelines, how the United States defines civilian participation in times of war will be crucial for ensuring that such actions do not unintentionally legitimize private entities as belligerents and legitimate targets in wartime. At the direction of the National Security Advisor, the US Attorney General and Secretary of State, working through the Office of the Legal Adviser at the State Department, should issue public guidance on how US companies can provide essential aid to Ukraine while avoiding the designation of legitimate military target or combatant under the best available interpretation of prevailing law. 

Track support 

While a large amount of support for Ukraine has been given directly by or coordinated through governments, many private companies have started providing technological support directly to the Ukrainian government. Some private companies, especially those with offices or customers in Ukraine, got in touch directly with, or were contacted by, various Ukrainian government offices, often with specific requests depending on the company’s products and services.126 

However, the US government does not have a full and complete picture of this assistance, which limits the ability of US policymakers to track the implications of changing types of support or of the changing nature of the conflict. Policymakers should have access not only to what kind of support is being provided by private US companies, but also to the projected period of involvement, the types of support being requested of but denied by companies (cases in which the US government may be able to act as an alternative provider), and the types of support being supplied by private sector actors without significant government equity or involvement. A more complete mapping of this assistance and its dependency structure, along the lines sketched below, would make it possible for policymakers and others to assess its impact and effectiveness. This data, were it or some version of it publicly available, would also help the private companies providing the support to better understand how their contributions fit within the wider context of US assistance and to communicate the effect their products or services are having to stakeholders and shareholders. Such information may play a role in a company's decision to partner or abstain in the future.
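One way to make the proposed tracking concrete is a simple relational schema capturing the dimensions identified above: the providing company, the type of support, the recipient, the projected period of involvement, and the status of the request, including support that was declined. The sketch below uses Python's built-in sqlite3 module; the field names, categories, and example record are illustrative placeholders, not a proposed official standard.

```python
# Illustrative sketch of a "living database" of private-sector technological support,
# using Python's built-in sqlite3. Field names, categories, and the sample record are
# placeholders meant to mirror the dimensions discussed above, not an official schema.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS support_records (
    id              INTEGER PRIMARY KEY,
    company         TEXT NOT NULL,          -- providing company
    support_type    TEXT NOT NULL,          -- e.g. 'cloud services', 'satellite terminals'
    recipient       TEXT NOT NULL,          -- receiving Ukrainian entity
    status          TEXT NOT NULL,          -- 'requested', 'provided', 'declined', 'ended'
    start_date      TEXT,                   -- ISO 8601 date
    projected_end   TEXT,                   -- projected period of involvement
    est_value_usd   REAL,                   -- estimated value, if disclosed
    govt_involved   INTEGER DEFAULT 0       -- 1 if a government facilitated the support
);
"""

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")  # in-memory database, sufficient for the illustration
    conn.executescript(SCHEMA)
    conn.execute(
        "INSERT INTO support_records "
        "(company, support_type, recipient, status, start_date, projected_end, est_value_usd, govt_involved) "
        "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        ("ExampleCo", "secure networking equipment", "Ministry X", "provided",
         "2022-03-01", "2023-03-01", 5_000_000.0, 1),  # invented record
    )
    for row in conn.execute("SELECT company, support_type, status FROM support_records"):
        print(row)
```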

The US government should create a collaborative task force to track US-based private sector support to Ukraine. Because of the wide range of equities across the US government in this area, this team should be led by the State Department's Bureau of Cyberspace and Digital Policy and include representatives from USAID, the Department of Defense's Cyber Policy Office, the National Security Agency's Collaboration Center, and the Cybersecurity and Infrastructure Security Agency's Joint Cyber Defense Collaborative. This task force should initially focus on creating a picture of public-private support to Ukraine from entities within the United States, but its remit could extend to work with allies and partners, creating a more complete picture of international public-private support.

Facilitate support requests 

Tracking the technical support that is requested, promised, and delivered to the Ukrainian government is an important first step toward better understanding the critical and evolving role that the private sector plays in conflict. But closer tracking, perhaps by an associated body, could go further by acting as a process facilitator. Government offices and agencies have long facilitated private aid, but states are now increasingly able to interact with, and request support from, private companies directly, especially for smaller quantities or more specific products and services. While this pathway can be more direct and efficient, it also requires a near-constant churn of request, provision, and renewal actions from private companies and Ukrainian government officials.

Private organizations have stepped into this breach, including the Cyber Defense Assistance Collaboration (CDAC), founded by Greg Rattray and Matthew Murray and now part of the US-based nonprofit CRDF Global. CDAC works with a number of US private technology companies, as well as the National Security and Defense Council of Ukraine and the Ukrainian think tank Global Cyber Cooperative Center, to match the specific needs of Ukrainian government and state-owned enterprises with products and services offered by companies working in coordination.127

The growth and reach of this effort demonstrate the potential impact that a government-housed, or even government-sponsored, mechanism could have in increasing the capacity to facilitate requests from the Ukrainian government, decreasing the number of bureaucratic steps required of Ukrainian government officials while increasing the amount and quality of support they receive. In addition, government facilitation would ease progress toward the previously stated recommendations by building in clarity around what kind of support can be provided and by putting facilitation and aid tracking within a single process. As discussed above, this facilitation should start with a focus on US public-private support but could grow to work alongside similar allied efforts. This could include, for example, coordination with the United Kingdom's Foreign, Commonwealth and Development Office (FCDO) program, which "enables Ukrainian agencies to access the services of commercial cybersecurity companies."128 Crucially, this task force, helmed by the State Department's Bureau of Cyberspace and Digital Policy, would act as a facilitator, not a restricting body. Its mission in this task would be to make connections and provide information.

In line with tracking, US government facilitation would enable government entities to communicate where assistance can be most useful, such as shoring up key vulnerabilities or ensuring that essential defense activities do not depend on a single private sector entity or a single source of private sector assistance. A company's financial situation or philanthropic priorities are always subject to change, and the US government should be aware of such risks and create resilience through redundancy.

Central to this resilience will be the provision of support to bolster key nodes in the Ukrainian telecommunications network against not just cyber attacks but also physical assault, including support such as firewalls, mine-clearing equipment, and power generators. Aiding the Ukrainian government in the search for another reliable satellite communications partner offering flexibility similar to Starlink's is also necessary, and a Pentagon representative has confirmed that such a process is underway, following Musk's various and contradictory statements in October regarding the future of SpaceX's aid to Ukraine.129 Regardless, the entire SpaceX experience illustrates the need to address single dependencies in advance whenever possible.

A roadblock to ensuring assistance redundancy is the financial ability of companies to provide products and services to the Ukrainian government without charge or to the degree necessary. While the US government does provide funding for private technological assistance (as in the Starlink example), creating a pool of funding tied to the aforementioned task force and overseen by the State Department's Bureau of Cyberspace and Digital Policy would give companies increased flexibility to cover areas of single dependence, even in instances that require piecemeal rather than one-to-one redundancy. As previously discussed, many companies are providing support out of a belief that it is the right thing to do, both for their customers and as members of a global society. Whether that support is paid or pro bono, public or private, a mechanism that clarifies what private sector support is permissible, tracks the landscape of US private support to Ukraine, and facilitates support requests would make it easier for companies to decide to start, or continue, providing support when weighing the costs and potential risks of offering assistance.

Looking forward and inward 

The questions that have emerged from Ukraine’s experience of defense in and through the information environment are not limited to this context. Private companies have a role in armed conflict and that role seems likely to grow, along with the scale, complexity, and criticality of the information infrastructures they own and operate. Companies will, in some capacity, be participants in the battlespace. This is being demonstrated in real time, exposing gaps that the United States and its allies and partners must address in advance of future conflicts.

Russia's war on Ukraine has created an environment in which both public and private assistance in support of Ukrainian information infrastructure is motivated by a common aversion to Russian aggression, as well as a commitment to the stability and protection of the Ukrainian government and people. This war is not over, and despite any hopes to the contrary, similar aggressions will occur in new contexts and with new actors in the future. It is crucial that, in conjunction with examining and mitigating the risks related to the involvement of private technology companies in the war in Ukraine, the US government also examine these questions with regard to its own national security and defense.

The information environment is increasingly central to not just warfighting but also to the practice of governance and the daily life of populations around the world. Governments and populations live in part within that environment and therefore atop infrastructure that is owned and operated by the private sector. As adversaries seek to reshape the information environment to their own advantage, US and allied public and private sectors must confront the challenges of their existing interdependence. This includes defining in what form national security and defense plans in and through the information environment are dependent upon private companies, developing a better understanding of the differing incentive structures that guide private sector decision-making, and working in coordination with private companies to create a more resilient information infrastructure network through redundancy and diversification. It is difficult to know what forms future conflict and future adversaries will take, or the incentives that may exist for companies in those new contexts, but by better understanding the key role that private information and technology companies already play in this domain, the United States and allies can better prepare for future threats.

About the Authors 

Emma Schroeder is an associate director with the Atlantic Council’s Cyber Statecraft Initiative, within the Digital Forensic Research Lab, and leads the team’s work studying conflict in and through cyberspace. Her focus in this role is on developing statecraft and strategy for cyberspace that is useful for both policymakers and practitioners. Schroeder holds an MA in History of War from King’s College London’s War Studies Department and also attained her BA in International Relations & History from the George Washington University’s Elliott School of International Affairs. 

Sean Dack was a Young Global Professional with the Cyber Statecraft Initiative during the fall of 2022. He is now a Researcher at the NATO Parliamentary Assembly, where he focuses on the long-term strategic and economic implications of Russia’s invasion of Ukraine. Dack graduated from Johns Hopkins School of Advanced International Studies in December 2022 with his MA in Strategic Studies and International Economics. 

Acknowledgements 

The authors thank Justin Sherman, Gregory Rattray, and Gavin Wilde for their comments on earlier drafts of this document, and Trey Herr and the Cyber Statecraft team for their support. The authors also thank all the participants, who shall remain anonymous, in multiple Chatham House Rule discussions and one-on-one conversations about the issue.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    The Ministry of Foreign Affairs of the Russian Federation, Convention on International Information Security (2011), https://carnegieendowment.org/files/RUSSIAN-DRAFT-CONVENTION-ON-INTERNATIONAL-INFORMATION-SECURITY.pdf.
2    To learn more about Russian disinformation efforts against Ukraine and its allies, check out the Russian Narratives Reports from the Atlantic Council’s Digital Forensic Research Lab: Nika Aleksejeva et al., Andy Carvin ed., “Narrative Warfare: How the Kremlin and Russian News Outlets Justified a War of Aggression against Ukraine,” Atlantic Council, February 22, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/narrative-warfare/; Roman Osadchuk et al., Andy Carvin ed., “Undermining Ukraine: How the Kremlin Employs Information Operations to Erode Global Confidence in Ukraine,” Atlantic Council, February 22, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/undermining-ukraine/.
3    Previously, the term RuNet described Russian language portions of the global internet accessible anywhere in the world. However, since Russia passed a domestic internet law in May 2019, RuNet has come to refer to a technically isolated version of the internet that services users within the borders of Russia. Gavin Wilde and Justin Sherman, No Water’s Edge: Russia’s Information War and Regime Security, Carnegie Endowment for International Peace, January 4, 2023, https://carnegieendowment.org/2023/01/04/no-water-s-edge-russia-s-information-war-and-regime-security-pub-88644; Justin Sherman, Reassessing Runet: Russian Internet Isolation and Implications for Russian Cyber Behavior, Atlantic Council, July 7, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/reassessing-runet-russian-internet-isolation-and-implications-for-russian-cyber-behavior/.
4    Adam Satariano and Valerie Hopkins, “Russia, Blocked from the Global Internet, Plunges into Digital Isolation,” New York Times, March 7, 2022, https://www.nytimes.com/2022/03/07/technology/russia-ukraine-internet-isolation.html.
5    Gavin Wilde and Justin Sherman, No Water’s Edge: Russia’s Information War and Regime Security, Carnegie Endowment for International Peace, January 4, 2023, https://carnegieendowment.org/2023/01/04/no-water-s-edge-russia-s-information-war-and-regime-security-pub-88644; Stephen Blank, “Russian Information Warfare as Domestic Counterinsurgency,” American Foreign Policy Interests 35, no. 1 (2013): 31–44, https://doi.org/10.1080/10803920.2013.757946.
6    Gavin Wilde, Cyber Operations in Ukraine: Russia’s Unmet Expectations, Carnegie Endowment for International Peace, December 12, 2022, https://carnegieendowment.org/2022/12/12/cyber-operations-in-ukraine-russia-s-unmet-expectations-pub-88607.
7    Tetyana Bohdanova, “Unexpected Revolution: The Role of Social Media in Ukraine’s Euromaidan Uprising,” European View 13, no. 1: (2014), https://doi.org/10.1007/s12290-014-0296-4; Megan MacDuffee Metzger, and Joshua A. Tucker. “Social Media and EuroMaidan: A Review Essay,” Slavic Review 76, no. 1 (2017): 169–91, doi:10.1017/slr.2017.16
8    Jonathon Cosgrove, “The Russian Invasion of the Crimean Peninsula 2014–2015: A Post-Cold War Nuclear Crisis Case Study,” Johns Hopkins (2020), 11–13, https://www.jhuapl.edu/Content/documents/RussianInvasionCrimeanPeninsula.pdf.
9    Steven Pifer, Ukraine: Six Years after the Maidan, Brookings, February 21, 2020, https://www.brookings.edu/blog/order-from-chaos/2020/02/21/ukraine-six-years-after-the-maidan/.
10    Kenneth Geers, ed., Cyber War in Perspective: Russian Aggression Against Ukraine (Tallinn: NATO CCD COE Publications, 2015), 9; Keir Giles, “Russia and Its Neighbours: Old Attitudes, New Capabilities,” in Geers, Cyber War in Perspective, 25; ‘Кримські регіональні підрозділи ПАТ «Укртелеком» офіційно повідомляють про блокування невідомими декількох вузлів зв’язку на півострові’ [Ukrtelekom officially reports blocking of communications nodes on peninsula by unknown actors], Ukrtelekom, February 28, 2014, http://www.ukrtelecom.ua/presscenter/news/official?id=120327.
11    Pavel Polityuk and Jim Finkle, “Ukraine Says Communications Hit, MPs Phones Blocked,” Reuters, March 4, 2014, https://www.reuters.com/article/ukraine-crisis-cybersecurity/ukraine-says-communications-hit-mps-phones-blocked-idINL6N0M12CF20140304.
12    Jen Weedon, “Beyond ‘Cyber War’: Russia’s Use of Strategic Cyber Espionage and Information Operations in Ukraine,” in Geers, Cyber War in Perspective, 76; Liisa Past, “Missing in Action: Rhetoric on Cyber Warfare,” in Geers, Cyber War in Perspective, 91; “Ukrtelecom’s Crimean Sub-Branches Officially Report that Unknown People Have Seized Several Telecommunications Nodes in the Crimea,” Ukrtelecom, February 28, 2014, http://en.ukrtelecom.ua/about/news?id=120467; “Feb. 28 Updates on the Crisis in Ukraine,” New York Times, February 28, 2014, https://archive.nytimes.com/thelede.blogs.nytimes.com/2014/02/28/latest-updates-tensions-in-ukraine/?_r=0; “The Crimean Regional Units of PJSC ‘Ukrtelecom’ Officially Inform About the Blocking by Unknown Persons of Several Communication Nodes on the Peninsula,” Ukrtelecom, February 28, 2014, https://web.archive.org/web/20140305001208/, http://www.ukrtelecom.ua/presscenter/news/official?id=120327.
13    Polityuk and Finkle, “Ukraine Says Communications Hit”; John Leyden, “Cyber Battle Apparently under Way in Russia–Ukraine Conflict,” The Register, April 25, 2018, https://www.theregister.com/2014/03/04/ukraine_cyber_conflict/.
14    Fontugne, Ermoshina, and Aben, “The Internet in Crimea.”
15    Frédérick Douzet et al., “Measuring the Fragmentation of the Internet: The Case of the Border Gateway Protocol (BGP) During the Ukrainian Crisis,” 2020 12th International Conference on Cyber Conflict (CyCon), Tallinn, Estonia, May 26–29, 2020, 157-182, doi: 10.23919/CyCon49761.2020.9131726; Paul Mozur et al., “‘They Are Watching’: Inside Russia’s Vast Surveillance State,” New York Times, September 22, 2022, https://www.nytimes.com/interactive/2022/09/22/technology/russia-putin-surveillance-spying.html
16    Yaropolk Brynykh and Anastasiia Lykholat, “Occupied Crimea: Victims and Oppressors,” Freedom House, August 30, 2018, https://freedomhouse.org/article/occupied-crimea-victims-and-oppressors.
17    Halya Coynash, “Internet Providers Forced to Conceal Total FSB Surveillance in Occupied Crimea and Russia,” Kyiv Post, February 2, 2018, https://www.kyivpost.com/article/opinion/op-ed/halya-coynash-internet-providers-forced-conceal-total-fsb-surveillance-occupied-crimea-russia.html.
18    Joseph Cox, “Russia Built an Underwater Cable to Bring Its Internet to Newly Annexed Crimea,” VICE, August 1, 2014, https://www.vice.com/en/article/ypw35k/russia-built-an-underwater-cable-to-bring-its-internet-to-newly-annexed-crimea.
19    Cox, “Russia Built an Underwater Cable.”
20    Romain Fontugne, Ksenia Ermoshina, and Emile Aben, “The Internet in Crimea: A Case Study on Routing Interregnum,” 2020 IFIP Networking Conference, Paris, France, June 22–25, 2020, https://hal.archives-ouvertes.fr/hal-03100247/document.
21    Sebastian Moss, “How Russia Took over the Internet in Crimea and Eastern Ukraine,” Data Center Dynamics, January 12, 2023, https://www.datacenterdynamics.com/en/analysis/how-russia-took-over-the-internet-in-crimea-and-eastern-ukraine/; “Ukraine: Freedom on the Net 2018 Country Report,” Freedom House, 2019, https://freedomhouse.org/country/ukraine/freedom-net/2018.
22    “Crimea: Freedom in the World 2020 Country Report,” Freedom House, https://freedomhouse.org/country/crimea/freedom-world/2020.
23    Kim Zetter, “Inside the Cunning, Unprecedented Hack of Ukraine’s Power Grid,” Wired, March 3, 2016, https://www.wired.com/2016/03/inside-cunning-unprecedented-hack-ukraines-power-grid/; Andy Greenberg, “The Untold Story of Notpetya, the Most Devastating Cyberattack in History,” Wired, August 22, 2018, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.
24    “Special Report: Ukraine An Overview of Russia’s Cyberattack Activity in Ukraine,” Microsoft Digital Security Unit, April 27, 2022, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4Vwwd; Kyle Fendorf and Jessie Miller, “Tracking Cyber Operations and Actors in the Russia–Ukraine War,” Council on Foreign Relations, March 24, 2022, https://www.cfr.org/blog/tracking-cyber-operations-and-actors-russia-ukraine-war.
25    Jakub Przetacznik and Simona Tarpova, “Russia’s War on Ukraine: Timeline of Cyber-Attacks,” European Parliament, June 2022, https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733549/EPRS_BRI(2022)733549_EN.pdf; Catalin Cimpanu, “Hackers Deface Ukrainian Government Websites,” The Record, January 14, 2022, https://therecord.media/hackers-deface-ukrainian-government-websites/.
26    Tom Burt, “Malware Attacks Targeting Ukraine Government,” Microsoft, January 15, 2022, https://blogs.microsoft.com/on-the-issues/2022/01/15/mstic-malware-cyberattacks-ukraine-government/.
27    Roman Osadchuk, Russian Hybrid Threats Report: Evacuations Begin in Ukrainian Breakaway Regions, Atlantic Council, February 18, 2022, https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-hybrid-threats-report-evacuations-begin-in-ukrainian-breakaway-regions/#cyberattack; Sean Lyngaas and Tim Lister, “Cyberattack Hits Websites of Ukraine Defense Ministry and Armed Forces,” CNN, February 15, 2022, https://www.cnn.com/2022/02/15/world/ukraine-cyberattack-intl/index.html.
28    Microsoft, “Special Report Ukraine.”
29    "ESET Research: Ukraine Hit by Destructive Attacks Before and During the Russian Invasion with HermeticWiper and IsaacWiper," ESET, March 1, 2022, https://www.eset.com/int/about/newsroom/press-releases/research/eset-research-ukraine-hit-by-destructive-attacks-before-and-during-the-russian-invasion-with-hermet/; "Ukraine: Disk-Wiping Attacks Precede Russian Invasion," Symantec Threat Hunter Team, February 24, 2022, https://symantec-enterprise-blogs.security.com/blogs/threat-intelligence/ukraine-wiper-malware-russia; "Ukraine Computers Hit by Data-Wiping Software as Russia Launched Invasion," Reuters, February 24, 2022, https://www.reuters.com/world/europe/ukrainian-government-foreign-ministry-parliament-websites-down-2022-02-23/.
30    Britney Nguyen, “Telecom Workers in Occupied Parts of Ukraine Destroyed Software to Avoid Russian Control over Data and Communications,” Business Insider, June 22, 2022, https://www.businessinsider.com/telecom-workers-ukraine-destroyed-software-avoid-russian-control-2022-6; Net Blocks (@netblocks), “Confirmed: A major internet disruption has been registered across #Ukraine on national provider #Ukrtelecom; real-time network data show connectivity collapsing …,” Twitter, March 28, 2022, 10:38 a.m., https://twitter.com/netblocks/status/1508453511176065033; Net Blocks (@netblocks), “Update: Ukraine’s national internet provider Ukrtelecom has confirmed a cyberattack on its core infrastructure. Real-time network data show an ongoing and …,” Twitter, March 28, 2022 11:25 a.m., https://twitter.com/netblocks/status/1508465391244304389; Andrea Peterson, “Traffic at Major Ukrainian Internet Service Provider Ukrtelecom Disrupted,” The Record, March 28, 2022, https://therecord.media/traffic-at-major-ukrainian-internet-service-provider-ukrtelecom-disrupted/; James Andrew Lewis, Cyber War and Ukraine, Center for Strategic and International Studies, January 10, 2023, https://www.csis.org/analysis/cyber-war-and-ukraine.
31    Thomas Brewster, “As Russia Invaded, Hackers Broke into A Ukrainian Internet Provider. Then Did It Again As Bombs Rained Down,” Forbes, March 10, 2022, https://www.forbes.com/sites/thomasbrewster/2022/03/10/cyberattack-on-major-ukraine-internet-provider-causes-major-outages/?sh=51d16b9c6573.
32    “Global Communications: Services, Solutions and Satellite Internet,” ViaSat, accessed November 14, 2022, http://data.danetsoft.com/viasat.com; Matt Burgess, “A Mysterious Satellite Hack Has Victims Far beyond Ukraine,” Wired, March 23, 2022, https://www.wired.com/story/viasat-internet-hack-ukraine-russia/.
33    Michael Kan, “ViaSat Hack Tied to Data-Wiping Malware Designed to Shut down Modems,” PCMag, March 31, 2022, https://www.pcmag.com/news/viasat-hack-tied-to-data-wiping-malware-designed-to-shut-down-modems.
34    “Ka-Sat Network Cyber Attack Overview,” ViaSat, September 12, 2022, https://news.viasat.com/blog/corporate/ka-sat-network-cyber-attack-overview.
35    Lee Mathews, “ViaSat Reveals How Russian Hackers Knocked Thousands of Ukrainians Offline,” Forbes, March 31, 2022, https://www.forbes.com/sites/leemathews/2022/03/31/viasat-reveals-how-russian-hackers-knocked-thousands-of-ukrainians-offline/?sh=4683638b60d6; ViaSat, “Ka-Sat Network.”
36    ViaSat, “Ka-Sat Network.”
37    Andrea Valentina, “Why the Viasat Hack Still Echoes,” Aerospace America, November 2022, https://aerospaceamerica.aiaa.org/features/why-the-viasat-hack-still-echoes.
38    Juan Andres Guerrero-Saade and Max van Amerongen, “Acidrain: A Modem Wiper Rains down on Europe,” SentinelOne, April 1, 2022, https://www.sentinelone.com/labs/acidrain-a-modem-wiper-rains-down-on-europe/.
39    Guerrero-Saade and Van Amerongen, “Acidrain.”
40    Joe Uchill, “UK, US, and EU Attribute Viasat Hack Against Ukraine to Russia,” SC Media, June 23, 2022, https://www.scmagazine.com/analysis/threat-intelligence/uk-us-and-eu-attribute-viasat-hack-against-ukraine-to-russia; David E. Sanger and Kate Conger, “Russia Was Behind Cyberattack in Run-Up to Ukraine War, Investigation Finds,” New York Times, May 10, 2022, https://www.nytimes.com/2022/05/10/us/politics/russia-cyberattack-ukraine-war.html.
41    Kim Zetter, “ViaSat Hack ‘Did Not’ Have Huge Impact on Ukrainian Military Communications, Official Says,” Zero Day, September 26, 2022, https://zetter.substack.com/p/viasat-hack-did-not-have-huge-impact; “Satellite Outage Caused ‘Huge Loss in Communications’ at War’s Outset—Ukrainian Official,” Reuters, March 15, 2022, https://www.reuters.com/world/satellite-outage-caused-huge-loss-communications-wars-outset-ukrainian-official-2022-03-15/.
42    Reuters, "Satellite Outage."
43    Sean Lyngaas, "US Satellite Operator Says Persistent Cyberattack at Beginning of Ukraine War Affected Tens of Thousands of Customers," CNN, March 30, 2022, https://www.cnn.com/2022/03/30/politics/ukraine-cyberattack-viasat-satellite/index.html.
44    Zetter, “ViaSat Hack.”
45    Burgess, "A Mysterious Satellite Hack"; Zetter, "ViaSat Hack"; Valentino, "Why the ViaSat Hack."
46    Jurgita Lapienytė, “ViaSat Hack Impacted French Critical Services,” CyberNews, August 22, 2022, https://cybernews.com/news/viasat-hack-impacted-french-critical-services/
47    International Committee of the Red Cross, Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), 1125 UNTS 3 (June 8, 1977), accessed January 18, 2023, https://www.refworld.org/docid/3ae6b36b4.html; Zhanna L. Malekos Smith, “No ‘Bright-Line Rule’ Shines on Targeting Commercial Satellites,” The Hill, November 28, 2022, https://thehill.com/opinion/cybersecurity/3747182-no-bright-line-rule-shines-on-targeting-commercial-satellites/; Anaïs Maroonian, “Proportionality in International Humanitarian Law: A Principle and a Rule,” Lieber Institute West Point, October 24, 2022, https://lieber.westpoint.edu/proportionality-international-humanitarian-law-principle-rule/#:~:text=The%20rule%20of%20proportionality%20requires,destruction%20of%20a%20military%20objective; Travis Normand and Jessica Poarch, “4 Basic Principles,” The Law of Armed Conflict, January 1, 2017, https://loacblog.com/loac-basics/4-basic-principles/.
48    “Statement by Deputy Head of the Russian Delegation Mr. Konstantin Vorontsov at the Thematic Discussion on Outer Space (Disarmament Aspects) in the First Committee of the 77th Session of the Unga,” Permanent Mission of the Russian Federation to the United Nations, October 26, 2022, https://russiaun.ru/en/news/261022_v.
49    Mauro Vignati, “LABScon Replay: Are Digital Technologies Eroding the Principle of Distinction in War?” SentinelOne, November 16, 2022, https://www.sentinelone.com/labs/are-digital-technologies-eroding-the-principle-of-distinction-in-war/
50    Matt Burgess, “Russia Is Taking over Ukraine’s Internet,” Wired, June 15, 2022, https://www.wired.com/story/ukraine-russia-internet-takeover/.
51    Nino Kuninidze et al., “Interim Assessment on Damages to Telecommunication Infrastructure and Resilience of the ICT Ecosystem in Ukraine.”
52    Adam Satariano and Scott Reinhard, “How Russia Took Over Ukraine’s Internet in Occupied Territories,” The New York Times, August 9, 2022, https://www.nytimes.com/interactive/2022/08/09/technology/ukraine-internet-russia-censorship.html; https://time.com/6222111/ukraine-internet-russia-reclaimed-territory/  
53    Thomas Brewster, “The Last Days of Mariupol’s Internet,” Forbes, March 31, 2022, https://www.forbes.com/sites/thomasbrewster/2022/03/31/the-last-days-of-mariupols-internet/.
54    Matt Burgess, “Russia Is Taking over Ukraine’s Internet,” Wired, June 15, 2022, https://www.wired.com/story/ukraine-russia-internet-takeover/; Satariano and Reinhard, “How Russia Took.”
55    Vera Bergengruen, "The Battle for Control over Ukraine's Internet," Time, October 18, 2022, https://time.com/6222111/ukraine-internet-russia-reclaimed-territory/.
56    Herbert Lin, “Russian Cyber Operations in the Invasion of Ukraine,” Cyber Defense Review (Fall 2022): 35, https://cyberdefensereview.army.mil/Portals/6/Documents/2022_fall/02_Lin.pdf, Herb Lin, “The Emergence of Physically Mediated Cyberattacks?,” Lawfare, May 21, 2022, https://www.lawfareblog.com/emergence-physically-mediated-cyberattacks; “Invaders Use Blackmailing and Intimidation to Force Ukrainian Internet Service Providers to Connect to Russian Networks,” State Service of Special Communications and Information Protection of Ukraine, May 13, 2022, https://cip.gov.ua/en/news/okupanti-shantazhem-i-pogrozami-zmushuyut-ukrayinskikh-provaideriv-pidklyuchatisya-do-rosiiskikh-merezh; Satariano and Reinhard, “How Russia Took.”
57    Gian M. Volpicelli, “How Ukraine’s Internet Can Fend off Russian Attacks,” Wired, March 1, 2022, https://www.wired.com/story/internet-ukraine-russia-cyberattacks/; Satariano and Reinhard, “How Russia Took.” 
58    David R. Marples, “Russia’s War Goals in Ukraine,” Canadian Slavonic Papers 64, no. 2–3 (March 2022): 207–219, https://doi.org/10.1080/00085006.2022.2107837.
59    David Klepper, "Russian Propaganda 'Outgunned' by Social Media Rebuttals," AP News, March 4, 2022, https://apnews.com/article/russia-ukraine-volodymyr-zelenskyy-kyiv-technology-misinformation-5e884b85f8dbb54d16f5f10d105fe850; Marc Champion and Daryna Krasnolutska, "Ukraine's TV Comedian President Volodymyr Zelenskyy Finds His Role as Wartime Leader," Japan Times, June 7, 2022, https://www.japantimes.co.jp/news/2022/02/26/world/volodymyr-zelenskyy-wartime-president/; "Российское Телевидение Сообщило Об 'Бегстве Зеленского' Из Киева, Но Умолчало Про Жертвы Среди Гражданских" [Russian Television Reported on "Zelensky's Flight" from Kyiv but Kept Silent about Civilian Casualties], Агентство, October 10, 2022, https://web.archive.org/web/20221010195154/https://www.agents.media/propaganda-obstreli/.
60    To learn more about Russian disinformation efforts against Ukraine and its allies, check out the Russian Narratives Reports from the Atlantic Council’s Digital Forensic Research Lab:  Nika Aleksejeva et al., Andy Carvin ed., “Narrative Warfare: How the Kremlin and Russian News Outlets Justified a War of Aggression against Ukraine,” Atlantic Council, February 22, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/narrative-warfare/; Roman Osadchuk et al., Andy Carvin ed., “Undermining Ukraine: How the Kremlin Employs Information Operations to Erode Global Confidence in Ukraine,” Atlantic Council, February 22, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/undermining-ukraine/.
61    Олександр Янковський, "'Бояться Спротиву'. Для Чого РФ Захоплює Мобільний Зв'язок Та Інтернет На Херсонщині?" ["They Fear Resistance": Why Is Russia Seizing Mobile Communications and the Internet in the Kherson Region?], Радіо Свобода, May 7, 2022, https://www.radiosvoboda.org/a/novyny-pryazovya-khersonshchyna-okupatsiya-rosiya-mobilnyy-zvyazok-internet/31838946.html.
62    Volodymyr Zelenskyy, “Tell People in the Occupied Territories about Ukraine, That the Ukrainian Army Will Definitely Come—Address by President Volodymyr Zelenskyy,” President of Ukraine Official Website, June 13, 2022, https://www.president.gov.ua/en/news/govorit-lyudyam-na-okupovanih-teritoriyah-pro-ukrayinu-pro-t-75801. 
63    Satariano and Reinhard, “How Russia Took.”
64    Michael Sheldon, "Geolocating Russia's Indiscriminate Shelling of Kharkiv," DFRLab, March 1, 2022, https://medium.com/dfrlab/geolocating-russias-indiscriminate-shelling-of-kharkiv-deaccc830846; Michael Sheldon, "Kharkiv Neighborhood Experienced Ongoing Shelling Prior to February 28 Attack," DFRLab, February 28, 2022, https://medium.com/dfrlab/kharkiv-neighborhood-experienced-ongoing-shelling-prior-to-february-28-attack-f767230ad6f6; https://maphub.net/Cen4infoRes/russian-ukraine-monitor; Michael Sheldon (@Michael1Sheldon), "Damage to civilian houses in the Zalyutino neighborhood of Kharkiv. https://t.me/c/1347456995/38991 …," Twitter, February 27, 2022, 4:15 p.m., https://twitter.com/Michael1Sheldon/status/1498044130416594947; Michael Sheldon, "Missile Systems and Tanks Spotted in Russian Far East, Heading West," DFRLab, January 27, 2022, https://medium.com/dfrlab/missile-systems-and-tanks-spotted-in-russian-far-east-heading-west-6d2a4fe7717a; Jay in Kyiv (@JayinKyiv), "Not yet 24 hours after Ukraine devastated Russian positions in Kherson, a massive Russian convoy is now leaving Melitopol to replace them. This is on Alekseev …," Twitter, July 12, 2022, 7:50 a.m., https://twitter.com/JayinKyiv/status/1546824416218193921; "Eyes on Russia Map," Centre for Information Resilience, https://eyesonrussia.org/.
65    Katerina Sergatskova, What You Should Know About Life in the Occupied Areas in Ukraine, Wilson Center, September 14, 2022, https://www.wilsoncenter.org/blog-post/what-you-should-know-about-life-occupied-areas-ukraine; Jonathan Landay, “Village near Kherson Rejoices at Russian Rout, Recalls Life under Occupation,” Reuters, November 12, 2022, https://www.reuters.com/world/europe/village-near-kherson-rejoices-russian-rout-recalls-life-under-occupation-2022-11-11/.
66    Andrew Salerno-Garthwaite, “OSINT in Ukraine: Civilians in the Kill Chain and the Information Space,” Global Defence Technology 137 (2022), https://defence.nridigital.com/global_defence_technology_oct22/osint_in_ukraine; “How Has Open-Source Intelligence Influenced the War in Ukraine?” Economist, August 30, 2022, https://www.economist.com/ukraine-osint-pod; Gillian Tett, “Inside Ukraine’s Open-Source War,” Financial Times, July 22, 2022, https://www.ft.com/content/297d3300-1a65-4793-982b-1ba2372241a3; Amy Zegart, “Open Secrets,” Foreign Affairs, January 7, 2023, https://www.foreignaffairs.com/world/open-secrets-ukraine-intelligence-revolution-amy-zegart?utm_source=twitter_posts&utm_campaign=tw_daily_soc&utm_medium=social
67    Lin, “The Emergence.”
68    “Cyber Security Strategy of Ukraine,” Presidential Decree of Ukraine, March 15, 2016, https://ccdcoe.org/uploads/2018/10/NationalCyberSecurityStrategy_Ukraine.pdf.
69    Eric Geller, “Ukraine Prepares to Remove Data from Russia’s Reach,” POLITICO, February 22, 2022, https://www.politico.com/news/2022/02/22/ukraine-centralized-its-data-after-the-last-russian-invasion-now-it-may-need-to-evacuate-it-00010777.  
70    Kuninidze et al., “Interim Assessment.”
71    Kuninidze et al., “Interim Assessment.”
72    “Datagroup to Invest $20 Million into a Large-Scale Network Modernization Project in Partnership with Cisco,” Datagroup, April 8, 2021, https://www.datagroup.ua/en/novyny/datagrup-investuye-20-mln-dolariv-u-masshtabnij-proyekt-iz-m-314.
73    Lauriane Giet, “Eutech4ukraine—Cisco’s Contribution to Bring Connectivity and Cybersecurity to Ukraine and Skills to Ukrainian Refugees,” Futurium, June 22, 2022, https://futurium.ec.europa.eu/en/digital-compass/tech4ukraine/your-support-ukraine/ciscos-contribution-bring-connectivity-and-cybersecurity-ukraine-and-skills-ukrainian-refugees; “Communiqué de Presse Solidarité Européenne Envers l’Ukraine: Nouveau Convoi d’Équipements Informatiques,” Government of France, May 25, 2022, https://minefi.hosting.augure.com/Augure_Minefi/r/ContenuEnLigne/Download?id=4FFB30F8-F59C-45A0-979E-379E3CEC18AF&filename=06%20-%20Solidarit%C3%A9%20europ%C3%A9enne%20envers%20l%E2%80%99Ukraine%20-%20nouveau%20convoi%20d%E2%80%99%C3%A9quipements%20informatiques.pdf
74    Atlantic Council, "Ukraine's Digital Resilience: A conversation with Deputy Prime Minister of Ukraine Mykhailo Fedorov," December 2, 2022, YouTube video, https://www.youtube.com/watch?v=Vl75e0QU6uE.
75    “Digital Country—Official Website of Ukraine,” Ukraine Now (Government of Ukraine), accessed January 17, 2023, https://ukraine.ua/invest-trade/digitalization/; Atlantic Council, “Ukraine’s Digital Resilience.”
76    Brad Smith, “Extending Our Vital Technology Support for Ukraine,” Microsoft, November 3, 2022, https://blogs.microsoft.com/on-the-issues/2022/11/03/our-tech-support-ukraine/; “How Amazon Is Assisting in Ukraine,” Amazon, March 1, 2022, https://www.aboutamazon.com/news/community/amazons-assistance-in-ukraine; Phil Venables, “How Google Cloud Is Helping Those Affected by War in Ukraine,” Google, March 3, 2022, https://cloud.google.com/blog/products/identity-security/how-google-cloud-is-helping-those-affected-by-war-in-ukraine.
77    Simon Handler, Lily Liu, and Trey Herr, Dude, Where’s My Cloud? A Guide for Wonks and Users, Atlantic Council, July 7, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/report/dude-wheres-my-cloud-a-guide-for-wonks-and-users/.
78    Handler, Liu, and Herr, “Dude, Where’s My Cloud?” 
79    Brad Smith, “Defending Ukraine: Early Lessons from the Cyber War,” Microsoft On the Issues, November 2, 2022, https://blogs.microsoft.com/on-the-issues/2022/06/22/defending-ukraine-early-lessons-from-the-cyber-war/; Smith, “Extending Our Vital Technology.”
80    Amazon, “How Amazon Is Assisting”; Sebastian Moss, “Ukraine Awards Microsoft and AWS Peace Prize for Cloud Services and Digital Support,” Data Center Dynamics, January 12, 2023, https://www.datacenterdynamics.com/en/news/ukraine-awards-microsoft-and-aws-peace-prize-for-cloud-services-digital-support/; Venables, “How Google Cloud”; Kent Walker, “Helping Ukraine,” Google, March 4, 2022, https://blog.google/inside-google/company-announcements/helping-ukraine/.
81    Catherine Stupp, “Ukraine Has Begun Moving Sensitive Data Outside Its Borders,” Wall Street Journal, June 14, 2022, https://www.wsj.com/articles/ukraine-has-begun-moving-sensitive-data-outside-its-borders-11655199002; Atlantic Council, “Ukraine’s Digital Resilience”; Smith, “Defending Ukraine.”
82    Nick Beecroft, Evaluating the International Support to Ukrainian Cyber Defense, Carnegie Endowment for International Peace, November 3, 2022, https://carnegieendowment.org/2022/11/03/evaluating-international-support-to-ukrainian-cyber-defense-pub-88322.
83    Smith, “Defending Ukraine,” 5, 6, 9.
84    Smith, “Defending Ukraine,” 3, 11.
85    Thomas Brewster, “Bombs and Hackers Are Battering Ukraine’s Internet Providers. ‘Hidden Heroes’ Risk Their Lives to Keep Their Country Online,” Forbes, March 15, 2022, https://www.forbes.com/sites/thomasbrewster/2022/03/15/internet-technicians-are-the-hidden-heroes-of-the-russia-ukraine-war/?sh=be5da1428844.
86    Kuninidze et al., “Interim Assessment,” 40.
87    Kuninidze et al., "Interim Assessment," 40; "Київстар Виділяє 300 Мільйонів Гривень Для Відновлення Цифрової Інфраструктури України" [Kyivstar Allocates 300 Million Hryvnias to Restore Ukraine's Digital Infrastructure], Київстар, July 4, 2022, https://kyivstar.ua/uk/mm/news-and-promotions/kyyivstar-vydilyaye-300-milyoniv-gryven-dlya-vidnovlennya-cyfrovoyi.
88    Київстар, “Київстар Виділяє”; “Mobile Connection Lifecell—Lifecell Ukraine,” Lifecell UA, accessed January 17, 2023, https://www.lifecell.ua/en/.
89    Ryan Gallagher, "Russia–Ukraine War: Telecom Workers Damage Own Equipment to Thwart Russia," Bloomberg, June 21, 2022, https://www.bloomberg.com/news/articles/2022-06-21/ukrainian-telecom-workers-damage-own-equipment-to-thwart-russia.
90    Mykhailo Fedorov (@FedorovMykhailo), Twitter, February 26, 2022, 7:06 a.m., https://twitter.com/FedorovMykhailo/status/1497543633293266944?s=20&t=c9Uc7CDXEBr-e5-nd2hEtw.
91    Mykhailo Fedorov (@FedorovMykhailo), “Starlink — here. Thanks, @elonmusk,” Twitter, February 28, 2022, 3:19 p.m., https://twitter.com/FedorovMykhailo/status/1498392515262746630?s=20&t=vtCM9UqgWRkfxfrEHzYTGg
92    Atlantic Council, “Ukraine’s Digital Resilience.”
93    “How Elon Musk’s Satellites Have Saved Ukraine and Changed Warfare,” Economist, January 5, 2023, https://www.economist.com/briefing/2023/01/05/how-elon-musks-satellites-have-saved-ukraine-and-changed-warfare.
94    Alexander Freund, “Ukraine Using Starlink for Drone Strikes,” Deutsche Welle, March 27, 2022, https://www.dw.com/en/ukraine-is-using-elon-musks-starlink-for-drone-strikes/a-61270528.
95    Mykhailo Fedorov (@FedorovMykhailo), “Over 100 cruise missiles attacked 🇺🇦 energy and communications infrastructure. But with Starlink we quickly restored the connection in critical areas. Starlink …,” Twitter, October 12, 2022 3:12 p.m., https://twitter.com/FedorovMykhailo/status/1580275214272802817.
96    Rishi Iyengar, “Why Ukraine Is Stuck with Elon (for Now),” Foreign Policy, November 22, 2022, https://foreignpolicy.com/2022/11/22/ukraine-internet-starlink-elon-musk-russia-war/.
97    Economist, “How Elon Musk’s.”
98    Freund, “Ukraine Using Starlink”; Nick Allen and James Titcomb, “Elon Musk’s Starlink Helping Ukraine to Win the Drone War,” Telegraph, March 18, 2022, https://www.telegraph.co.uk/world-news/2022/03/18/elon-musks-starlink-helping-ukraine-win-drone-war/; Charlie Parker, “Specialist Ukrainian Drone Unit Picks off Invading Russian Forces as They Sleep,” Times, March 18, 2022, https://www.thetimes.co.uk/article/specialist-drone-unit-picks-off-invading-forces-as-they-sleep-zlx3dj7bb.
99    Matthew Gault, “Mysterious Sea Drone Surfaces in Crimea,” Vice, September 26, 2022, https://www.vice.com/en/article/xgy4q7/mysterious-sea-drone-surfaces-in-crimea.
100    Economist, “How Elon Musk’s.”  
101    Akash Sriram, “SpaceX, USAID Deliver 5,000 Satellite Internet Terminals to Ukraine Akash Sriram,” Reuters, April 6, 2022, https://www.reuters.com/technology/spacex-usaid-deliver-5000-satellite-internet-terminals-ukraine-2022-04-06/.
102    Alex Marquardt, “Exclusive: Musk’s Spacex Says It Can No Longer Pay for Critical Satellite Services in Ukraine, Asks Pentagon to Pick up the Tab,” CNN, October 14, 2022, https://www.cnn.com/2022/10/13/politics/elon-musk-spacex-starlink-ukraine.  
103    Elon Musk (@elonmusk), “Ukraine-Russia Peace: – Redo elections of annexed regions under UN supervision. Russia leaves if that is will of the people. – Crimea formally part of Russia, as it has been since 1783 (until …” Twitter, October 3, 2022 12:15 p.m., https://twitter.com/elonmusk/status/1576969255031296000; Andrij Melnyk (@MelnykAndrij), Twitter, October 3, 2022, 12:46 p.m., https://twitter.com/MelnykAndrij/status/1576977000178208768.
104    Elon Musk (@elonmusk), Twitter, October 14, 2022, 3:14 a.m., https://twitter.com/elonmusk/status/1580819437824839681; Elon Musk (@elonmusk), Twitter, October 15, 2022, 2:06 p.m., https://twitter.com/elonmusk/status/1581345747777179651.
105    Elon Musk (@elonmusk), Twitter, October 17, 2022, 3:52 p.m., https://twitter.com/elonmusk/status/1582097354576265217; Sawyer Merrit (@SawyerMerritt), “BREAKING: The Pentagon is considering paying for @SpaceX ‘s Starlink satellite network — which has been a lifeline for Ukraine — from a fund that has been used …,” Twitter, October 17, 2022, 3:09 p.m., https://twitter.com/SawyerMerritt/status/1582086349305262080.
106    Alex Marquardt and Sean Lyngaas, "Ukraine Suffered a Comms Outage When 1,300 SpaceX Satellite Units Went Offline over Funding Issues," CNN, November 7, 2022, https://www.cnn.com/2022/11/04/politics/spacex-ukraine-elon-musk-starlink-internet-outage/; Iyengar, "Why Ukraine Is Stuck."
107    Ryan Browne, “Ukraine Government Is Seeking Alternatives to Elon Musk’s Starlink, Vice PM Says,” CNBC, November 3, 2022, https://www.cnbc.com/2022/11/03/ukraine-government-seeking-alternatives-to-elon-musks-starlink.html.
108    William Harwood, “SpaceX Launches 40 OneWeb Broadband Satellites, Lighting up Overnight Sky,” CBS News, January 10, 2023, https://www.cbsnews.com/news/spacex-launches-40-oneweb-broadband-satellites-in-overnight-spectacle/.
109    Marquardt and Lyngaas, “Ukraine Suffered”; Mehul Srivastava et al., “Ukrainian Forces Report Starlink Outages During Push Against Russia,” Financial Times, October 7, 2022, https://www.ft.com/content/9a7b922b-2435-4ac7-acdb-0ec9a6dc8397.
110    Alex Marquardt and Kristin Fisher, "SpaceX admits blocking Ukrainian troops from using satellite technology," CNN, February 9, 2023, https://www.cnn.com/2023/02/09/politics/spacex-ukrainian-troops-satellite-technology/index.html.
111    Charles R. Davis, “Elon Musk Blocked Ukraine from Using Starlink in Crimea over Concern that Putin Could Use Nuclear Weapons, Political Analyst Says,” Business Insider, October 11, 2022, https://www.businessinsider.com/elon-musk-blocks-starlink-in-crimea-amid-nuclear-fears-report-2022-10; Elon Musk (@elonmusk), Twitter, February 12, 2022, 4:00 p.m., https://twitter.com/elonmusk/status/1624876021433368578.
112    Mykhailo Fedorov (@FedorovMykhailo), “In 21 days of the war, russian troops has already killed 100 Ukrainian children. they are using DJI products in order to navigate their missile. @DJIGlobal are you sure you want to be a …,” Twitter, March 16, 2022, 8:14 a.m., https://twitter.com/fedorovmykhailo/status/1504068644195733504; Cat Zakrzewski, “4,000 Letters and Four Hours of Sleep: Ukrainian Leader Wages Digital War,” Washington Post, March 30, 2022, https://www.washingtonpost.com/technology/2022/03/30/mykhailo-fedorov-ukraine-digital-front/
113    DJI Global (@DJIGlobal), “Dear Vice Prime Minister Federov: All DJI products are designed for civilian use and do not meet military specifications. The visibility given by AeroScope and further Remote ID …,” Twitter, March 16, 2022, 5:42 p.m., https://twitter.com/DJIGlobal/status/1504206884240183297
114    Mehul Srivastava and Roman Olearchyk, “Starlink Prices in Ukraine Nearly Double as Mobile Networks Falter,” Financial Times, November 29, 2022, https://www.ft.com/content/f69b75cf-c36a-4ab3-9eb7-ad0aa00d230c.
115    Iyengar, “Why Ukraine Is Stuck.”
116    Michael Sheetz, “SpaceX Raises Another $250 Million in Equity, Lifts Total to $2 Billion in 2022,” CNBC, August 5, 2022, https://www.cnbc.com/2022/08/05/elon-musks-spacex-raises-250-million-in-equity.html.
117    "Starshield," SpaceX, accessed January 17, 2023, https://www.spacex.com/starshield/; Micah Maidenberg and Drew FitzGerald, "Elon Musk's Spacex Courts Military with New Starshield Project," Wall Street Journal, December 8, 2022, https://www.wsj.com/articles/elon-musks-spacex-courts-military-with-new-starshield-project-11670511020.
118    “Maps: Tracking the Russian Invasion of Ukraine,” New York Times, February 14, 2022, https://www.nytimes.com/interactive/2022/world/europe/ukraine-maps.html#:~:text=Ukraine%20has%20reclaimed%2054%20percent,for%20the%20Study%20of%20War; Júlia Ledur, Laris Karklis, Ruby Mellen, Chris Alcantara, Aaron Steckelberg and Lauren Tierney, “Follow the 600-mile front line between Ukrainian and Russian forces,” The Washington Post, February 21, 2023, https://www.washingtonpost.com/world/interactive/2023/russia-ukraine-front-line-map/.
119    Jimmy Rushton (@JimmySecUK), “Ukrainian soldiers deploying a Starlink satellite internet system in liberated Kherson, allowing local residents to communicate with their relatives in other areas of Ukraine,” Twitter, November 12, 2022, 8:07 a.m., https://twitter.com/JimmySecUK/status/1591417328134402050; José Andrés (@chefjoseandres), “@elonmusk While I don’t agree with you about giving voice to people that brings the worst out of all of us, thanks for @SpaceXStarlink in Kherson, a city with no electricity, or in a train from …,” Twitter, November 20, 2022, 1:58 a.m., https://twitter.com/chefjoseandres/status/1594223613795762176.
120    Mykhailo Fedorov (@FedorovMykhailo), “Every front makes its contribution to the upcoming victory. These are Anatoliy, Viktor, Ivan and Andrii from @Vodafone_UA team, who work daily to restore mobile and Internet communications …,” Twitter, April 25, 2022, 1:13 p.m., https://twitter.com/FedorovMykhailo/status/1518639261624455168; Mykhailo Fedorov (@FedorovMykhailo), “Can you see a Starlink? But it’s here. While providers are repairing cable damages, Gostomel’s humanitarian headquarter works via the Starlink. Thanks to @SpaceX …,” Twitter, May 8, 2022, 9:48 a.m., https://twitter.com/FedorovMykhailo/status/1523298788794052615.
121    Thomas Brewster, “Ukraine’s Engineers Dodged Russian Mines to Get Kherson Back Online–with a Little Help from Elon Musk’s Satellites,” Forbes, November 18, 2022, https://www.forbes.com/sites/thomasbrewster/2022/11/18/ukraine-gets-kherson-online-after-russian-retreat-with-elon-musk-starlink-help/?sh=186e24b0ef1e.  
122    Mark Didenko, ed., “Ukrtelecom Car Hits Landmine in Sumy Region, One Dead, Three Injured,” Yahoo!, October 2, 2022, https://www.yahoo.com/video/ukrtelecom-car-hits-landmine-sumy-104300649.html.
123    Vera Bergengruen, “The Battle for Control over Ukraine’s Internet,” Time, October 18, 2022, https://time.com/6222111/ukraine-internet-russia-reclaimed-territory/.
124    Bergengruen, “The Battle for Control over Ukraine’s Internet.”
125    Atlantic Council, “Ukraine’s Digital Resilience: A conversation with Deputy Prime Minister of Ukraine Mykhailo Fedorov,” December 2, 2022, YouTube video, https://www.youtube.com/watch?v=Vl75e0QU6uE; “Keeping connected: connectivity resilience in Ukraine,” EU4Digital, February 13, 2022, https://eufordigital.eu/keeping-connected-connectivity-resilience-in-ukraine/.
126    Greg Rattray, Geoff Brown, and Robert Taj Moore, “The Cyber Defense Assistance Imperative Lessons from Ukraine,” The Aspen Institute, February 16, 2023, https://www.aspeninstitute.org/wp-content/uploads/2023/02/Aspen-Digital_The-Cyber-Defense-Assistance-Imperative-Lessons-from-Ukraine.pdf, 8
127    CRDF Global, “CRDF Global becomes Platform for Cyber Defense Assistance Collaborative (CDAC) for Ukraine,” News 19, November 14, 2022, https://whnt.com/business/press-releases/cision/20221114DC34776/crdf-global-becomes-platform-for-cyber-defense-assistance-collaborative-cdac-for-ukraine/; Dina Temple-Raston, “EXCLUSIVE: Rounding Up a Cyber Posse for Ukraine,” The Record, November 18, 2022, https://therecord.media/exclusive-rounding-up-a-cyber-posse-for-ukraine/; Rattray, Brown, and Moore, “The Cyber Defense Assistance Imperative Lessons from Ukraine.” 
128    Beecroft, Evaluating the International Support.
129    Lee Hudson, “‘There’s Not Just SpaceX’: Pentagon Looks Beyond Starlink after Musk Says He May End Services in Ukraine,” POLITICO, October 14, 2022, https://www.politico.com/news/2022/10/14/starlink-ukraine-elon-musk-pentagon-00061896.

The 5×5—Strengthening the cyber workforce https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-strengthening-the-cyber-workforce/ Thu, 23 Feb 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=613977 Experts provide insights into ways for the United States and its allies to bolster their cyber workforces.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

On July 19, 2022, the White House convened leaders from industry, government, and academia at its National Cyber Workforce and Education Summit. In his remarks at the Summit, recently departed National Cyber Director Chris Inglis committed to developing a National Cyber Workforce and Education Strategy with input from relevant stakeholders to align government resources and efforts toward addressing the many challenges in this area. Among these challenges is finding sufficient talent to fill the United States' ever-growing number of openings for cyber-related roles across all sectors of the economy. According to research from CyberSeek, US employers posted 714,548 of these job openings in the year leading up to April 2022. While many of the vacancies are oriented toward individuals who are savvy in the more technical aspects of cybersecurity, more organizations are searching for multidisciplinary talent, ranging from international affairs to project management and everything in between.

While we await the White House’s National Cyber Workforce and Education Strategy, we brought together a group of experts to provide insights into bolstering the cyber workforces of the United States and its allies.

#1 What is one assumption about the cyber workforce that is holding the cyber community back?

Nelson Abbott, senior director, advanced program operations, NPower

“‘We cannot find good talent.’ This sentiment is, in my opinion, a result of companies not broadening their talent acquisition strategies. You will not meet the increasing demand for cyber talent by using the same talent pipelines that are not increasing their output to market.” 

Richard Harris, principal cybersecurity policy engineer, MITRE Corporation

“One problematic assumption is that the market, academia, or government alone can solve the problem of cyber workforce shortages. Developing cyber workforces at the right time, in the right quantities, and with the right skills requires purposeful and persistent public, private, and academic partnerships.” 

Ayan Islam, director, cyber workforce, Office of the National Cyber Director

“There is an assumption that there is a single pathway into the cyber workforce when there are many pathways to recruit cyber workforce talent. To open the job pipeline to those for whom a career in cyber or a related field would be out of reach, new pathways need to be created. We need to fully leverage the potential for community colleges to contribute to the workforce, grow work-based learning programs such as apprenticeships, and further explore non-traditional training opportunities. While some exist today, we need many more pathways to allow for more entrants and career changers into the cyber workforce and to demystify those pathways.” 

Eric Novotny, Hurst professor of international relations, emeritus, School of International Service, American University

“One assumption that I have noticed in employment advertising is the posting of entry-level positions in which the Certified Information Systems Security Professional (CISSP) certification is listed as necessary or desirable. This certification, as is well-known in the community, is a cybersecurity management certification that requires five years of experience in the domain. It may be that human resources representatives do not understand the levels or purpose of cybersecurity certifications. Some organizations may lose qualified job candidates if desired certifications are not aligned with job requirements.” 

Merili Soosalu, partner leader and regional coordinator for Latin America and the Caribbean, EU Cyber Resilience for Development Project (Cyber4Dev), Information System Authority of Estonia (RIA)

“Cybersecurity as a topic is on its way to the mainstream. In the more and more digitalized world, cybersecurity is an integral aspect that cannot be overlooked. This should also be reflected in the outlooks of cyber careers that do not only mean highly experienced technical skills but rather a variety of professions and skillsets from the areas of project management and communications to the highly skilled blue- and red-team competencies.”

#2 What government or industry-led programs have had an outsized positive impact on workforce development efforts?

Abbott: “I am of the opinion that there have not been ‘outsized’ positive impacts. There are a lot of great companies and organizations doing good work (NPower, Per Scholas, etc.), but they do not have the capacity to meet the exponential growth in demand for talent. The recent cybersecurity sprint was good to develop interest in that alternative hiring model, but it is still too early to see what the measurable results are.” 

Harris: “Some of the most successful workforce development programs have been in local communities. These programs were the result of local businesses, governments, and academic institutions putting their heads together to meet cybersecurity and other technical skill needs. While these efforts help keep people in their communities, they also support workforce mobility where these same skills are in demand outside of the local community.” 

Islam: “With over seven hundred thousand (approximately 756,000 as of December 2022, per CyberSeek.org) vacancies in cybersecurity positions across the United States, these numbers constitute a national security risk and must be tackled aggressively. Therefore, it is important for government, industry, education, and training providers to all contribute to workforce development efforts, and work in tandem to address our growing needs. For example, the Office of National Cyber Director hosted a National Cyber Workforce and Education Summit at the White House last summer with government and private sector partners to discuss building the United States’ cyber workforce, increasing skill-based pathways to cyber careers, and equipping Americans to thrive in our increasingly digital society. The event resulted in many new commitments. A cybersecurity apprenticeship sprint was also announced at the Summit, which led to an increase in private-sector participation in the Department of Labor’s apprenticeship program, with 194 new registered participants and over seven thousand apprentices getting jobs.” 

Novotny: “Sponsored events to attract new talent into the field, such as Cyber 9/12, AvengerCon, and various Capture the Flag (CTF) exercises are invaluable for stimulating interest in cybersecurity and exposing students and young professionals to executives and experts in the field.” 

Soosalu: “In Estonia in recent years, many positive initiatives have been developed for different age groups. For instance, for adults looking to change their careers to information technology (IT), the Kood/Jõhvi, an international coding school, was created and top IT specialists should enter to the job market in the coming months. A private initiative called Unicorn Squad was created in 2018 to popularize technology education among girls. These initiatives, to name some, will hopefully show positive effects in the coming years. The Estonian State Systems Authority, responsible for national cybersecurity, prioritizes the knowledge development of cyber incidents of critical sectors by regularly organizing joint exercises between the national Computer Emergency Response Team (CERT) and the IT teams of different critical service providers.”

#3 Are there any issues or challenges in workforce development that have been overstated or immaterial?

Abbott: “‘Anyone can do cyber.’ While it is true that there is a much broader spectrum of roles in cyber than most people realize (non-technical; governance, risk management, and compliance; policy; etc.), these still require a strong working knowledge of information technology and networking concepts.” 

Harris: “Many people need to move beyond wringing their hands about cyber workforce shortages or hoping that someone else will solve the problem. Organizations can start at the grassroots level and proactively develop partnerships and plans that result in a tangible workforce development achievement at whatever level is feasible, and then build on that success.” 

Islam: “Actually, what is understated and greatly material to the issue and challenge in cyber workforce development is the lack of appropriate resourcing and C-suite appreciation with security program investments. There is still a disconnect in recognizing that cybersecurity is a foundational business risk and not a one-time, niche issue. Without proper investments on the people side of security programs, we will continue to see the same issues or challenges in tackling cybersecurity threats.” 

Novotny: “There are some misconceptions that cybersecurity is an exclusively IT-driven, technical field. That is certainly true for some roles and responsibilities, but cybersecurity solutions also embrace people and processes, as well as technology.  Professionals with highly developed technical skills will need to include management and people skills in their career development.” 

Soosalu: “Today, all studies show that the IT sector, cybersecurity in particular, lacks a qualified workforce. Therefore, all challenges are real and need to be tackled.”


#4 How can different types of organizations better assess their cyber talent needs?

Abbott: “By 1) moving from credential-based job descriptions to competency-based job descriptions; 2) better communicating between hiring managers and talent-acquisition teams; 3) changing job descriptions to remove bias and non-negotiable requirements to encourage more candidates to apply; and 4) considering internal upskilling programs and backfilling entry-level roles with new talent.” 

Harris: “The National Institute of Standards and Technology’s (NIST) National Initiative for Cybersecurity Education (NICE) Framework is an awesome baseline reference for understanding workforce positions and skills. Organizations, however, must do the work to understand their current and future cyber talent needs, then leverage the NICE Framework, or a similar guide, to connect those business needs with the right positions and skill paths, and build a workforce development plan.” 

Islam: “A growing number of organizations are taking advantage of skill-based and aptitude assessments to allow for diverse and multidisciplinary candidates to join the cyber workforce. However, skill-based training and hiring practices are still necessary. Any solution must be inclusive of historically untapped talent, including underserved areas and neurodivergent populations. A cybersecurity career should be within reach for any American who wishes to pursue it, and skills-based training and hiring practices enable inclusive outcomes, give workers a fair shot, and keep the economy strong.” 

Novotny: “The size of the existing IT and cybersecurity internal infrastructure plays a huge role here. Medium and small enterprises will have a more difficult time justifying a large cybersecurity staff in most cases. For these organizations, where many cybersecurity functions are outsourced, the skills shift to management and procurement, rather than technical operations, such as staffing a security operations center. In the government sector, having different standards and compliance rules than in the private sector also drives different necessary skill sets. On the other hand, I would argue that any organization that has network operations and valuable information assets to protect has similar security requirements in principle.” 

Soosalu: “For assessing needs, some forms of standards are needed. In the European Union, the new European Cybersecurity Skills Framework (ECSF) was created to become a useful tool to help identify the profiles and skills that are most needed and valued. This will help create a European framework for recognizing skills and training programs.”

#5 How have cyber workforce needs shifted in the past five years, and where do you see them going from here?

Abbott: “They have only increased, and almost doubled in 2022. More companies are taking cybersecurity seriously, and are now realizing the importance of having those individuals on their teams. I fear that the demand for cyber talent will only continue unless employers start to create new solutions instead of relying on old habits when it comes to talent acquisition.” 

Harris: “Rapid technological change like the current artificial intelligence revolution, and increasingly complex risk dynamics exemplified by greater cyber-physical convergence, require cyber workforces and individuals to embrace continuous learning throughout their careers. More attention needs to be paid to developing interesting and flexible cyber career paths and investing in more career progression training and education.” 

Islam: “We need to broaden our thinking about the importance of cyber across occupations and professions in our interconnected society. There are many occupations and professions that have not traditionally required in-depth cybersecurity knowledge or training, but whose work relies on the use of cyber technologies. Greater attention should be paid to ensuring that cybersecurity training and education are part of the professional preparation of these workers.” 

Novotny: “Several broad trends are noticeable in workforce requirements that have changed over time. First, as more sectors of the economy are identified as critical infrastructure, professionals that have industry sector experience are in higher demand.  Second, the cyber threat intelligence business—in both government and in the private sector—has opened job opportunities for young professionals with language and international relations education. Third, there is an apparent fusion of traditional cybersecurity needs with a growing concern about misinformation, social media, and privacy. A few years ago, these latter issues were largely separate from the cybersecurity domain. That is not the case today.” 

Soosalu: “Estonia was the target of one of the first ever national cyberattacks in 2007, and therefore cybersecurity as an issue is not new to our general public. However, being one of the most digitalized countries in the world, Estonia relies heavily on its digital services and needs to both create awareness and invest in being as cyber resilient as possible. The lack of a skilled workforce is clearly a vector of risk. Compared to the past five years, the legislation has evolved. Today, many more sectors are obliged to follow information and cybersecurity standards, hire information security officers, and invest budget into dealing with cybersecurity. The topic of cybersecurity is here to stay, and we will need to do our utmost to create an interested and competent workforce for these profiles. Hopefully, the initiatives named above (Question #2) will help to contribute to this, and we will soon see more women and more IT and cyber enthusiasts in the job market.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Strengthening the cyber workforce appeared first on Atlantic Council.

Soofer quoted in Radio Free Asia on North Korean missile developments https://www.atlanticcouncil.org/insight-impact/in-the-news/soofer-quoted-in-radio-free-asia-on-north-korean-missile-developments/ Thu, 09 Feb 2023 19:42:53 +0000 https://www.atlanticcouncil.org/?p=616700 On February 9, Forward Defense Senior Fellow Dr. Robert Soofer was quoted in an article by Radio Free Asia on the recent display of long range missiles by North Korea (DPRK) on February 8th during a military parade to mark the 75th Anniversary of the founding of the DPRK’s army. Soofer stressed the ‘alarming’ development […]

The post Soofer quoted in Radio Free Asia on North Korean missile developments appeared first on Atlantic Council.


On February 9, Forward Defense Senior Fellow Dr. Robert Soofer was quoted in an article by Radio Free Asia on the recent display of long-range missiles by North Korea (DPRK) during a February 8 military parade marking the 75th anniversary of the founding of the DPRK’s army. Soofer stressed the ‘alarming’ development of the DPRK’s nuclear arsenal and missile capabilities.

We know they have the missiles and the nuclear warheads. We don’t know for certain whether they can successfully reach the U.S. homeland and survive reentry into the atmosphere…Implications are big for U.S. homeland missile defense

Robert Soofer

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The post Soofer quoted in Radio Free Asia on North Korean missile developments appeared first on Atlantic Council.

Avoiding the success trap: Toward policy for open-source software as infrastructure https://www.atlanticcouncil.org/in-depth-research-reports/report/open-source-software-as-infrastructure/ Wed, 08 Feb 2023 14:25:07 +0000 https://www.atlanticcouncil.org/?p=603755 Open-source software (OSS) sits at the center of almost every digital technology moving the world since the early 1980s—laptops, cellphones, widespread internet connectivity, cloud computing, social media, automation, all the rainbow flavors of e-commerce, and even secure communications and anti-censorship tools.

The post Avoiding the success trap: Toward policy for open-source software as infrastructure appeared first on Atlantic Council.


This report was drafted in collaboration with the Open Source Policy Network, a network of OSS developers, maintainers, and stakeholders convened by the Atlantic Council’s Cyber Statecraft Initiative to develop community-led strategy and policy recommendations for OSS.

Executive summary

High-profile security incidents involving open-source software (OSS) have brought the ubiquity of OSS and the unique challenges its communities face to the attention of policymakers in the United States, EU, and beyond. For policymakers seeking to support the security and sustainability of OSS as a shared resource, this report builds on an important perspective on open-source software: OSS as Infrastructure. OSS is code published under a license that allows anyone to inspect, modify, and re-distribute the source code. This helps developers share and re-use solutions to common problems, creating such efficiencies that some estimate that 97 percent of software depends on OSS. OSS ranges from small components for illustrating graphs to entire operating systems. Contributors include individuals working in their free time, staff at large companies, foundations, and many others. The ecosystem is community-based, with many governance structures to manage contributions and maintenance.

This report compares OSS to three infrastructure systems—water management systems, capital markets, and networks of roads and bridges—and draws on existing policy vehicles from each to suggest policy that supports the sustainability and security of OSS as a communally beneficial resource.

Software borrows metaphors from water systems, including “upstream” and “downstream” relationships between packages and the end products that rely on them. Entities that use water from the ground or rivers do not assume its potability or perpetual availability—instead, they ensure the water is fit for their varying needs. OSS consumers have a responsibility to ensure the OSS they consume is well supported and secure, and the largest OSS users have the most responsibility for supporting ecosystem sustainability. OSS also bears similarity to capital markets, facing compounding, systemic risks, as chains of software dependencies can make a single OSS project a point of failure for many downstream systems. These risks intensify when there is little transparency or accurate reporting available to consumers—or regulators—to evaluate and mitigate risk. Finally, OSS has previously been compared to roads and bridges, and this bears out in the manner that insufficient investment in ongoing support creates risk over time. The collapse of a bridge—or the discovery of a vulnerability in a widely used OSS package—can focus attention and investment, but continuous, mundane maintenance to prevent such crises often falls by the wayside.

Taken together, these infrastructure systems—and the policy vehicles that support them—provide key principles for policymakers looking to support open-source software as infrastructure:

Encouraging responsible OSS consumption:

  1. Get government to “walk the walk” of being a responsible OSS consumer by establishing one or more Open Source Program Offices in the federal government to help agencies manage their OSS strategy, policy, and relationships.
  2. Develop an OSS Best Practices framework through NIST that incorporates risk assessments and contribution back to the OSS ecosystem. Industry and government could use the framework for self-assessment, and government could use it to help inform procurement evaluations.
  3. Develop, through OSS-mature companies and nonprofits, a standard of best practices for contributing to OSS to bring in more OSS Good Samaritans from smaller organizations.

Mitigating Systemic Risks:

  1. Create an Office of Digital Systemic Risk Management (ODSRM) within the Cybersecurity and Infrastructure Security Agency to identify systemic digital risks, including key widely used and at-risk OSS packages for targeted support.

Providing resources with security and sustainability in mind:

  1. Establish a target-of-opportunity funding program to support maintenance and incident-response work for systemically important OSS projects.
  2. Establish an OSS Trust Fund to provide sustainable and long-lasting investments in the security and maintenance of OSS code and the health and size of OSS maintainer communities.
  3. Develop an adopt-a-package program through which companies provide resources to support ongoing maintenance and vulnerability mitigation for OSS packages they depend on. Such a program could encourage more small and non-IT-sector companies to take part.

1. Introduction

Open-source software (OSS) sits at the center of almost every digital technology moving the world since the early 1980s—laptops, cellphones, widespread internet connectivity, cloud computing, social media, automation, all the rainbow flavors of e-commerce, and even secure communications and anti-censorship tools. OSS, developed without exclusive ownership by globe-spanning communities, has enabled engineers, scientists, and entrepreneurs alike to build great things and make momentous technological advances.

Much like the transcontinental rail systems of the nineteenth century and the intermodal shipping container system of the twentieth, OSS is an infrastructure that enables and shapes social, political, and economic activity across the world. Like the shipping container system and more than the highly visible railroad, OSS has long gone underrecognized outside of expert communities for the influence its code and developers have on the world.

That lack of recognition began changing only recently as OSS has come to the fore outside technology communities, with interest from philanthropic investors and grantmaking as well as congressional hearings after the December 2021 log4shell vulnerability.1 2 The challenge with much of this attention is its emphasis on there being something wrong with OSS, something “broken” or “inherently weaker” with the code that needs fixing. The mindset of putting out a fire in open source, without critically reevaluating the relationship between OSS developers and consumers as well as the need for material acknowledgment of the importance of open-source code, threatens the long-term sustainability and security of OSS.

Pathbreaking research from Nadia Eghbal3 in 2016 helped present the public-policy challenge regarding OSS used to build essential technology systems. Not just an issue of shortfall in security, the OSS development model poses a basic problem of equity and value. OSS separates sale value, the amount a consumer is willing to pay for a free product, and use value, the amount this consumer gains by using it—an issue called out as early as 1997 by Eric Raymond.4 There is no clear market solution when conventional mechanisms to assign a value at sale and fractionally return that value to developers do not work. This kind of gap in a market opens a clear lane for public policy to do more than just support this infrastructure through the public purse. A survey conducted for this report,5 discussed in more depth in the appendix, shows 65 percent of respondents agreed or strongly agreed on the necessity of a government role for the long-term health of the ecosystem. Moreover, 70 percent saw direct government funding as necessary to ensure this.

Figure 1. Survey response
Figure 2. Survey response

However, this is not to say that government is the only relevant player. Respondents indicated that, while they largely thought a government role in supporting the OSS ecosystem was requisite for its long-term health, they did not necessarily see it as the main party responsible for stepping up to the plate. This reflects a common thread of argument throughout this report: the criticality of OSS projects is determined not by their creators but by those using the package, and accordingly, responsibility for the ecosystem primarily rests in the hands of its largest beneficiaries—here, industry.

What are we doing here?

This report builds on previous research by the Atlantic Council and others, as well as the collected insights of the Open Source Policy Network,6 to argue that public policy can address the ecosystem’s shortfalls by approaching OSS as infrastructure. Making policy to support and sustain OSS as infrastructure helps shift the view of this code from a place of fear about security vulnerabilities to one that understands OSS as a critical component of an efficient software ecosystem, while still acknowledging the important role policy holds in improving security writ large.

When policy focuses only on terrible potential outcomes, its ideas tend to reflect that bias toward fear, but this need not be the framing for OSS. Open source enables and solves much more than it imperils. Its security is as much a guarantor of continued value to users large and small, from individuals to national intelligence agencies, as it is a bulwark against malicious intent.

While OSS has returned to attention as an issue of national policy in the European Union (EU), and indeed become one for the first time in the United States, in some ways as a product of fear and calamity, the opportunities run much deeper. Infrastructure of such scale and magnitude is supported, reinforced, and amplified—not fixed in a brief whirlwind of activity—much like the consistent provision of clean water, roads and bridges, and healthy capital markets. This report proposes clear models for sustained OSS support and offers guidance on how governments in the United States, the EU, and its member states can implement such models.

Much like roads or bridges, which anyone can walk or drive on, open source code can be used by anyone…This type of code makes up the digital infrastructure of our society today.

Roads and Bridges: Unseen Labor 7

This report identifies key principles of OSS development and use. It relates them to other physical infrastructures for which there are mature policies and laws in an ensemble approach to combine nuance and tangible recommendations. The report points policymakers toward adaptable policies addressing more familiar forms of infrastructure that serve as case studies for government support of OSS. There are two reasons for this work.

First, as tangible as the infrastructure comparison is, OSS also has useful differences from physical infrastructure that offer opportunities for nuance. The open-source ecosystem is far more varied, complex, and dynamic than most physical infrastructure. Eghbal, for example, explains in detail the many differences between OSS and her chosen roads and bridges analogy.8 Obscuring that nuance can lead policymakers to ignore obvious benefits—the substantial human communities involved in building and maintaining OSS, for example. OSS is, ultimately, the product of people with a variety of motivations, not the least of which are pure enthusiasm, curiosity, and a desire for community. Given the ecosystem’s overwhelming variety, it is often more accurate to understand OSS as an expression of social interaction and group problem-solving. Rather than designed top-down, it is infrastructure that emerges.9 OSS is fundamentally free speech in machine-readable form, not exquisite public works produced under a single engineering vision. Dynamic, interwoven groups of individuals produce, modify and maintain the code, rather than it being a commodity, product, or service per se, which carries significant ramifications for law and policy, as well as the infrastructure analogy.10

Second, as policymakers consider OSS in the larger context of significant cybersecurity policy in the United States, a set of guiding principles would help predict and model policies’ impact on OSS. Common physical infrastructure shares similarities to OSS: both support critical functions, provide dependable services, offer subtle and often unseen service delivery, function through systems of decentralized control, and more. Government has long engaged in infrastructure policy, so drawing on those more familiar frameworks offers opportunity to hone engagement with, and support for, the OSS ecosystem.

To better capture the complexity of the OSS ecosystem, this report offers not one but three infrastructure analogies for OSS policy. They are water-management systems, capital markets in the financial services sector, and roads and bridges from Eghbal’s report. The comparison between OSS and water-management systems invokes both systems’ sprawling networks of producers, intermediaries, quality assurers, and varied use cases. It also highlights the relationship between the degrees of usage and responsibility to the overall sustainable functioning of the ecosystem and discusses policy models based on Nevada water law and federal regulations around funding and protecting volunteer clean-up efforts. The comparison to the financial sector focuses on the nature of risk and transparency in both domains, where a variety of modular, interconnected, and aggregated items (projects in OSS, assets in finance) create nodes of risk and leverage and where risk management relies on insight into the location and function of underlying system components. The section looks at policy efforts to identify and manage systemic risk created in these networks of dependence. Last, the roads and bridges comparison builds on Eghbal’s report, highlighting the importance of continual maintenance, funding, and tailored intervention across an interconnected network. It looks to the Highway Trust Fund (HTF) and adopt-a-highway programs for models of funding and support for key infrastructure.

Open source software is part of the foundation of digital infrastructure that promotes a free and open internet.

– S.4913, The Securing Open Source Software Act of 2022 11

For each analogy, the report addresses the prominent characteristics shared with the OSS ecosystem, explores the comparison in depth, and surfaces guiding policy principles before offering examples of relevant US and EU policies as potentially useful models for OSS. Following these analogies is a discussion of some existing government policies toward OSS and specific recommendations.

This report aims to develop tangible example policies for the United States and European Union to support OSS as infrastructure and point policymakers toward existing policy vehicles that government can readily modify and adopt to better support and engage with the OSS ecosystem. The report does not seek to make definitive statements about what open source is or is not through these analogies. Rather the goal is to capture a snapshot of its most essential features and most consequential participants. Any of the analogies can be extended far past usefulness, and policymakers should approach each keeping in mind the essential truth that, while all models are wrong, some (including, we believe, these) are useful, nonetheless. Before diving into the analogies though, this report looks to discuss the open-source ecosystem as it is, highlighting key principles and addressing common misconceptions.

2. The open-source ecosystem

While the motives of software developers can vary from securing a paycheck to satisfying personal curiosity, most software itself ultimately strives to carry out a task or solve a problem. Open-source software (OSS) is an acknowledgment that many such problems are similar and repeatedly encountered by developers. OSS works by making one solution to a problem available to all to re-purpose and re-use, which likely results in a strong return on investment (ROI),12 both financially and socially.13 While there are several different legal approaches to defining and licensing what is “open source,” the common OSS philosophy grants forward to users and consumers the rights to inspect, modify, and redistribute software—its source code is “open.”14 In this, OSS generally differs from closed-source or proprietary software by providing these additional rights.

The result is a vast network of overlapping communities principally involved with developing, maintaining, and integrating OSS. These communities range from volunteers to paid professionals, with participants who exist entirely outside the for-profit technology industry and myriad others who are full-time employees from the likes of Google, Microsoft, and Amazon.

While open source as a philosophy predates the internet—witness the chaotic ballet of licensing and development values that characterized the 1969 birth of Unix and its fractured gestation as one example15—the internet proved a tremendous accelerant to OSS development. Indeed, the emergence of online communities developing and maintaining open-source code helped meaningfully differentiate the internet from precursor telecommunications networks and gave tangible form to Licklider and Taylor’s vision of creative communications among thinking machines.16

Figure 3. Dependencies and contributions

There are several key characteristics of the open-source ecosystem for policymakers to keep in mind. First among these is its sheer scale and variety. Though treating open source as a monolithic concept is a convenient abstraction—and for high-level policy, a necessary one at least up to a point—the real landscape is staggeringly diverse. There are communities built around specific programming languages, from commonly known Python to the deliberately esoteric Befunge.17 Some communities center on specific projects like the Linux kernel, and others orbit downstream functions like encrypted communications tools or specialized statistical analysis packages. Some projects serve simple ends like correctly adding characters to the left of a string or number.18 Others provide word-processing programs19 or even entire operating systems, such as Linux and its many distributions.20 There are open cloud platforms such as OpenStack and open container orchestration systems like Kubernetes. There are also open-source code compilers, web servers, media players, and so on—some open source functions as standalone applications, some as deeply buried components for repurposing in different contexts. Some assembles programming languages into executable binaries, some builds software, some analyzes code for bugs, and so on.

The relationships between OSS projects and the larger software world are also complex and widely varying. A useful term here is “depth in stack,” referring to how deeply buried within an overall product or application OSS and other components can be. The most straightforward use of OSS might be in user-facing applications—for example, instead of purchasing Microsoft Word, one might download and use LibreOffice, an open-source word processor that provides largely the same functions as Word.21 A similar simple example of incorporating OSS into a project could include an academic researcher writing a data-analysis script in R, a commonly used statistics language. They might include the lines install.packages("ggplot2") and library(ggplot2) at the top of their script, giving them access to a variety of graphing tools and functions as they analyze a dataset.22
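The same one-line pattern of pulling in someone else’s open-source tooling appears across languages. Below is a minimal, hypothetical sketch in Python rather than R, assuming the open-source matplotlib plotting library has been installed (for example, with pip install matplotlib); everything after the import relies on code the researcher neither wrote nor audited.

```python
# Hypothetical analysis script: the only "integration" step is the import.
import matplotlib.pyplot as plt  # open-source plotting library pulled in as a dependency

# Toy data standing in for the researcher's real dataset.
years = [2019, 2020, 2021, 2022]
incidents = [3, 7, 12, 20]

# Everything below delegates the hard work to the OSS dependency.
plt.plot(years, incidents, marker="o")
plt.xlabel("Year")
plt.ylabel("Reported incidents")
plt.title("Analysis code leaning on an open-source dependency")
plt.savefig("incidents.png")
```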

Figure 4. Buried OSS relationships

Other instances of OSS reliance run far deeper and are more challenging to map out. A user in the simple act of watching a show on Netflix relies on an immense variety of OSS, from the streaming platform’s own open-sourced projects to the guts of the underlying Amazon Web Services (AWS) cloud instances,23 which include server operating systems, container orchestrators, and innumerable component services. The log4shell incident highlighted just how deeply buried OSS dependence can be and, accordingly, how challenging the task of identifying dependence is. One report found that 60 percent of log4j uses were indirectly rather than directly implemented, challenging remediation efforts.24 One study by Qualys found that as of March 2022, some 30 percent of log4j instances remained unpatched.25 This pattern holds across the ecosystem, where dependence is rarely obvious and easily identified when OSS components lie buried beneath indirect relationships and obscure references.
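To see how quickly indirect reliance accumulates, the sketch below offers a rough illustration, not how the studies cited above measured exposure. It uses the Python standard library’s importlib.metadata to walk the declared requirements of one installed package; the output depends entirely on what happens to be installed, and real resolvers also handle version constraints and environment markers that this deliberately ignores.

```python
# Minimal sketch: walk the declared (direct and indirect) dependencies of an
# installed Python package. Illustrative only; it ignores versions and extras.
import re
from importlib.metadata import requires, PackageNotFoundError

def direct_deps(package):
    """Names of a package's declared requirements, if the package is installed."""
    try:
        reqs = requires(package) or []
    except PackageNotFoundError:
        return set()
    names = set()
    for req in reqs:
        if "extra ==" in req:  # skip optional extras
            continue
        name = re.split(r"[\s;<>=!\[(]", req, maxsplit=1)[0]
        if name:
            names.add(name)
    return names

def all_deps(package, seen=None):
    """Recursively collect direct and transitive dependencies."""
    seen = set() if seen is None else seen
    for dep in direct_deps(package):
        if dep not in seen:
            seen.add(dep)
            all_deps(dep, seen)
    return seen

if __name__ == "__main__":
    deps = all_deps("requests")  # substitute any package installed locally
    print(f"requests pulls in {len(deps)} other packages, directly or indirectly:")
    for dep in sorted(deps):
        print(" -", dep)
```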

While all the above mainly considers the open-source ecosystem through the lens of the code, keeping its human basis in mind is critical. Members of the open-source ecosystem can wear many hats, from running a hobby project to integrating OSS into industry products in their day job, often moving between different communities, contexts, and ecosystems. Even the common roles for a given open-source project are fluid—a developer might open-source one of their projects and act as its maintainer while they continue to contribute.26 Down the line though, either from lost interest in the project or not enough time to dedicate to its maintenance, a developer might call in a well-known contributor as a maintainer, either transferring the project over entirely or creating a team of maintainers. Different communities rely on different governance models, from maintainer-controls-all to elected positions for a project or select individuals relied upon for commit reviews. These OSS participants also distribute geographically, their contributions enabled by the foundational transparency of the ecosystem.

It is helpful to frame open source as many different, interacting ecosystems. They evolve, respond to stimuli, compete, collaborate, have cultures, and follow norms. Actions that impact an open source ecosystem can have ripple effects beyond that ecosystem – and beyond the world of proprietary technology or even technology altogether.

Julia Ferraioli 27

While OSS directly invokes “the code” and its developers, there also exists a staggering array of intermediary entities supporting and shaping the software side of things. Code hosts (sometimes called “forges”) store the actual code in either public or private repositories—for example, Microsoft’s GitHub, though there are myriad other hosts.28 Registries or indices, like Node Package Manager (npm) and the Python Package Index (PyPI), record official versioning and documentation for some packages, though their code might reside on a code host like GitHub or be mirrored there. Package managers like Python’s Preferred Installer Program (PIP) are the tools that, starting with a user command, retrieve the necessary code from a repository. At the more human level, nonprofits—many of them business leagues, like the Linux Foundation or Open Source Collective29—provide financial support for programs, and others, like the Open Source Initiative, manage licensing definitions.30 Some groups might provide security tooling or developer support to specific projects—for instance, the Alpha-Omega project assists maintainers of critical open-source projects.31
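As a small illustration of the registry layer described above, the sketch below queries PyPI’s public JSON endpoint (https://pypi.org/pypi/&lt;name&gt;/json) for the metadata a package manager consults before fetching any code. The choice of the requests package is arbitrary, the fields printed are a tiny subset of what the index records, and network access is assumed.

```python
# Minimal sketch: ask the PyPI registry what it knows about a package,
# the same metadata a package manager reads before downloading code.
import json
import urllib.request

def pypi_metadata(package):
    """Fetch a package's published metadata from the public PyPI JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

if __name__ == "__main__":
    info = pypi_metadata("requests")["info"]  # any published package name works
    print("name:    ", info["name"])
    print("version: ", info["version"])
    print("license: ", info.get("license") or "unspecified")
    print("homepage:", info.get("home_page") or "n/a")
```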

Figure 5. Maintainer and contributor relationship

All this is to say that the open-source ecosystem is complex. With that complexity comes disagreement, and assuming consensus among the ecosystem’s participants is an oversimplification similar to presuming that the code is uniform and governance structures straightforward. Some of the key debates among OSS communities will have direct policy implications. Some maintainers worry about where their projects might end up used,32 some are wary of corporate involvement in the space shaping project direction and governance,33 and others see OSS as a path toward a digital right to repair.34 The survey conducted for this report reflects this diversity in priorities well. What respondents considered the greatest source of risk for the OSS ecosystem ranged widely, including technical concerns about memory-safe languages, practices for transferring project ownership, government overregulation, misunderstood or disregarded OSS community values, unknown and deeply intertwined dependencies, the insufficiency of economic models, maintainer burnout and overburdening, and even maintainer sabotage. Similarly, there was little consensus on what metric best captured the overall health and well-being of an open-source project community, with the number of active contributors and maintainers being the only standout answer, and not by a wide margin. Because this is a policy report first and foremost, many of these discussions are out of scope here, but they are nonetheless important to policymakers.35

Figure 6. Survey response

3. OSS as infrastructure: Three analogies

Defining infrastructure

Infrastructure is the “…vitally important, if not absolutely essential…” component that enables people to thrive, to create, and to build.36 Infrastructure is the plumbing beneath great ideas. Some definitions lean toward the tangible: roads, bridges, software code, and computer networks. Others emphasize the economic categorization—infrastructure as a public good. However, not all kinds of infrastructure fulfill the strict economic definition of a public good as both non-excludable and non-rivalrous.

Even physical infrastructure is not so easily defined and sees a significant amount of “know it when you see it” classification—for instance, the Cybersecurity and Infrastructure Security Agency (CISA) lists sixteen critical infrastructure sectors, with the selection criteria emphasizing critical far more than infrastructure.37 OSS is present within traditional critical sectors, serving as infrastructure in a very literal sense.38 For this report’s purposes of guiding policy, significant similarity between OSS and infrastructure is sufficient, and there is plenty to find.

First, OSS handles many of the digital world’s unseen, “nitty-gritty” tasks upon which the larger digital ecosystem relies. Take, for instance, any of the following: OpenSSL, OpenStack, Kubernetes, the GNU Compiler Collection, BIRD, and Linux running on most large internet servers—all of these are core to digital services and largely hidden from end users.39 Another striking example is cURL, which stands for client Uniform Resource Locator (URL) and is pronounced “curl” informally.40 It is a command-line tool and library for handling data transfers, residing within internet servers, gaming consoles, automobiles, operating systems, smartphones, and more.41 Consumers rely on digital systems for communications, financial transactions, transportation, healthcare, and other vital services—and many of those digital systems rely on OSS.
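As a concrete taste of that invisibility, the short sketch below drives the same libcurl library that ships inside many of those products, here through Python’s pycurl bindings; this is an assumption for illustration only, requiring pip install pycurl and network access, and the URL is merely a placeholder.

```python
# Minimal sketch: perform one data transfer through libcurl via pycurl.
import pycurl
from io import BytesIO

buffer = BytesIO()
handle = pycurl.Curl()
handle.setopt(handle.URL, "https://example.com")  # placeholder transfer target
handle.setopt(handle.WRITEDATA, buffer)           # collect the response body
handle.perform()                                  # libcurl does the actual work
status = handle.getinfo(handle.RESPONSE_CODE)
handle.close()

print(f"HTTP {status}, {len(buffer.getvalue())} bytes received via libcurl")
```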

Second, beyond this necessary but less visible support, both OSS and physical infrastructure scale massively beyond their immediate surroundings, enabling huge swathes of the economy, end-user products, and more. One frequently cited report from Synopsys found that 78 percent of code in surveyed codebases was open source, while 97 percent of codebases contained at least some OSS.42 Buried in the settings of every iPhone (Settings > General > Legal & Regulatory > Legal Notices) is a four-thousand-line-long, barely navigable list of all the licenses declared by the phone, many of which concern the open-source components it relies on—including, in iconic OSS style, “‘THE BEER-WARE LICENCE’ (Revision 42)…As long as you retain this notice you can do whatever you want with this stuff. If we meet some day, and you think this stuff is worth it, you can buy me a beer in return.”43

Third, much of what physical infrastructure accomplishes happens out of immediate public view and is easily taken for granted, despite its centrality to a smoothly functioning society. Rarely does the end user think of complex tangles of transmission lines, transformer hubs, and powerplants when flicking on a light switch—except when the lights stay dark. Similarly, most end users are unaware of the role that OSS plays in the digital systems that underpin their daily lives. Likewise, that dependence remains underappreciated until disruption of the end service.

Fourth and finally, the variety of forms of “ownership” or stewardship of OSS mirror the complex web of federal, state, local, and private ownership of physical infrastructure. In physical infrastructure, some sectors see almost complete federal ownership, some feature neat division among state or local governments and industry, and others rely on the many distribution patterns in between these.44 For OSS, some projects are individually maintained, others housed in nonprofits or funded by foundations or trade organizations, some with support from large information technology (IT) vendors, or even maintained and curated by for-profit companies, and more. Some technology companies develop software projects in-house before “open sourcing” them out into the world. The variety of governance models in both domains requires careful, targeted, and flexible policy.

Industry players have repeatedly emphasized that OSS insecurity largely reflects the challenges of securing any kind of software—vulnerabilities are inevitable and agnostic to licensing.45 The US government, meanwhile, has focused its most prominent efforts on OSS through a security lens—the first bill in Congress addressing OSS as an ecosystem, S.4913, is the Securing Open Source Software Act of 2022, and congressional testimony and other spurts of government attention tend to react to security incidents like log4shell and Heartbleed. In one dataset of OSS government policies, security and modernization were the two most popular stated purposes for US policies related to OSS, with security holding the majority in the proposed legislation.46

This security focus does not and should not imply that OSS is in any way less secure than proprietary code. The two are not so easily distinguished, and the ability of anyone to review OSS for vulnerabilities should, at least in theory, make it more securable, if not secure, than obscured proprietary software. Rather, the fact that OSS underpins so much software and modern infrastructure means that its security, which is subject to some different incentives and forces than proprietary offerings, is of notable importance. This is like how CISA focuses on securing infrastructure not because it is innately insecure, but because it is critically important to the national interest. OSS is already as commonplace, structurally critical, and hidden from end users as rebar inside the reinforced concrete of a bridge span. It is as critical, mundane, and—in some circles—unappreciated as the water treatment plants that ensure healthy drinking water or the catenary wires above an electric train. Where that criticality exceeds the ability of other policy levers to create change, a security lens helps prioritize action and investment, especially when shaping industry behavior.

Three analogies

Treating OSS as infrastructure also invites other forms of engagement without exclusivity. While some governments might focus on supporting the security of OSS insofar as it is infrastructure, others can focus on investing in it for the holistic benefits to society or for the influence it might provide their countries in shaping the future social impact of important technologies. Infrastructure corresponds to investment and provides a ready framework for international cooperation. An infrastructure framing allows stakeholders to hold independent priorities under common, unifying principles.

Different characteristics of the OSS ecosystem evoke different kinds of infrastructure. This section describes the report’s ensemble model: three analogies each mapping from principles shared by open source and a form of infrastructure to offer policy takeaways for the open-source ecosystem. Each analogy uses the language of tangible infrastructure alongside real-world policies that invest in, and support, this infrastructure. The table below summarizes these shared principles, infrastructure comparisons, and policy takeaways, in addition to the broader commonalities between physical infrastructure and OSS noted so far.

None of these analogies is complete on its own. Taken together, they present a practical view of much of what makes OSS work and work well at that. The takeaways intend to steer policymakers toward practical, considerate models for policy action shaped by lessons previously learned and concepts properly ordered.

Figure 7. Table of shared principles of infrastructure and open source

This section also provides several direct models for the beginnings of government support for OSS—these are not prescriptive policy recommendations but rather tangible examples of how the investment of funds and other resources can help better support OSS. These models highlight effective parallels to OSS policy challenges either through the problems and questions they address, the intervention strategies they offer, the systems dynamics they navigate, or some combination.

Water management systems

Water management and distribution systems share two crucial characteristics with the open-source ecosystem. Most visible are both systems’ continuous, directional relationships. Software development speak already roots itself in hydrologic nomenclature. The “upstream” and “downstream” relationships borrow from literal descriptions of rivers to describe how choices along supply chains impact different participants. Often, though not exclusively, these relationships explain the trickle-down impact of upstream incidents—for instance, the downstream users exposed to the recent log4shell vulnerability, or when the deletion of a little-known package called left-pad briefly broke websites across the world.47 For water management and distribution systems, an upstream issue with a dam might impact water levels downriver, or changes in weather patterns might disrupt aquifer replenishment, causing shortages for downstream users, whether industrial, agricultural, or otherwise.

Figure 8. Water management and open source

This straightforward language about chains of dependency and shared exposure also describes another similarity between water infrastructure and OSS: the obligation of its users to contribute to the sustainability of the larger ecosystem, from statewide apportionment of the Colorado river to agricultural collectives deciding on the usage of local aquifers. For both water and OSS, a relatively small subset of users relies more heavily on shared resources than others. Hydroelectric facilities and large farms can use more water in an hour than an average household does in a year.48 Likewise, massive IT vendors ship widely used products incorporating numerous open-source projects, while a researcher might rely on only a handful of packages aiding in statistical analysis.

While the policy solutions to protect the sustainability of water and the security of OSS do not map perfectly—a hard quota on industry use of OSS makes little sense, for example, as OSS is a non-rivalrous resource—the general ethos is critical: the largest users carry the largest obligations (and capacity) to contribute back to the sustainability of the ecosystem. Just like growing populations and a changing climate mean that water consumers and policymakers need to invest in conservation and sustainability,49 the growth and increasing criticality of the OSS ecosystem means that OSS consumers and policymakers must understand that the availability and innate usability of the underlying code cannot be guaranteed without support. Few expect water taken directly from a stream or pond to be immediately potable. Neither should consumers assume the security and independent governance capacity of OSS projects pulled into products without some level of security assurance and code review. Again, not because OSS is any less secure than proprietary offerings, but because it is all too likely that projects were developed without specific consumer usage in mind, and therefore, consumers should not expect them to cater to their exact management needs. An overriding principle of open-source licenses is that this code is delivered “as is.”

Water infrastructure also highlights the immense varieties in use, governance, and creation in the open-source ecosystem. Just as water fuels textile production, energy generation, and individual consumption alike, OSS has a wide variety of use cases, including hobbyist tinkering, academic research, internet functionality, and business- and product-critical operations. Open source and water management systems also feature large networks of intermediaries between easily conceptualized endpoints (e.g., developer and end user, mountain spring and sink faucet). Water does not just flow, uninterrupted, from a stream or spring into a residential tap, but instead twists through a series of reservoirs, canals, treatment facilities, and plumbing. In the same way, much OSS is incorporated into software projects, those projects into others, and so on, passing through project maintainers, repository hosts like GitHub, private mirrors within companies, curators like Red Hat, auditors like the Open Source Technology Improvement Fund (OSTIF), and the transitive dependencies of other projects before ever reaching a user.

Many OSS stakeholders worry that government investment and support will bring onerous obligations and regulations for developers,50 whether in the form of liability or excessive documentation, that risk dissuading developers from providing open-source systems. Water management systems provide a clear parallel example of an alternate approach. In the same way that companies and individuals do not assume the purity of water in unknown streams or springs, neither should they assume that volunteer developers, often uncompensated for their work, have provided perfectly secure code and will bear total responsibility for repairs and upkeep. Most open-source licensing bears out this relationship, including something to the effect of the Apache 2.0 license’s phrasing: the “licensor provides the work (and each contributor provides its contributions) on an ‘as is’ basis, without warranties or conditions of any kind.”51 OSS users, especially the largest and best-resourced, should bear more of the responsibility for supporting the security, and appropriate selection, of open-source software, rather than using it blithely and thereby trusting warranties never promised. Among more mature OSS consumers—particularly large IT vendors—this relationship is well realized, with vendors like Microsoft, Google, and others investing significant funds and developer time into the OSS ecosystem.52 Governments can participate in similar relationships by funding OSS development and potentially even contributing to projects themselves, setting an example that may spur other large entities to act in kind.

The similarities between water management systems and OSS, including directional dependence, complex webs of intermediaries, and the need for sustainable usage, suggest a paradigm for policymakers weighing potential engagement with the open-source ecosystem. Considering directional dependence prompts a more accurate understanding of the importance of intermediaries in OSS as well as a better starting point for understanding the criticality of different OSS components and how to preempt costly incidents. Instead of expecting open-source software to be perfectly stable, well-maintained, and fully secure upon import, OSS consumers can continue to take more responsibility for their usage and all its benefits, consequences, and attendant obligations. Considering those connections also emphasizes the existing network of intermediaries between developer and end user, which government must engage with rather than disrupt. Finally, the water-management comparison emphasizes that a sustainable ecosystem requires a proactive relationship between large users and the source; an affirmative responsibility to contribute back to the ecosystem. Organizations with high expectations for, and dependence on, OSS, be they public or private sector, should devote substantial resources to supporting the relevant communities in meeting those expectations. Failure to do so will leave the OSS ecosystem perpetually under-supported and increasingly unable to support more complex and systemically critical use cases. Were open source to become unsustainable because of such overuse, or because of integration into critical applications without responsible consideration, the benefits of OSS to all would be imperiled.

Nevada Water Legislation: Mandate responsible use

Regulations surrounding water use, allocation, and sustainability in the United States are largely the purview of states or multi-state consortiums.53 Even where the federal government does take a more active role in water safety standards, such as with the Clean Water and Safe Drinking Water Acts, considerable room for state governments to take the lead exists, by design.54 Water management legislation in Nevada, the country’s most arid state, offers two examples of policy vehicles well-suited to the OSS ecosystem: Senate Bills (SB) 47 and 74, both passed in 2017. First, in SB 47, Nevada adopted the stance that “it is the policy of this State…To manage conjunctively the appropriation, use, and administration of all waters of this State, regardless of the source of the water.”55

From the OSS perspective, this is a straightforward acknowledgment of how usage drives criticality—that, regardless of the source of code or water, effective policy lies in governing where and how software is consumed as much as or more than how it is developed. In this sense, policymaking that takes OSS as it exists, rather than aiming toward an unrealized ideal for the code itself, is useful, and the parallel is particularly apt for Nevada, whose primary sources of water generally originate in other states.56 SB 74 offers more concrete guidance, requiring water suppliers—here analogized to OSS intermediaries—to develop water conservation plans,57 with some additional requirements for larger providers.58

Both SB 47 and SB 74 put a large burden for the sustainability of the state’s water use on intermediary water suppliers—ostensibly those pulling water from its sources and sending it to users for various “municipal, industrial, and/or domestic purposes” downstream.59 For OSS, this compares with ensuring responsibility lies with those who take open-source packages and use them in downstream applications, rather than expecting the river of OSS itself to be clean and self-sustaining to a degree sufficient for uses outside its control (or even on the repositories, similar to aquifers and reservoirs here). These bills focus on water suppliers not just as users of the resource but as intermediaries with much sway over the connective infrastructure, specifically calling out their role in developing “standards for water efficiency for new developments” and reducing leaks, among other provisions.

There is no shortage of OSS,60 but insofar as conservation serves as a synonym for sustainable use, federal OSS policy can draw on this framing. A policy pivot away from just assessing the risks of using OSS—say as required by many conventional supply chain risk management programs—and toward broader models of enforcing responsible use might include recommending an explicit Sustainable OSS Usage Plan as a signal of large OSS users interacting responsibly with the ecosystem, inclusive of managing their risk posture but also deliberate, systemic efforts to identify and support communities around critical OSS dependencies. There is much to be gained in shifting the focus of OSS policy to improve security from the developers and their code (“the source”) to the framing of aggregate usage, reliance, and responsibility.

Moreover, the specific requirements of the Nevada conservation plans amount to a call for suppliers to explicitly understand their place and role in the larger ecosystem. Regarding intermediaries, more policies from both government and industry might focus on the ability of large code-hosting platforms to leverage their position as natural bottlenecks in the ecosystem (as the means for many to access repositories and store their code) to provide useful tooling at scale to OSS communities. Some of this work is underway, and this is not a claim that it is insufficient but rather a call for policy to capitalize on those points of outsized returns on tooling investment and integration. Importantly, this is not a call for platforms to be responsible for the safety of all the code they host, but rather for them to be useful in the distribution and usability of tools to projects—providing tools and capability for responsible use and security-conscious development. In line with the water analogy, consideration of the context of different use cases is key—just as water powering hydroelectric dams need not be drinkable, different use contexts imply different support obligations and maintenance standards.

Good Samaritan Initiative: Limit liability for volunteers

Federal water law, meanwhile, has useful models for encouraging external support for the OSS ecosystem—specifically, unmaintained dependencies. The Environmental Protection Agency’s Good Samaritan Initiative helps facilitate the cleanup of abandoned mines, a significant source of water pollution, with over half a million abandoned mines estimated throughout the country.61 Volunteers assist in the cleanup of these abandoned mines, providing a great benefit to their communities, which often rely on the same water impacted by the pollution. The Good Samaritan guidance protects those volunteers explicitly from liability for their efforts, effectively lowering the bar to entry for helpful ecosystem contributions. Some federal programs go further by directly funding cleanups of water systems, though these often come within larger spending packages rather than pulled from specific funds.62

There are two OSS parallels here: unmaintained projects and organizations doing support work (e.g., security auditing or incident response support). On the former, a Tidelift study in 2019 found that between 10 and 20 percent of common OSS packages lacked active maintainers, posing obvious security and sustainability challenges and likely arising as a symptom of limited developer time and resourcing.63 Organizations that support OSS projects are just an extension of this parallel beyond the common language of abandonment.

Government and industry might help improve the overall OSS ecosystem’s health through incentives for Good-Samaritan-style engagement and by continuing to maintain the widely understood protection for OSS developers and maintainers against liability arising from the downstream uses of their components. This comparison points to the importance of policymakers vetting proposed policies relating to security requirements for OSS to ensure they do not create additional compliance-related liability for OSS developers, contributors, or maintainers, which might paradoxically deter individuals and organizations from contributing to the OSS ecosystem.

In addition to liability protection, an OSS policy equivalent could emphasize broader support and investment by funding external support groups (much of which already takes place through the private sector), guiding them toward critical under- or un-supported projects, and rewarding and aiding the "adoption" of orphaned projects still in use. There has already been some consideration of these approaches outside the public sector, such as the Alpha-Omega project and several academic studies64—providing the basis less for reinvention than for renewal of government support as part of a broader engagement with the OSS ecosystem.

Environmental regulations, including water management systems, in the EU are guided by the “polluter pays principle,” which states that polluting entities should be responsible for costs like pollution control and prevention.65 The principle encompasses a wide variety of regulations targeting different industries including agriculture and manufacturing. The types of cost for which polluters are responsible also vary, funding anything from cleanups of pollution they caused to investigations and permitting efforts. The principle is explicitly included in several important pieces of regulation, such as the Water Framework Directive and Waste Framework Directive.66 Not all regulation is in line with the principle yet, but its inclusion in recent regulatory efforts and role guiding future policy demonstrates the EU’s emphasis on ensuring that those who use natural resources, resulting in their degradation, pay for the consequences of their actions so the public need not foot the bill.

Capital markets

A critical feature shared between financial markets and the open-source community is that both liquidity and OSS act as enabling inputs to a wide variety of other industries. Financial backing and loans from investors enable businesses and individuals to raise capital to overcome initial fixed costs, which is vital for getting businesses off the ground. Similarly, OSS allows businesses and individuals to save vast amounts of time and effort that would otherwise be spent re-solving similar problems—a critical input that helps overcome burdensome upfront investment. This enabling-input characteristic is true of many forms of physical infrastructure—in water management systems as noted above, as well as power grids, gas pipelines, transport networks, and more.

Capital markets, however, highlight the relationship between risk and transparency. In capital markets, debt or equity in real-world assets, stocks in companies, and mortgages back numerous financial instruments. Financial actors can manage their risk only by understanding the valuation and risk of these underlying components, and there are many intermediary entities such as ratings agencies that help create and provide this information. The 2008 financial crisis serves as a useful reminder of the consequences of failures in this system—when ratings agencies inaccurately appraised the risk of mortgage-backed securities, huge portions of the financial sector were left holding fundamentally unsound investments believed to be low-risk, leading to disastrous, global consequences.67 Without accurate transparency, sources of systemic risk went unidentified, unaddressed, and unmitigated, fueling a financial meltdown.

There are useful parallels for the OSS ecosystem here. Like financial instruments, OSS often serves as the building blocks for other end products. For consumers and producers, visibility into these components is necessary to improve risk-management practices. The entity that assembles a bundle of financial instruments—or a bundle of software that includes OSS components—holds a better perspective than the end user to understand the risks, as well as to know how to manage that risk through investment in upstream packages and projects. More transparency from assemblers can help recipients better understand the components within a product or a project and adjust their incident response and risk-management practices accordingly. The financial sector has developed procedures for assessing and describing risk, due to a combination of regulation, profit motive, and market demand. Industry-led development of tools and data to enable visibility into the use of OSS and other software components is already underway—software bills of materials (SBOMs) offer point-in-time insight into the components in a given piece of software (including open-source components), and ratings systems and metrics platforms like Supply-chain Levels for Software Artifacts (SLSA), Community Health Analytics in Open Source Software (CHAOSS), and Open Source Security Foundation (OpenSSF) Scorecards offer aggregated insight into the security posture and maturity of those component projects.
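
For readers less familiar with the format, the following minimal sketch (in Python, against an assumed CycloneDX-style JSON SBOM with an illustrative file name) shows how an organization might enumerate the components an SBOM records. It is an illustration of the data's shape, not a reference to any particular vendor's tooling.

# Minimal sketch: list the components recorded in a CycloneDX-style JSON SBOM.
# The file name and field layout are illustrative assumptions, not a reference
# implementation of any specific tool.
import json

def list_components(sbom_path: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every component in the SBOM."""
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)
    # CycloneDX places components in a top-level "components" array.
    return [
        (c.get("name", "unknown"), c.get("version", "unknown"))
        for c in sbom.get("components", [])
    ]

if __name__ == "__main__":
    for name, version in list_components("example-sbom.cdx.json"):
        print(f"{name}=={version}")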

Figure 9. Capital markets and open source

At the systemic level, transparency and visibility into the use of OSS components can highlight where the wider digital ecosystem is leveraged on a small number of critical packages, helping to prioritize support and investment on all fronts. Heartbleed, the left-pad incident, and log4shell illustrate this kind of risk—where disruption in a single upstream component has widespread effects, and in some cases, deep ones.68 The Census II report from the Linux Foundation and the Laboratory for Innovation Science at Harvard offers an example of the benefits of such system-scale analysis. The report used aggregated software-composition-analysis (SCA) data to identify open-source components widely depended upon across industry69—notably, the report identified log4j, the library impacted by the log4shell vulnerability, as one of those widely used packages (after the incident, unfortunately).70

The comparison to the financial sector also offers a model for how government might interact with industry and the open-source ecosystem. As noted, the private sector is already developing many of the tools that will help address risk with transparency. Government's role in that space is best understood as one that supports and provides appropriate incentives, especially for adoption over prescription—for example, through its procurement policies—rather than supplanting these tools or resorting to intensive regulation. As in financial markets, government is well-positioned to guide ecosystem-scale efforts toward a better understanding of aggregated risk concentrations. And, as with financial market data, government may also need to consider how to safeguard data collected for that analysis, which may have proprietary or trade-secret sensitivities. For OSS, a list of critical projects would be as useful to attackers in guiding their efforts as to defenders. Finally, at the most abstract level, the relationship between transparency and risk to the larger system can help guide broad government strategy, emphasizing that transparency and openness are not just rhetorical values but practical tenets of extreme, tangible benefit to the stability of the overall ecosystem.

Financial Stability Oversight Council: Transparency toward proactive stability

Many proposed cybersecurity policies require a substantial level of system knowledge and data availability: identifying critical OSS packages across entities, the most significant users and beneficiaries of OSS, the overlap between projects that are unmaintained or under-resourced and those that are key dependencies, and more. Policy vehicles from the financial sector, particularly those born out of the 2008 crisis, offer models for managing risk through transparency and an ecosystem-scale lens. Formed by the Dodd-Frank Act, the Financial Stability Oversight Council (FSOC) within the Department of the Treasury works to "address several potential sources of systemic risk…[by] monitoring financial stability and designating…companies…and utilities as systemic[ally important]."71 Where it identifies systemically important financial market utilities (FMUs), it can subject them to additional regulation in concert with the wide array of relevant government offices and regulators.

A parallel office for OSS would serve to identify projects, dependencies, and even entities that constitute systemically important infrastructure and, instead of imposing regulations, might offer those nodes of risk more targeted and comprehensive support, coordinating among government cyber authorities and industry rather than financial regulators. Such a federal office would not need to limit its study to OSS dependencies. It could also contribute to analyzing cyber risk within other complex systems like cloud service providers and critical vendors to government.

Identifying points of risk concentration created by system-scale OSS dependencies points policy immediately toward the next mechanism from the financial system: stress testing. For financial entities, stress testing boils down, in part, to liquidity requirements—minimum asset-liability ratios meant to ensure institutional resilience to market shocks, or more simply having enough cash on hand to cope when things get ugly. For the OSS ecosystem, the first steps toward stress testing might include—once critical dependencies are better identified and understood—by-sector requirements for contingency planning in response to the compromise or degradation of important OSS packages. For example, government might start requiring such risk management of critical infrastructure sectors. This could also include exercises to respond to vulnerabilities in deep-in-the-stack packages or active compromise of developer tools or authentication systems widely depended on by identified software.

Critiques of the FSOC, and the larger Dodd-Frank Act (DFA) of which it is a part, illustrate useful considerations for a parallel body overseeing digital risk management concerning the OSS ecosystem. One notable concern with the DFA was its potential to overburden banks—both compared to other parts of the financial system and compared to international banks not covered by the act—to their detriment.72 Crucially for the OSS ecosystem, increasing burdens on open-source project developers and maintainers, already short on time and money, should be a non-starter for any policy. Given the principle that use (rather than the manner of construction) determines the criticality of an OSS project, any responsibilities added to existing regulation will better suit large vendors, and, even there, an OSS FSOC need not create further red tape. Rather, such an entity could focus on gathering data, perhaps initially focused on the federal government's most essential digital systems, a process that could provide insights used to focus later iterations with other entities such as industry-heavy critical infrastructure sectors.

Metric selection is a significant challenge when assessing the risk of OSS projects, requiring careful consideration both of factors that affect a project's capacity for secure development and of the levels of dependence on that project across a vast digital ecosystem. When asked about the former, survey respondents for this report were generally split across answers, emphasizing the lack of consensus on key risk heuristics, though they did consistently devalue the number of sponsors, corporate or individual, that a project had, while weighing project popularity, a history of recent vulnerabilities, and community size more significantly.
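
As a purely illustrative sketch of how such heuristics might combine, the Python snippet below weights the kinds of factors respondents emphasized. The weights, field names, and the assumption that smaller communities and fewer sponsors indicate higher need for attention are invented for the example and are not findings of the survey.

# Toy scoring heuristic; weights and directionality are assumptions for
# illustration only, not recommendations from this report's survey.
from dataclasses import dataclass

@dataclass
class ProjectSignals:
    popularity: float      # e.g., normalized download share, 0..1
    recent_vulns: float    # normalized count of recent vulnerabilities, 0..1
    community_size: float  # normalized active-contributor count, 0..1
    sponsor_count: float   # normalized number of sponsors, 0..1

def risk_attention_score(p: ProjectSignals) -> float:
    """Higher scores suggest a project may merit closer support or review."""
    return (
        0.35 * p.popularity
        + 0.30 * p.recent_vulns
        + 0.25 * (1.0 - p.community_size)   # smaller community treated as riskier
        + 0.10 * (1.0 - p.sponsor_count)    # sponsor count weighted least
    )

if __name__ == "__main__":
    example = ProjectSignals(popularity=0.9, recent_vulns=0.4,
                             community_size=0.1, sponsor_count=0.05)
    print(round(risk_attention_score(example), 3))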

A focus on identifying risk concentrations, rather than mandating how to address and manage that risk, would also help a potential OSS FSOC equivalent navigate another concern it would share with its financial counterpart: the complexity of the existing network of relevant authorities. The web of federal financial authorities, not to mention the role states play in other portions of that sector, is a challenge for the FSOC to navigate.73 Moreover, the division of powers and controls among federal cyber entities is even less mature. Many key agencies have come into existence only within the past decade, and cybersecurity authorities in the United States remain unresolved and overlapping, divided among CISA, the Office of the National Cyber Director, the Office of Management and Budget, sector-specific agencies, agency chief information officers, and a variety of other offices and regulators at the federal and state levels. A digital FSOC's primary focus on information gathering and collation would avoid stepping on the roles and responsibilities of other entities while providing ecosystem visibility to help them regulate more effectively. A mission of identifying nodes of dependence would help avoid messy interagency conflict while still highlighting systemic risk and helping the federal government get its own (cyber) house in better order.

Operating similarly to the FSOC in the United States, the EU's European Securities and Markets Authority (ESMA) oversees European financial markets. ESMA's four objectives are assessing risks, developing standards for financial entities, ensuring the consistent application of financial regulations across the EU, and directly overseeing specific kinds of financial entities. ESMA releases detailed reports on the European financial markets, with specific releases focused on various securities, derivatives, alternative investment funds, and retail investment products. Like the FSOC, ESMA was created in the aftermath of the 2008 financial crisis as regulators sought more insight into the interactions among complex financial instruments. ESMA focuses more on broader ecosystem risks across the European financial system than on subjecting certain companies or utilities to heightened scrutiny, in line with its advisory role.74

Roads and bridges

The titular comparison of Eghbal's Roads and Bridges report links OSS to critical transportation infrastructure. The comparison draws out key characteristics of the open-source ecosystem, such as the free-rider dynamic and the necessity of consistent, mundane maintenance. The concept of usage driving the need for maintenance deserves particular focus. OSS is used in many varied contexts and is the backbone of most digital technology. Like interstate highways and other transportation infrastructure, open-source software inevitably requires maintenance, and waiting too long to address emerging issues can result in a catastrophic incident down the proverbial road.75 Responding to individual issues, like the collapse of a bridge or a widely publicized vulnerability like log4shell, is essential but is not enough to ensure the stability of the essential infrastructure of transportation systems or OSS. Coupling a recognition of OSS's essential nature with an understanding that most code is not static and will require additional support over time allows for targeted policies that address the crucial challenges of OSS ecosystems.

Figure 10. Roads and bridges and open source

Relatedly, both physical transportation infrastructure and OSS ecosystems suffer from widely varying support, with no reliable transaction model to capture value from those who use the infrastructure and feed it back to maintenance and support. Eric Raymond identified this issue in The Cathedral and the Bazaar as a discontinuity between sale value and use value—the value of code at the point of transaction vs. its value in use over time.76 Roads are costless to use outside of specific toll schemes and yet valuable to their users, especially when well surveyed and maintained. The widespread assumption of availability means that, without sufficient dedicated efforts to overcome this lack of support through consistent maintenance and funding, roads and bridges would collapse due to damage from use, while essential OSS components may degrade in availability or security as their developers fail to receive support commensurate with the criticality of their code.

The roads and bridges analogy also captures well the variety of use within the open-source ecosystem. In the same way that interstate highways receive more traffic than streets in suburban neighborhoods and some roads provide singular access to remote geographies, certain packages are critical due to either the large number of software packages dependent on them or their service of a particularly niche function, while other packages might be relatively less important to the ecosystem due to a lack of widespread use in downstream applications. Importantly, there is no singular way to use any OSS project—each can serve different users and applications differently, much like how roads rarely require or serve a single destination and are agnostic to the route of drivers.

Government has long worked to close resourcing gaps in transportation infrastructure, for example, through the Highway Trust Fund (HTF). While the exact nature of the most useful forms of support for OSS is up for debate—they might include any combination of funding, developer hours, tooling, security auditing, and more—government is uniquely resourced to bolster efforts in closing that gap and to help reset market expectations for contribution by the private sector. None of this is to counter or dispute the original Roads and Bridges report. Rather, this report emphasizes the utility of its analogy of choice, adds others to capture different OSS traits, and below strives to connect extant transportation policy to workable OSS models. Figures 11 and 12 capture survey responses to questions on what methods of external support and investment for open-source projects would be most useful, for open-source maintainers/developers and for downstream users respectively. The results are notably consistent across both questions, highlighting the link between upstream resources and downstream benefits.

The Highway Trust Fund: Consistent and sustainable support

For transportation systems, the HTF provides an example of consistent funding to maintain critical infrastructure. Maintaining transportation infrastructure requires preventative, systemic investment instead of reactive disaster response; the Highway Trust Fund provides financial support so that bridges do not have to collapse before they receive maintenance. As such, it provides a useful model for how to fund the maintenance of OSS.

HTF funding is spent largely through grants to state and local governments, suggesting the importance of working with existing entities within an ecosystem that hold regional expertise.77 The federal government should not depend only on its own knowledge to identify useful recipients of funding—instead, it should work with industry and the existing web of OSS stakeholders, including volunteer networks and paying foundations, relying on their expertise in the domain. Like the HTF, OSS funding could support rather than supplant existing efforts.

The HTF's explicit focus on construction and maintenance is also a model of a solution for a potential shortcoming in existing OSS funding: several previously mentioned examples of funding intermediaries tend to focus on investing in the development and creation of open-source solutions, but support is also needed for the long-term, less glamorous work of maintaining OSS projects—managing contributions, ongoing security engagement, community governance, and so on. The solution might look like a federal OSS Trust explicitly focused on backing extant projects rather than spinning up new ones. It might directly pay maintainers of critical projects, as well as support the development of tooling, security support organizations, and other scalable means of supporting a broader ecosystem of OSS components. Relatedly, survey respondents for this report prioritized tooling, with several specifically calling out automated, scalable solutions, and direct funding to OSS developers as most useful for both OSS project support and downstream security.

Figure 11. Survey response
Figure 12. Survey response

It is also worth mentioning the funding source that feeds the HTF: fuel taxes. From an economic perspective, the HTF thus linked (if by happenstance more than economic design) two distinct policy vehicles: a taxed negative externality and a subsidized public good. In a key difference from the HTF’s fuel-tax funding, there is no clear negative externality for OSS usage, and policy should not aim to discourage its use. Instead, it should develop incentives for more responsible usage, such as tax credits for upstream contributions and donations to an OSS fund. Such a model for OSS, a fund supported by consistent contribution premised on use value, would offer another incentive lever for policymakers to encourage large OSS consumers to contribute back to the sustainability of the ecosystem, and could potentially encourage additional industry players heavily reliant on OSS but outside the IT sector to play an increasing role in supporting OSS. These entities might rely just as much on OSS as IT vendors but struggle to mature their own OSS programming and therefore benefit from more general means of upstream support.

Adopt-a-highway: Incentivize direct local support

Transportation policy also provides a useful model for community-specific support. Adopt-a-highway programs are usually state-run endeavors connecting volunteers with stretches of local roads to remove litter. Aside from the convenient marketing phrase—adopt a package78—programs linking volunteers to funding and to the packages they rely on and benefit from supporting offer another investment vehicle.

Adopt-a-highway programs have faced challenges with groups seeking to participate in such programs.79 While parallel lessons are not as direct here as with the HTF, it is worth clarifying the role of any potential adopt-a-package programs (AAPPs) in OSS. One long-running concern for OSS communities has been the role of large corporations in the governance and direction of open-sourcing products, potentially keeping features behind a paywall with forked proprietary code or swamping independent projects with their sheer volume of contribution.80 While the appeal of adopt-a-highway programs often lies in the optics of supporting local infrastructure, AAPPs can have a more practical purpose—they should focus on enabling and regularizing vendors substantively supporting the OSS projects they rely on, a practice already seen in isolated examples in the IT industry, with public recognition a secondary concern. There is a material benefit to these kinds of relationships, from component familiarity to better-managed and better-resourced projects. Moreover, any implementation should delegate substantially to industry, which can better identify what projects require support.

Challenges that the HTF and adopt-a-highway programs have encountered can help pave a path forward for similar investment in the OSS ecosystem. The HTF, funded mainly by fuel-tax proceeds, has faced solvency crises requiring congressional intervention.81 Concerns about the source of funding are pertinent to any potential federal OSS fund. Fortunately, some key differences between OSS and physical infrastructure help here. Road construction is slow and disruptive, but maintenance of OSS projects and support for their developers is far less so, which should help the popularity of such investment. While return-on-investment (ROI) studies for OSS and highways are somewhat spotty, the estimates for OSS ROI are promising if realized,82 in addition to the knock-on benefits such investment might provide to national security concerns, workforce shortages, and more. Meanwhile, some OSS incidents can be directly connected to shortcomings in support,83 from unpaid developers pulling down widely used packages to small teams challenged with vulnerability identification and remediation at scale.

Finally, while valid concerns about investment in transportation projects leading to government "picking winners" exist,84 the OSS ecosystem indeed already has winners—projects meriting investment by virtue of their ubiquity, their criticality, or both—and there is much benefit to security in identifying those projects to begin with, as noted above. Moreover, the extant field of governance and support infrastructure from industry, nonprofits, and philanthropy already prioritizes some projects and modes of support over others—by necessity and often with more expertise and domain-specific knowledge than currently available to the federal enterprise. Working with and through those entities, rather than in parallel or at odds with them, and focusing on support and maintenance as much as or more than project creation is a promising avenue for avoiding the lived shortfalls of some physical infrastructure planning.

These tangible policy vehicles all aim to make the three OSS-as-infrastructure analogies more readily useful, adding concrete intervention models and consideration of past challenges to the guiding principles and high-level characterizations of the OSS ecosystem already provided. The following section discusses a sampling of existing or proposed initiatives for policy engagement with the OSS ecosystem before converting the analogies into direct recommendations, primarily for government, with some items involving significant public-private coordination or handing the reins to industry.

Outside the United States, transportation infrastructure also faces a disconnect between the assumption of availability and the lack of support from those depending upon it. To overcome this gap and ensure essential infrastructure is maintained and reliable, the EU has several large funds that provide grants to build or maintain roads and other components of the transportation system. The Connecting Europe Facility (CEF) targets cross-border transport infrastructure, while the Cohesion Fund (CF) provides additional funding to countries in the EU with a Gross National Income per capita below 90 percent of the EU average. These funds help create consistency across the transportation infrastructure of the EU’s member states—difficult to ensure without a coordinating central entity. The CEF and CF are part of the EU’s sustainable development efforts, with both funds committed to ensuring that the infrastructures they build and maintain are energy efficient and cause minimal environmental impact. Though they spend toward slightly different project sets than the HTF—for example, the CEF also supports telecommunications and energy projects—the underlying principle is the same: infrastructure projects generally do not arise sufficiently from industry alone.85

4. Real-world infrastructure policy for OSS

The open-source ecosystem and its many stakeholders have long recognized the need for sustained, stable support to projects and responded with the creation of nonprofits and institutions to provide that. Government support, tailored to both community needs and government priorities such as security or innovation, can provide robust, stable backing for the existing patchwork of organizations and projects in the OSS world. This section describes several existing policies for governments to take inspiration from and work with rather than assuming the whole burden of reinventing the wheel of OSS policy.

This section samples relevant policies—sourced principally from the Center for Strategic and International Studies’ (CSIS) newly updated dataset, Government Open Source Software Policies86—in three categories synthesized from the three analogies above: government support and funding, ecosystem risk practices, and responsible use by OSS consumers. The CSIS dataset also described other kinds of policy outside these three categories—some establishing offices within governments dedicated to managing various OSS functions, often termed Open Source Program Offices (OSPOs), some requiring the open-sourcing of government-developed data and solutions, and others describing procurement practices.

Government support and funding

Policies establishing government support and funding for OSS were the most common of the three categories discussed here from the CSIS dataset, though there were still relatively few instances of these compared to the many procurement advisories and requirements it contained. Support for open-source projects in many ways is a natural extension of several government priorities—a search for non-proprietary solutions, support for acquired systems, and the logical conclusion of education and training programs—so their relative abundance makes sense. However, the fact that more policies discuss OSS procurement than OSS support is telling—just as in industry, it seems that governments are using OSS more than they are contributing back. The reasons for usage are often clearly laid out: "to reduce the dependency on proprietary software,"87 to reduce costs,88 and to improve interoperability. Approaching OSS as infrastructure adds depth to this discussion—there are great benefits to using OSS solutions (and the vast majority of proprietary code incorporates OSS as well), and that usage creates a need to support the underlying projects. Though government support lags government usage, there are some models of supporting OSS projects—even those not acquired and used by government—that can help create the increased market choice so many procurement policies seem to desire.

In Germany, several organizations work to channel government funding toward OSS projects. The German Ministry for Economic Affairs and Climate Action funds Germany's Sovereign Tech Fund, which launched a pilot round for funding open digital infrastructure in October 2022,89 and the Prototype Fund, which supports public-interest technology—requiring that it be made available under open-source licensing—with investment coming from Germany's Federal Ministry of Education and Research.90

There are nascent efforts in the United States too: the National Science Foundation’s Pathways to Enable Open-Source Ecosystems solicitation program launched in May 2022 to support governance organizations at the ecosystem level.91 The Open Technology Fund receives funding from the US Agency for Global Media among other entities, part of which goes toward “advancing global Internet freedom” through supporting open-source projects relevant to its mission.92 NASA’s Open-Source Science Initiative funds and adjusts policies to encourage open and collaborative scientific processes, including through supporting open-source software and related infrastructure.

More broadly across the world, a 2013 Argentinian policy established a fund with over $2 million in initial backing to build OSS projects.93 The Austrian government, in 2016, offered prizes of up to €200,000 for OSS projects in various categories—the first round of funding shelled out €3.6 million across 31 projects.94 One fund in Malaysia, set up in 2003, allocated $36 million for start-ups developing OSS, but further information on the project is scant.95 These funds often support the establishment of OSS projects fulfilling an established need. While this support is generally useful, it is worth noting that, as important as funding project creation is, supporting existing projects is even more vital to the long-term sustainability of the ecosystem.

Ecosystem risk management

Though no government policies in the dataset explicitly focus on assessing ecosystem-wide risk in the OSS world, interest in dedicated open-source offices provides a possible avenue toward this activity. Recently, governments have begun turning an eye toward formal offices dedicated to the many open-source activities they may undertake, such as project support, license compliance, security evaluation, incident response, public awareness, and providing clear points of contact for government employees and OSS developers. These OSPOs originate in industry as departments for coordinating all manner of open-source efforts.96 The World Health Organization recently established an OSPO, for example,97 and the European Commission’s Open Source Software Strategy for 2020–2023 includes establishing an Open Source Program Office within the commission to implement relevant OSS actions of the strategy.98

Other governments are focusing on information gathering. This year, the Japanese Ministry of Economy, Trade, and Industry released a report from a task force on software security that studied private-sector reliance on OSS. Government initiatives that study the open-source ecosystem can provide crucial information to guide future investment and support of OSS.99 Similarly, the proposed bill S.4913, the Securing Open Source Software Act of 2022, includes a requirement for the US government to conduct a study assessing its own reliance on OSS as well as its ability to accurately track those dependencies through SBOM data, existing government programs like the Continuous Diagnostics and Mitigation (CDM) program run by CISA, and other sources of information.

Responsible use

Policies that focus on patterns of responsible use in the OSS landscape were scant. One Armenian document concerning the country's principles of internet governance noted the central role of decentralization in the development of the internet and held that regulation, specifically of OSS, should be light, if necessary at all.100 Other instances of policy embracing the cultural values of OSS also exist, and the preference of governments to open-source their own solutions and code is notable. However, an explicit discussion of incentive and responsibility structures in the OSS ecosystem is somewhat lacking. Notably, White House conversations about the forthcoming National Cyber Strategy have not included any new mechanisms to explicitly support OSS, addressing little more than a carve-out to protect OSS developers from any potential liability regime: a good and warranted item but underwhelming against the totality of need in the ecosystem.

While government policies for OSS exist, they focus more on the government as a consumer than as a regulator or supporter. Government procurement preferences seem driven by a desire for autonomy from large vendors and expensive licenses, and they show a pattern of little procedural upstream contribution. Though some funding models exist, by and large, government policies explicitly addressing OSS seem to focus on what government purposes OSS can serve and what transparent values it might inspire in government practice.

5. Crafting infrastructure policy for OSS

OSS is not much different from proprietary software: all code can be developed more securely, and the security risks OSS faces are common across most digital systems. For OSS, the differences come in the relationships between open-source consumers—from government to the private sector to end users—and the projects they rely on. The lack of clear transactional relationships and the deeply influential role of the diverse, ever-changing contributor community are a challenge for policy and industry to navigate and support sufficiently. The result is an ecosystem that has both enabled digital innovation and often suffered from overburdened developers and under-resourced communities and projects.

Encouraging sustainable OSS participation

The recommendations of this section aim to use policy levers and industry collaboration to provide models for sustainable usage of and support for the OSS ecosystem, emphasizing responsibility driven by usage.

Start by improving government consumption

In the United States, the federal government is not just a regulator but also an enormous consumer of OSS. This enormous use case provides a valuable opportunity for the federal government to test many of the recommendations below on its codebases, to the immediate benefit of the federal enterprise. If the federal government is to truly assign as much importance to the OSS ecosystem as it has recently signaled,101 it might consider creating institutional entities with an explicit mandate to focus on the federal government's use of and support for OSS, modeled after OSPOs recently established by other organizations. For the United States, a whole-of-government OSPO-like entity could be established within OMB or (with a focus on government procurement) the General Services Administration (GSA). Alternatively, OMB and GSA could provide a coordinating function for smaller OSPO-like entities established in each agency. Such a program could take inspiration from the OPEN Government Data Act, which requires the designation of Chief Data Officers within federal agencies,102 by requiring agencies to designate a Chief Open Source Officer (COSO).

In addition to setting agency policy around the use of OSS and managing relationships with relevant OSS communities and vendors, agency COSOs could also contribute to a whole-of-government OSS strategy through a structure like an inter-agency Chief Open Source Officers Council, modeled after or housed within the Chief Information Officers Council. S.4913, if enacted into law, would pilot OSPO-like programs in the federal government by directing OMB to select agencies to create such entities to develop standards for their agency's use of OSS and engagement with the OSS ecosystem.103 EU member states, where collaboration with the OSS community and consumption of OSS similarly need not tie as closely to cybersecurity regulators, could well replicate this model.

Regardless of whether they have an OSPO or an existing commitment to OSS consumption and development (in the United States, see entities like the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA)), all agencies should also encourage and fund travel to OSS community forums for government employees engaged with software development, procurement, and/or technology governance. The social graph of a project defines OSS development, maintenance, and growth. The security of this code and its sustainable integration into government software projects would benefit greatly from wider government employee participation in the myriad conferences and governance bodies that populate the OSS ecosystem. While this may be a practical challenge for some defense and intelligence organizations, it is an important, meaningful way to integrate government needs and contributions more fully into OSS communities and help identify risks and opportunities for sustainable use.

Support private-sector consumption

Develop an OSS Usage Best Practices framework through the National Institute of Standards and Technology (NIST) with significant industry input. Such a framework could include and build on the proposed OSS risk assessment guide recommended by S.4913.104 However, it should also incorporate consideration of upstream contribution as a foundational measure of organizational maturity around OSS usage. Included among its recommendations should be an organizational plan for sustainable OSS use.

This document would serve as a reference for further policy attempts to incentivize investment in OSS sustainability. For example, government procurement processes could include consideration of for-profit vendor compliance with the NIST OSS Usage Best Practices framework. By framing compliance as a consideration rather than a hard mandate, the goal would be to incentivize for-profit providers without precluding nonprofit and individual contributors lacking the resources to develop a compliance program. A similar framework, which considers financial contributions to upstream projects, could help guide the application of tax credits used to incentivize donations.

Industry, as well, could take a leading role here, developing a common, voluntary OSS-engagement plan across entities under the auspices of a coordinating nonprofit such as OpenSSF. Important too would be including non-IT companies in these considerations. Though understandably less fluent in the technology sphere, large industry manufacturers and other corporations nonetheless have a considerable dependence on OSS projects. Where such large, non-IT companies have their own robust IT resourcing and capacity in-house, they too should build and contribute to models for risk management based on discarding the assumption of availability or functionality of critical OSS packages.

A NIST guide on best practices for OSS usage could also help guide federal developers and agencies in their relationships with vendors, key projects, and larger risk-management practices. Further, federal developers’ and procurers’ experiences with using such a framework could help inform future iterations of the document and bring industry best practices more fully into the federal enterprise.

Protect OSS Good Samaritans

Private-sector firms with existing investments in the open-source community (e.g., Google, Microsoft/GitHub, and IBM/RedHat) and well-established OSS governance and security organizations (e.g., OSI, the Open Source Collective, OpenSSF, and the Internet Security Research Group) should lead on drafting a best-practice standard for contributing to and supporting OSS projects. This document should help define the standard of care associated with volunteer contributions. This standard is not a form of liability protection but a way for firms to design policies encouraging volunteer contributions to OSS packages in a way that best meets corporate risk appetite. These volunteer commitments are an important way to contribute back to OSS used by companies and are a form of contribution-in-kind to support packages used by others.

Addressing systemic risk

The rapid pace of digital innovation and the informal relationships between OSS dependencies and their downstream beneficiaries have led to a digital ecosystem prone to stacking risk in a relatively small number of critical OSS projects and have created challenges for nonprofits, governments, and companies seeking visibility into those points of concentration. These recommendations aim to align government and industry in systematically identifying key dependencies meriting direct support and investment without adding undue regulatory burden. They take inspiration from the FSOC and ESMA entities in the capital markets analogy.

Establish an Office of Digital Systemic Risk Management (ODSRM)

Modeled after the FSOC or ESMA described above, a central government office would, in close cooperation with industry and OSS community stakeholders, work to identify critical OSS dependencies both in the federal civilian agencies and across critical infrastructure sectors. This office might eventually mature from identifying these points of concentration to stress testing their compromise (either malicious or otherwise) and the related, wider ecosystem effects, modeling and exercising through variations on future log4shell-style events using real-world dependency information.

In the United States, this office should have broad authority to draw on federal expertise wherever it might reside, from the National Security Agency to CISA, and should focus both on identifying specific critical OSS projects or systems and on methods for producing and collating dependency data that can highlight nodes of risk. Such data might, for instance, include pooling SBOMs provided to government during its procurement processes. Given the large mandate this office would eventually assume, implementation might best start with pilot programs focused on mapping out the dependencies of one or more federal IT systems. Existing programs to map federal digital assets, and existing federal vendors, would be natural partners in the project. However, in the latter case, the implementing agency, perhaps with congressional support, would need to overcome obdurate industry resistance to the inclusion of dependency data about software products in the form of software bills of materials, despite such data already being regularly generated and consumed. While the array of use cases for these SBOMs is still maturing,105 large organizations, like New York Presbyterian Hospital,106 already use them regularly. And there is a healthy supply of software tools to generate and process them, employed by for-profit and nonprofit entities alike.107

Lessons learned from the analysis of one system could inform a widening aperture across other government systems and eventually across the broader digital domain, particularly considering that there may be significant overlap of key OSS dependencies between similar systems. Establishing an ODSRM is an opportunity for government to better map its own digital systems and assets before using lessons learned in that process to inform its approach to a larger, industry-wide attempt at helping to identify key critical dependencies.
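
As a hypothetical sketch of that first analytical step, assuming a directory of pooled CycloneDX-style SBOMs (one per system), the snippet below simply counts how many systems reference each component to surface candidate points of shared dependence; it illustrates the concept rather than prescribing any agency's method.

# Sketch only: count how many distinct systems' SBOMs reference each component
# to surface candidate points of concentration. Directory name and the
# CycloneDX-style layout are assumptions for illustration.
import json
from collections import Counter
from pathlib import Path

def shared_dependencies(sbom_dir: str, top_n: int = 10) -> list[tuple[str, int]]:
    counts: Counter[str] = Counter()
    for path in Path(sbom_dir).glob("*.json"):
        with open(path, encoding="utf-8") as f:
            sbom = json.load(f)
        # Count each component once per SBOM (i.e., once per system).
        names = {c.get("name", "unknown") for c in sbom.get("components", [])}
        counts.update(names)
    return counts.most_common(top_n)

if __name__ == "__main__":
    for name, systems in shared_dependencies("./pooled-sboms"):
        print(f"{name}: referenced by {systems} system(s)")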

Provide resources with security and sustainability in mind

Throwing funds at a problem is rarely a sufficient fix, but where investment shortfalls exist, it can help. These recommendations focus on guiding policymakers toward a resourcing model that helps cover funding gaps, particularly around long-term maintenance and support rather than the creation of new OSS projects, while accounting for non-financial resources (e.g., labor time, expertise) and financial support for important non-technical factors (e.g., encouraging contributor community depth and diversity, governance, and good package management policies) and relying on community expertise in directing resources toward critical projects.

There are three important factors to consider in developing schemes for government support to OSS as infrastructure. First, where resources go is as important as how they get there. Direct funding and government-to-project contributions may work well for areas of urgent or existential need, but OSS projects will benefit most from consistent support delivered with local knowledge about the project, its maintainer community, and its user base. Few, if any, government-led schemes will be able to achieve this level of local knowledge on their own, so resources should mostly flow through trusted intermediaries like software foundations (e.g., Apache, Linux, and Eclipse) and nonprofit groups (e.g., Open Source Collective and the Internet Security Research Group) as well as selected university programs.

Second, support must be sustainable. One of the difficulties of private-sector funding for OSS projects and their security is that, outside of a handful of exceptions, crisis has been the catalyst for much of this support. Monies flow to projects and project classes affected by an ugly vulnerability or momentary disaster without the promise of consistent, long-term commitment that project owners can plan and build around. The good work of several software foundations across the OSS ecosystem is a function of both the resources they bring and the stability they offer.

Third, it bears repeating that resources need not just be financial. Dollars and euros are fungible and necessary—volunteer labor can only bring OSS projects so far and might not account well for the specific technical skills or experience needed to audit code or management and governance processes. Governments, generally, possess a scale of financial power available to few in the private sector. But governments also have other policy levers. Changes to government policy can reduce barriers to sustainable OSS adoption, open new opportunities for agency- and government-employee-level contributions back to OSS projects, and punish abusive or malicious behavior targeting OSS communities. These are non-monetary contributions to the long-term security and sustainability of OSS and are important alongside financial support.

With that in mind, this report offers three final recommendations on how to shape government support for OSS, keeping security and sustainability as the key goals, instead of massive feature expansion or redevelopment.

Target of opportunity

Governments with the financial and organizational wherewithal should create target-of-opportunity funding programs to support OSS security. The goal of this funding is to award resources in a targeted manner, determined by government need, to OSS projects and activities. These awards should be rooted in criticality and help account for urgent needs, ideally in anticipation of, but perhaps in response to, a crisis. Criticality can be determined by an entity like the ODSRM and used to guide single-agency or cross-government resourcing schemes. Smaller than the OSS Trust discussed below, a target-of-opportunity funding pool should scale into the single-digit or tens of millions of dollars, allowing governments to resource security and compliance requirements that might fall on OSS programs as well as urgent mitigations and responses to incidents.

In the United States, such a program should be run by the federal agency best positioned to assess and respond to insecurity in technologies supporting critical infrastructure and broad swaths of society—CISA, under the US Department of Homeland Security (DHS). Congress, in S.4913, already views CISA as the logical home for tracking the use of OSS across the federal government and assessing the risks posed to OSS and other software. CISA should have the resources to support the implementation of those efforts and support the OSS projects identified as critical dependencies along the way.

Establish the OSS Trust

Recognition of OSS as the digital infrastructure underneath myriad economic and social activities entails a collective acknowledgment of the failure to date to support it as such. Across national boundaries, open-source code generates and captures considerable value without consistent government backing for either the most critical security updates or long-running code maintenance and improvement. New resources will not solve every problem faced by OSS maintainers, and the intention of government support of this kind is not to rewrite the economic relationship between the maintainers of free and "as-is" code and their users.

The OSS Trust should be a mechanism for governments to provide consistent support for the security of OSS code, the integrity of OSS projects, and the health and size of OSS maintainer communities. These funds should scale into the hundreds of millions, enabling broad training and education programs, supporting security reviews and mitigation for hundreds of projects at a time, and bringing more maintainers and contributors into OSS communities. These funds can help facilitate widely useful security research and cover the costs associated with long-term hardening, like rewriting a project in a memory-safe language. The Trust's thesis of what to support should center on activities that produce sustainable, long-term improvements as well as less-well-funded aspects of secure OSS projects like effective governance practices.

In the United States, NIST could aid this effort by developing an inclusive list of metrics by which to gauge the health and needs of OSS packages and communities in close cooperation with extant industry initiatives such as OpenSSF’s Scorecard project, SLSA, S2C2F SIG, CHAOSS, and others.108 It might focus on determining what best practices signal project maturity and sufficient resourcing, and what shortfalls are most critical for downstream users and thus worth prioritizing in upstream support. This framework should not supplant, but rather aggregate and synthesize extant industry measurement initiatives and could later be part of vendor assessments and best practices documents in government procurement processes.

In the United States, the OSS Trust should rely on both regular congressional appropriations and the diversion of a small portion of corporate taxes. Depending on the structure of the receiving organization, Congress could also consider incentivizing individuals and corporations to contribute to the fund or similar organizations through tax-credited donations. Given the immense room for improved support in the OSS ecosystem, such a fund need not begin at its final potential size, able to satisfy all needs at once, but can grow incrementally, taking the opportunity to refine its grantmaking processes and partner-organization relationships as it grows.

This can and should eventually be an international scheme. The German-government-backed Sovereign Tech Fund already works to fund OSS projects to “support the development, improvement, and maintenance of open digital infrastructure.”109 This and similar initiatives at the EU member state level could be subsumed into a broader international effort in the near future or grow in isolation and work to coordinate with US and other national programs absent immediate consolidation.

Like the HTF, CEF, or CF, such a fund should work with intermediaries to identify the best recipients—the central government need not try to locate decrepit concrete and unaddressed potholes itself, but rather can improve the resourcing of organizations with that on-the-ground expertise, relying on the existing web of intermediaries and support groups already present and growing in the OSS ecosystem.

Adopt-a-package

Private-sector and nonprofit leaders in OSS should define schemes by which firms and other donors can "adopt" important unmaintained packages and provide resources to support their ongoing maintenance, vulnerability mitigation, and potentially rewrites into memory-safe languages or other structural updates. This model sits between the urgent needs met by a target-of-opportunity fund and the long-term focus and friendliness to cross-cutting research of the OSS Trust. The government can contribute funding and support to existing initiatives or construct one in parallel, similar to the Federal Emergency Management Agency's (FEMA) reservist program. Government teams might supplement private-sector groups or focus on assisting incident response and resourcing for projects critical to government functions.

One entity already working toward this end is the for-profit startup thanks.dev, which looks to connect users and patrons of open-source packages with a simple way to fund those packages and the packages they depend on. The company builds on several layers of deep dependency graphs using existing bill-of-materials-like data. That part is crucial—because of the web of dependencies across OSS, funding standalone packages is often not enough to drive resources everywhere they are needed. Log4j is a great example of a piece of a whole that turned out to be extremely important in the aggregate but may not have attracted high-profile attention on its own.
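
As an illustration of the underlying idea, rather than of thanks.dev's actual mechanism, the toy sketch below pushes a funding amount down a small, invented dependency graph, keeping a share at each package and passing the remainder to its direct dependencies. The graph, split ratio, and amounts are assumptions for the example.

# Toy sketch of funding flowing along a dependency graph. The graph,
# passthrough ratio, and dollar figures are invented for illustration.
from collections import defaultdict

# Hypothetical dependency graph: package -> list of direct dependencies.
DEPS = {
    "my-app": ["web-framework", "logging-lib"],
    "web-framework": ["http-parser", "logging-lib"],
    "logging-lib": [],
    "http-parser": [],
}

def allocate(package, amount, passthrough=0.5, totals=None):
    """Keep a share locally and pass the rest evenly to direct dependencies."""
    totals = defaultdict(float) if totals is None else totals
    deps = DEPS.get(package, [])
    kept = amount if not deps else amount * (1 - passthrough)
    totals[package] += kept
    if deps:
        share = (amount - kept) / len(deps)
        for dep in deps:
            allocate(dep, share, passthrough, totals)
    return totals

if __name__ == "__main__":
    for pkg, dollars in sorted(allocate("my-app", 1000.0).items()):
        print(f"{pkg}: ${dollars:.2f}")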

6. Conclusion

We do not build most of the code we use. In realizing this and accepting it for the indefinite future, OSS and the many communities developing and maintaining it should loom large in any analysis of cybersecurity and economic health. Open source constitutes the infrastructure to which we trust sensitive data, critical social programs, and cycles of economic development and innovation. That such infrastructure is weakening,110 and in some places crumbling,111 from the weight of demands placed on it should no more shock us than the imagery of bridges collapsing and reports of poisoned groundwater due to inadequate sustainment combined with widespread use.

None of this report reflects a belief that OSS is inherently insecure, but rather that it is uniquely central to modern digital systems and that relationships with the OSS community are necessarily, and substantively, different from those government has grown accustomed to with industry and industry within itself. Sustainable use emphasizes the user's responsibility for much of the risk associated with software use, including OSS, and addresses OSS-specific features of development and contribution possible only with open-source code. Addressing systemic risk is an important step for policy efforts to support the security and sustainability of OSS projects with an accurate picture of the considerable interdependency between code bases. Finally, governments must step up to support OSS as the infrastructure that it is. These resources should come alongside expanded private-sector support and can manifest in targeted formats as well as a more general support model, the OSS Trust. OSS is infrastructure, and the provision of support for it as such will permit more rapid adoption and considerable innovation in even critical domains of economic and government activity.

Most of us too often take for granted the everyday things, the problems well solved. Yet ignorance of them, and the failure to protect them, come with hefty price tags. Log4Shell, a rash of open-source package incidents,112 and the chorus of concern among OSS maintainers about an economic model that extracts value from their labor without committing back are symptoms of the choice to remain in such ignorance. The risk is the slow collapse of a vibrant ecosystem and a future marked by falling diversity in, and capability for, digital development outside a concentrated handful of technology firms, imperiling national security and economic competitiveness in equal measure. The good news is that this collapse is neither necessary nor permanent.

Change is possible, indeed much needed, but it must come in the form of investment as well as policy. For pennies on the dollar of the value it generates, a healthy and resilient open-source ecosystem can be sustained, and such investments provide a means to secure essential digital infrastructure against a myriad of threats. Strong investment in, and well-informed policy about, OSS is, above all, a gift to the present, not just an abstract donation to future generations, one that would protect communities throughout the world.113

About the authors

Sara Ann Brackett is a research assistant at the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She focuses her work on open-source software security, software bills of material, and software supply-chain risk management and is currently an undergraduate at Duke University.

Acknowledgements

The authors owe a continuing debt of gratitude to the members of the Open Source Policy Network whose growing collaboration on open-source security and sustainability policy is an important part of this work. Major thanks to the Open Source as Infrastructure Working Group, including co-sponsor Open Forum Europe and its Executive Director Astor Nummelin Carlberg, whose insights shaped this report across 2022 and 2023. Thank you to Abhishek Arya, Jack Cable, Brian Fox, John Speed Meyers, Sarah Novotny, Jeff Wayman, and David Wheeler for their feedback on this and earlier drafts. Additional thanks to Kathy Butterfield, Estefania Casal Campos, and Abdolhamid Dalili for developing the report’s graphics; to Nancy Messiah and Andrea Raitu for its web design; to Donald Partyka for graphic and document design; and to Jen Roberts for coordinating the project’s many visual and design elements. This work is made possible with support from Craig Newmark Philanthropies, Schmidt Futures, the Open Source Security Foundation, and Omidyar Network.

CSI produced this report’s cover image in part using OpenAI’s DALL-E program, an AI image-generating application. After generating the draft image-prompt language, the authors reviewed, edited, and revised the language to their own liking and take ultimate responsibility for the content of this publication.

Appendix: Survey results

As part of this report, the Atlantic Council and the Open Source Policy Network distributed an anonymous survey to several OSS governance, policy, and security communities, including through the OpenSSF’s general Slack channel and Open Forum Europe’s email forum. The survey, which was open from November 20, 2022, through January 8, 2023, aimed to gather attitudes on OSS policy and security from OSS maintainers, developers, and stakeholder communities closer to the problem set than policymakers in Brussels or DC. Despite being open to over two thousand potential respondents, the survey only achieved a sample size of forty-six, limiting the insight into community priorities that it could provide. Nonetheless, there were some noteworthy trends in the responses, and the Atlantic Council and Open Source Policy Network will continue to gather outside perspectives and sentiment trends in this manner.
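For readers checking the arithmetic, the percentages in the tables below are each answer’s respondent count divided by the total sample of forty-six. A minimal sketch, using the counts from question 1, reproduces the reported figures:

```python
# Minimal sketch: reproduce the reported percentages from raw answer counts.
# Counts are taken from question 1 below; n = 46 respondents overall.
RESPONDENTS = 46

affiliation_counts = {
    "Government": 2,
    "ICT Vendor": 16,
    "Non-ICT Vendor": 4,
    "Independent Researcher": 3,
    "Academia": 4,
    "Non-profit organization": 9,
    "Other": 8,
}

for answer, count in affiliation_counts.items():
    share = 100 * count / RESPONDENTS
    print(f"{answer:25s} {count:2d}  {share:4.1f}%")  # e.g., "ICT Vendor  16  34.8%"
```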

1. Main respondent affiliation
Government: 2 (4.3%)
ICT Vendor: 16 (34.8%)
Non-ICT Vendor: 4 (8.7%)
Independent Researcher: 3 (6.5%)
Academia: 4 (8.7%)
Non-profit organization: 9 (19.6%)
Other: 8 (17.4%)
2. Respondent’s primary role with respect to OSS (select all that apply)
Maintainer: 29 (63.0%)
Contributor: 32 (69.6%)
User: 34 (73.9%)
None of the above: 5 (10.9%)
3. If you had to pick one party to assume more responsibility than they currently do for security outcomes associated with the use of open-source software, which would it be?
ICT Vendors: 9 (19.6%)
All Industry: 20 (43.5%)
OSS devs: 2 (4.3%)
Foundations/Non-profits: 5 (10.9%)
Gov: 8 (17.4%)
Other: 2 (4.3%)
4. Which is the most useful characteristic for assessing the health and well-being of an open-source community, if you had to pick just one?
Project activity: 7 (15.2%)
Contributor community: 5 (10.9%)
Maintainers: 5 (10.9%)
High-activity contributors and maintainers: 12 (26.1%)
Community principles: 5 (10.9%)
Security expert involvement: 4 (8.7%)
Other: 8 (17.4%)
5. Which is the most critical function of an Open-Source Program Office (OSPO) if you had to pick just one?
Public education and awareness: 5 (10.9%)
Public-private coordination management: 4 (8.7%)
Funding: 14 (30.4%)
Licensing + auditing policies: 9 (19.6%)
OSS engagement: 9 (19.6%)
Other: 5 (10.9%)
6. Where do you see the tooling or information gap that might be most harmful to the OSS ecosystem?
Project metadata: 1 (2%)
Usage data: 11 (24%)
Vulnerability reporting: 3 (7%)
Vulnerability info access: 2 (4%)
Security testing: 12 (26%)
SBOM generation: 5 (11%)
Other: 12 (26%)
7. Please sort these methods of external support for, and investment in, open-source projects from most useful to least useful to open-source maintainers and developers, in your opinion and relative to each other. (Respondent counts by rank, from 1 = most useful to 5 = least useful.)
Security testing/assessments: 14, 14, 12, 5, 1
Bug-bounty programs: 1, 12, 11, 11, 11
Security info-sharing and procedures: 2, 17, 13, 11, 3
Incident response support: 5, 16, 9, 13, 3
Direct funding: 25, 8, 6, 6, 1
8. Please sort these heuristics for assessing the risk of using a specific OSS package from most useful to least useful, in your opinion. (Respondent counts by rank, from 1 = most useful to 7 = least useful.)
Project popularity: 7, 11, 8, 10, 2, 3, 5
Community size and activity: 13, 13, 11, 6, 1, 1, 1
Cost of maintenance and usage: 3, 7, 9, 7, 11, 1, 8
Fulltime developer count: 9, 11, 10, 8, 4, 2, 2
Recent significant vulnerabilities: 7, 11, 12, 7, 3, 6, 0
Number of corporate sponsors: 4, 4, 7, 8, 6, 12, 5
Number of individual sponsors: 2, 3, 5, 5, 14, 9, 8
9. Please sort these methods of external support for, and investment in, open-source projects from most useful to least useful for the security of downstream users. (Respondent counts by rank, from 1 = most useful to 5 = least useful.)
Security testing/assessments: 18, 13, 10, 2, 3
Bug-bounty programs: 1, 12, 10, 12, 11
Security info-sharing and procedures: 4, 15, 15, 2, 10
Incident response support: 7, 17, 8, 12, 2
Direct funding: 22, 8, 6, 6, 4

How much do you agree or disagree with the following statements?  

10. A government role in supporting the open-source ecosystem is necessary for its long-term sustainability and success.
Strongly Agree: 16 (34.8%)
Agree: 14 (30.4%)
Neutral: 8 (17.4%)
Disagree: 5 (10.9%)
Strongly Disagree: 3 (6.5%)
11. Government support must include direct financial investment to ensure the open-source ecosystem’s long-term sustainability and success.
Strongly Agree: 14 (30.4%)
Agree: 18 (39.1%)
Neutral: 8 (17.4%)
Disagree: 4 (8.7%)
Strongly Disagree: 1 (2.2%)

12. Tell us about your bogeyman – where do you see the most risk across the OSS community? Answers here can reflect either security risks, dangers posed by policy, or other concerns. 

• Moving software to memory safe languages and actionable OSS supply chain management are the highest risk issues, IMO.
• Lack of proper project governance, in particular for accepting commits.
• Rogue maintainers who sabotage their own work for whatever reason.
• siloing of information about os production and consumption leading to ineffectual allocation of support/resources. lack of coordinated engagement by all relevant stakeholders: the community, foundations and other industry bodies, government and consumers (especially large global ones)
• I fear that the burden (via law/policy) of ensuring secure software will be set unrealistically (zero bugs) and fall (with serious consequences) on individual contributors. This would effectively kill the OSS ecosystem by creating huge disincentives for anyone to be involved.
• The volume of mission critical code that is written in a memory unsafe language is highly alarming – it’s so bug-prone and those bugs are often part of an exploit chain.
• lack of security awareness and efforts by OSS developers.
• Tragedy of the commons, and assumption that someone else will do “it”. Putting too much of the burden on volunteer maintainers. Companies shouldn’t try to require too much of the free projects that they are using. Any interventions must come with strong community incentives.
• Increasing and poorly tracked dependency on projects, in some cases individuals, misalignment of funding and resources, treating OSS as a public good (gov investment) is maybe sound, consider tax concepts (really), who benefits more should pay/fund more, cui bono, shouldn’t be a complete gov subsidy.
• Transitive dependencies, where users evaluate the parent OSS project, but not all of its dependencies to see if they are well maintained and following best practices.
• Funding. The world is capitalism, and it is not practical for critical open source maintainers to focus on that job full-time without capital.
• NULL
• Users of open source don’t understand that in many/most cases that the software isn’t supported in the same was commercial software is. Example, I recently say a user ask about when some vulnerabilities that have been published would be addressed in the project. This project is widely used and has critical vulnerabilities in it. The single maintainer’s response was “it simply depends on my spare time.” Critical security issues in what is likely critical software for some orgs and it will be addressed when someone has some spare time. That’s not a formula for highly secure software.
• Education and knowledge gap
• The Jeeper Creeper
• Government attempting to regulate an anti-culture which is based entirely on the foundations of helpfulness, novelty, and innovation. Open source is not industry, it is not corporate, and the ideals of it are often at odds as those using it. It’s like volunteer EMTs or Good Samaritans. There should be support and protections for those that do the reasonable right thing without introducing a burden on them.
• Security loopholes should addressed with caution and strict measures.
• comprehensive and aligned and equal support for both upstream creators of open source and downstream consumers is critical
• Risk: death and burnout. We are currently ignoring both in the name of security and that’s going to bite us.
• Funding. Governments should require a % of profit – not even revenue, just profit – be invested into their open source stack.
• Trey Herr really worries me. Don’t let him near a command line.
• The biggest risk is automation without proper processes and workflows in-place. Automating a process incorrectly is a greater risk than not doing it at all.
• Biggest risk across OSS community is sustainable – having x-omega and free security trainings is nice but research has shown most of OSS projects have a single maintainer. How can you expect a single maintainer to maintain his/her project and also spent time on security considerations? We need an open source way to give OSS usesr (especially large enterprise) easy insights into OSS usage so they then can undertake action to support the OSS projects vital/critical to them (have seen people use OSS Review Tookit for this)
• One of the most significant issues is a cultural one. Today, most conversations around open-source software still put too much emphasis on the community aspect and define it as some charity. The solutions are usually related to increasing long-term volunteer contributions from corporations or individuals. However, if the open-source initiatives had the necessary financial resources, like any other businesses, they would already do their best to minimize the risks, hire the needed talents and produce a healthy software solution. Hence, we should recognize the overall economic value of open-source software, see it as a regular business activity in which entrepreneurs contribute to digital public goods, and address investment coordination issues around it. Once the open-source ecosystem receives adequate funding, the competition in the market should sort out the rest.
• Projects fail or are mismanaged due to lack of organizational support.
• Not having an asset list of what actually the enterprise has
• Insider risk or the malicious maintainer – Open source projects can switch hands or be influenced by anyone despite their motivations or backgrounds. This is an incredible difficult security risk to address for OSS.
• Government overreach would be a concern. Standards would be helpful.
• A monopoly on the code hosting services
• security fatigue due to vendors overselling BS, generally the amount of bad security vendors and products
• Lack of direct funding for core nodes, central components in wide use. Lack of practical contributions by ENISA, see e.g. their analysis of Heartbleed in someoneshouldhavedonesomething style, bugbounty programs on a too small scale without larger buy-in of officials, no large scale strategic investment in open source with regards to platform dependencies of the economy, that is, no learnings from the Putin gas disaster, dependencies from China in the hardware sector.
• Security risks. Code bases not checked by real security experts.
• Throwing OSS under geopolitical bus
• Lack of critical thinking and understanding of biaised axioms
• Government/large corporate business users failing to financially support the open source projects they use. Government stepping in and trying to regulate/control a system that does not want or need this. Government depts. trying to be software developers.
• Funding Public, Security
• software patents
• An inability to objectively and discrete measure risk assocaited with different OSS projects.
• It’s a fact that very few projects have been undergone independent security review. More funding should go into initiatives that can do that. Furthermore, even “well-supported” projects are prone to vulnerabilities and exploits; so projects need to be consistently evaluated and reviewed based on their risk and usage.
• License changes in existing and widely used open source components and libraries. For example, Akka is changing its license from OSS license to a commercial license. All the other OSS components and libraries depending on Akka need to change Akka to some other library or try to meet the requirements of the new license (which is not always possible).
• Regulation that fails to account for the dynamics of the work project that is open source.
• Most surveys of this ilk have a common top blocker: time available to address this priority with everything else to be done. While there are such time constraints amongst OSS devs and maintainers, the risks are high that security issues won’t get addressed in the optimal way.
• Blindly relying on projects without ensuring they have a long-term viability.
• The biggest risk I see is in continuity. If the primary maintainer(s) of a popular project leaves the project for whatever reason (burn-out, interest changes, death, etc.), what can the overall open source community to do help that transition?

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    “Critical Digital Infrastructure Research,” Ford Foundation, accessed January 12, 2023, https://www.fordfoundation.org/campaigns/critical-digital-infrastructure-research/.
2    Full Committee Hearing: “Responding to and Learning from the Log4Shell Vulnerability,” US Senate Committee on Homeland Security & Governmental Affairs,  February 8, 2022, https://www.hsgac.senate.gov/hearings/responding-to-and-learning-from-the-log4shell-vulnerability; Hearing: “Securing the Digital Commons: Open-Source Software Cybersecurity,” House Committee on Science, Space, and Technology,  May 11, 2022, https://science.house.gov/2022/5/joint.
3    Nadia Eghbal now goes by Nadia Asparouhova; more on her work can be found at https://nadia.xyz/.
4    Eric S. Raymond, The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary (O’Reilly Media, Inc., 2001).
5    To the reader, as part of this report, the Atlantic Council and the Open-Source Policy Network distributed an anonymous survey to several OSS governance, policy, and security communities, including through the OpenSSF’s general Slack channel and Open Forum Europe’s email forum. The survey, open from November 20, 2022, through January 8, 2023, aimed to gather attitudes on OSS policy and security from OSS maintainers, developers, and stakeholder communities closer to the problem set than policymakers in Brussels or DC. Despite being open to over two thousand potential respondents, the survey only achieved a sample size of forty-six, limiting the insight into community priorities that it could provide. Nonetheless, there were some noteworthy trends in the responses, and the Atlantic Council and Open-Source Policy Network will continue to gather outside perspectives and sentiment trends in this manner.
6    To the reader, this project and the Open Source Policy Network are made possible with support from Craig Newmark Philanthropies, Schmidt Futures, the Open Source Security Foundation, and the Omidyar Network.
7    Nadia Eghbal, “Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure,” Ford Foundation, June 14, 2016, https://www.fordfoundation.org/media/2976/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure.pdf.
8    Eghbal, “Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure.”
9    Julia Ferraioli, “Open Source and Social Systems,” (blog), December 7, 2022, https://juliaferraioli.com/blog/2022/open-source-social-systems/.
10    Alison Dame-Boyle, “EFF at 25: Remembering the Case That Established Code as Speech,” Electronic Frontier Foundation, April 16, 2015, https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech.
11    “Securing Open Source Software Act of 2022,” S.4913, 117th Congress (2022), https://www.congress.gov/bill/117th-congress/senate-bill/4913.
12    Frank Nagle, “Government Technology Policy, Social Value, and National Competitiveness,” Harvard Business School Strategy Unit Working Paper No. 19-103, March 3, 2019, https://doi.org/10.2139/ssrn.3355486.
13    Karl Fogel and Cecilia Donnelly, “Open Data for Resilience Initiative and GeoNode: A Case Study on Institutional Investments in Open Source” (Washington, DC: World Bank Group, December 31, 2017), http://documents.worldbank.org/curated/en/713861563520709009/Open-Data-for-Resilience-Initiative-and-GeoNode-A-Case-Study-on-Institutional-Investments-in-Open-Source; Knut Blind et al., “Study about the Impact of Open Source Software and Hardware on Technological Independence, Competitiveness and Innovation in the EU Economy | Shaping Europe’s Digital Future” (Brussels: European Commission, September 6, 2021), https://digital-strategy.ec.europa.eu/en/library/study-about-impact-open-source-software-and-hardware-technological-independence-competitiveness-and; Brian Proffitt, “The ROI of Open Source,” Red Hat Blog, August 26, 2020, https://www.redhat.com/en/blog/roi-open-source. To the reader, while the authors of this report are not aware of replication studies validating these findings, it is worth noting that the sheer ubiquity of OSS already in proprietary offerings indicates the widespread success of the model. Whether that is due to reduced development time, crowd-sourced innovation, or other factors is not clear, however.
14    “The Open Source Definition,” Open Source Initiative, accessed January 13, 2023, https://opensource.org/osd.
15    Peter Salus, The Daemon, The Gnu, and the Penguin, (Reed Media Services, September 2008).
16    Joseph Carl Robnett Licklider and Robert W. Taylor, “The Computer as a Communication Device,” Science and Technology 76 (April 1968), 21–31.
17    befunge, GitHub, accessed January 13, 2023, https://github.com/topics/befunge.
18    Left-Pad, Nonsense Poetry Manager (npm), accessed January 13, 2023, https://www.npmjs.com/package/left-pad.
19    LibreOffice, accessed January 13, 2023, https://www.libreoffice.org/.
20    “Linux Distribution Introduction and Overview,” Linux Training Academy, accessed January 13, 2023, https://www.linuxtrainingacademy.com/linux-distribution-intro/.
21    LibreOffice.
22    ggplot2, accessed January 13, 2023, https://ggplot2.tidyverse.org/.
23    Andrew Spyker and Ruslan Meshenberg, “Evolution of Open Source at Netflix,” Netflix Technology Blog, October 28, 2015, https://netflixtechblog.com/evolution-of-open-source-at-netflix-d05c1c788429.
24    Liran Tai, “The Log4j Vulnerability and Its Impact on Software Supply Chain Security,” Snyk, December 13, 2021, https://snyk.io/blog/log4j-vulnerability-software-supply-chain-security-log4shell/.
25    Mehul Revankar, “New Study Reveals 30% of Log4Shell Instances Remain Vulnerable,” Qualys Security Blog, March 18, 2022, https://blog.qualys.com/qualys-insights/2022/03/18/qualys-study-reveals-how-enterprises-responded-to-log4shell.
26    To the reader, using the term “open-source” as a verb means to make the source code available to all, often on a code hosting platform, with GitHub being one of the most commonly used repository hosts.
27    Ferraioli, “Open Source and Social Systems.”
28    Milo Z. Trujillo, Laurent Hébert-Dufresne, and James Bagrow, “The Penumbra of Open Source: Projects Outside of Centralized Platforms Are Longer Maintained, More Academic and More Collaborative,” EPJ Data Science 11, no. 1 (May 21, 2022): 1–19, https://doi.org/10.1140/epjds/s13688-022-00345-7.
29    To the reader, these fall under the 501(c)(6) classification. Their main difference from a 501(c)(3) nonprofit is that where (c)(3) organizations must serve the public, (c)(6) organizations must serve their members. For more detail, see Internal Revenue Service, “Business Leagues,” irs.gov, accessed January 12, 2023, https://www.irs.gov/charities-non-profits/other-non-profits/business-leagues.
30    “Licenses & Standards,” Open Source Initiative, accessed January 13, 2023, https://opensource.org/licenses.
31    Alpha-Omega, Open Source Security Foundation, accessed January 13, 2023, https://openssf.org/community/alpha-omega/.
32    David Gray Widder, “Can You Stop Your Open-Source Project from Being Used for Evil?,” Overflow, August 8, 2022, https://stackoverflow.blog/2022/08/08/can-you-stop-your-open-source-project-from-being-used-for-evil/.
33    John Sullivan, “Thinking Clearly about Corporations,” Free Software Foundation, June 24, 2021, https://www.fsf.org/bulletin/2021/spring/thinking-clearly-about-corporations.
34    Sam Williams, Free as in Freedom: Richard Stallman’s Crusade for Free Software (O’Reilly Media, Inc., 2002), https://www.oreilly.com/library/view/free-as-in/9781449323332/.
35    To the reader, footnote entries offer further readings on the larger ecosystem and some of its defining debates.
36    To the reader, Dr. Tracy Miller defines infrastructure as “facilities, structure, equipment, or similar physical assets…vitally important, if not absolutely essential, to people having the capabilities to thrive…in ways critical to their own well-being and that of their society, and the material and other conditions which enable them to.” See: Tracy Miller, “Infrastructure: How to Define It and Why the Definition Matters,” Mercatus Center, July 12, 2021, https://www.mercatus.org/research/policy-briefs/infrastructure-how-define-it-and-why-definition-matters.
37    “Critical Infrastructure Sectors,” Cybersecurity and Infrastructure Security Agency, accessed January 12, 2023, https://www.cisa.gov/critical-infrastructure-sectors.
38    David Wheeler, “Securing Open Source Software Is Securing Critical Infrastructure,” Open Source Security Foundation (blog), October 11, 2022, https://openssf.org/blog/2022/10/11/securing-open-source-software-is-securing-critical-infrastructure/.
39    Steven Vaughan-Nichols, “Can the Internet Exist without Linux?,” ZDNet, October 15, 2015, https://www.zdnet.com/home-and-office/networking/can-the-internet-exist-without-linux/; “Cloud Infrastructure for Virtual Machines, Bare Metal, and Containers,” OpenStack, accessed January 13, 2023, https://www.openstack.org/; “Welcome to OpenSSL!” Open Secure Sockets Layer (OpenSSL) Project, accessed January 13, 2023, https://www.openssl.org/; Nate Matherson, “26 Kubernetes Statistics to Reference,” ContainIQ, July 3, 2022, https://www.containiq.com/post/kubernetes-statistics; “The BIRD Internet Routing Daemon Project,” BIRD, accessed January 12, 2023, https://bird.network.cz/.
40    Curl, accessed January 13, 2023, https://curl.se/.
41    Daniel Stenberg, “The World’s Biggest Curl Installations,” (blog), September 17, 2018, https://daniel.haxx.se/blog/2018/09/17/the-worlds-biggest-curl-installations/.
42    “Open Source Security and Risk Analysis Report,” (Mountain View, California: Synopsys Inc., 2022), https://www.synopsys.com/content/dam/synopsys/sig-assets/reports/rep-ossra-2022.pdf.
43    To the reader, authors tip their hats to the researchers at Chainguard for pointing this out.
44    Jennifer Bennett et al., “Measuring Infrastructure in the Bureau of Economic Analysis National Economic Accounts” (Suitland, MD: Bureau of Economic Analysis, December 1, 2020).
45    “The United States Securing Open Source Software Act: What You Need to Know,” Open Source Security Foundation (blog), September 27, 2022, https://openssf.org/blog/2022/09/27/the-united-states-securing-open-source-software-act-what-you-need-to-know/.
46    “Government Open Source Policies,” Center for Strategic and International Studies, August 2022, https://www.csis.org/programs/strategic-technologies-program/government-open-source-software-policies.
47    Sean Gallagher, “Rage-Quit: Coder Unpublished 17 Lines of JavaScript and ‘Broke the Internet,’” Ars Technica, March 25, 2016, https://arstechnica.com/information-technology/2016/03/rage-quit-coder-unpublished-17-lines-of-javascript-and-broke-the-internet/.
48    “How We Use Water,” Overviews and Factsheets, US Environmental Protection Agency, accessed January 13, 2023, https://www.epa.gov/watersense/how-we-use-water.
49    Rachel Estabrook and Michael Elizabeth Sakas, “The Colorado River Is Drying up — but Basin States Have ‘No Plan’ on How to Cut Water Use,” Colorado Public Radio, September 17, 2022, https://www.cpr.org/2022/09/17/colorado-river-drought-basin-states-water-restrictions/.
50    Ashwin Ramaswami, “Securing Open Source Software Act of 2022,” Sustain Open Source Forum, October 3, 2022, https://discourse.sustainoss.org/t/securing-open-source-software-act-of-2022/1098.
51    “Apache License, Version 2.0,” Open Source Initiative, accessed January 13, 2023, https://opensource.org/licenses/Apache-2.0.
52    “Open Source Security Foundation Raises $10 Million in New Commitments to Secure Software Supply Chains,” Open Source Security Foundation (blog), October 13, 2021, https://openssf.org/press-release/2021/10/13/open-source-security-foundation-raises-10-million-in-new-commitments-to-secure-software-supply-chains/.
53    “Water Law Overview – National Agricultural Law Center,” National Agricultural Law Center, accessed January 12, 2023, https://nationalaglawcenter.org/overview/water-law/.
54    “Drinking Water Laws and New Rules,” Overviews & Factsheets, US Environmental Protection Agency, accessed January 12, 2023, https://www3.epa.gov/region1/eco/drinkwater/laws_regs.html.
56    Daniel Rothberg, “Everyone in Nevada Is Talking about Water. Here Are Five Things to Know.,” Nevada Independent, May 19, 2022, https://thenevadaindependent.com/article/everyone-in-nevada-is-talking-about-water-here-are-five-things-to-know-efbfbc.
57    To the reader, though originally required in the 90s, it is more precise to say that this legislation updated the requirements for those plans among other related items.
59    Pub. L. No. SB74 (2016).
60    To the reader, one might argue that there is a shortage of OSS tailored to meet all consumers’ needs, which leads to its constant change.
61    “Fact Sheet: Good Samaritan Administrative Tools,” US Environmental Protection Agency, accessed January 13, 2023, https://www.epa.gov/enforcement/fact-sheet-good-samaritan-administrative-tools.
62    Rep. Lori Trahan (D-MA-03), Press Release: “House Passes Comprehensive Legislation to Aid Ukraine, Invest Millions in Third District,” March 9, 2022, https://trahan.house.gov/news/documentsingle.aspx?DocumentID=2411.
63    Havoc Pennington, “Up to 20% of Your Application Dependencies May Be Unmaintained,” Tidelift (blog), April 9, 2019, https://blog.tidelift.com/up-to-20-percent-of-your-application-dependencies-may-be-unmaintained.
64    Théo Zimmermann and Jean-Rémy Falleri, “A Grounded Theory of Community Package Maintenance Organizations-Registered Report,” CoRR 2108.07474 (September 2021), https://dblp.org/rec/journals/corr/abs-2108-07474.html?view=bibtex; Jailton Coelho et al., “Identifying Unmaintained Projects in GitHub,” in Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 2018, 1–10, https://doi.org/10.1145/3239235.3240501; Jordi Cabot, “Adopt an Open Source Project,” Livable Software, September 21, 2018, https://livablesoftware.com/adopt-abandoned-open-source-project/; Alpha-Omega; Adopt A Project, GitHub, accessed January 13, 2023, https://github.com/jonobacon/adopt-a-project.
65    European Commission, “Specific Principles: Polluter Pays Principle,” Principles of EU Environmental Law,  https://www.era-comm.eu/Introduction_EU_Environmental_Law/EN/module_2/module_2_11.html.
66    “Waste Framework Directive,” European Commission, https://environment.ec.europa.eu/topics/waste-and-recycling/waste-framework-directive_en and “Water Framework Directive,” European Commission, https://environment.ec.europa.eu/topics/water/water-framework-directive_en.
67    United States: Financial Crisis Inquiry Commission, “The Financial Crisis Inquiry Report: Final Report of the National Commission on the Causes of the Financial and Economic Crisis in the United States” (Washington DC: US Government Printing Office, February 25, 2011), https://www.govinfo.gov/app/details/GPO-FCIC.
68    To the reader, examples include Heartbleed and log4shell.
69    To the reader, some companies offer as a service scanning of software products to identify with reasonable but varied accuracy the underlying components within.
70    Frank Nagle et al., “Census II of Free and Open Source Software — Application Libraries,” [Linux Foundation, Laboratory for Innovation Sciences at Harvard (LISH), and Open Source Security Foundation (OpenSSF), March 2, 2022], https://lish.harvard.edu/publications/census-ii-free-and-open-source-software-%E2%80%94-application-libraries.
71    Jeffrey M. Stupak, “Financial Stability Oversight Council (FSOC): Structure and Activities,” Congressional Research Service, February 12, 2018, https://digital.library.unt.edu/ark:/67531/metadc1157125/ (accessed January 13, 2023, University of North Texas Libraries, UNT Libraries Government Documents Department).
72    Walter Frick, “What You Should Know About Dodd-Frank and What Happens If It’s Rolled Back,” Harvard Business Review, March 2, 2017, https://hbr.org/2017/03/what-you-should-know-about-dodd-frank-and-what-happens-if-its-rolled-back.
73    House Hearing 114th Congress: “Oversight of the Financial Stability Oversight Council” (Washington, DC: US Government Publishing Office, December 8, 2015), https://www.govinfo.gov/content/pkg/CHRG-114hhrg99796/html/CHRG-114hhrg99796.htm.
74    “About ESMA,” European Securities and Markets Authority, accessed January 13, 2023, https://www.esma.europa.eu/about-esma.
75    To the reader, it is not the mere act of using code that creates the need for maintenance—binaries do not degrade like asphalt—but rather the fact that downstream dependencies and integrations make it essential for upstream components to keep pace with evolving language and environment features and security practices.
76    Raymond, The Cathedral & the Bazaar.
77    “Highway Trust Fund: Federal Highway Administration Should Develop and Apply Criteria to Assess How Pilot Projects Could Inform Expanded Use of Mileage Fee Systems” (Washington DC: US Government Accountability Office, January 10, 2022), https://www.gao.gov/products/gao-22-104299.
78    Adopt A Project.
79    Lindsey Bever, “KKK Takes Adopt-a-Highway Case to Georgia Supreme Court,” Washington Post, October 26, 2016, sec. Post Nation, https://www.washingtonpost.com/news/post-nation/wp/2016/02/23/kkk-takes-adopt-a-highway-case-to-georgia-supreme-court/.
80    Morten Rand-Hendriksen, “On the Corporate Takeover of the Cathedral and the Bazaar,” MOR10 (blog), February 4, 2019, https://mor10.com/on-the-corporate-takeover-of-the-cathedral-and-the-bazaar/.
81    Committee for a Responsible Federal Budget, “The Infrastructure Bill’s Impact on the Highway Trust Fund,” CFRB, February 3, 2022, https://www.crfb.org/blogs/infrastructure-bills-impact-highway-trust-fund.
82    Frank Nagle, “Why Congress Should Invest in Open-Source Software,” Brookings (blog), October 13, 2020, https://www.brookings.edu/techstream/why-congress-should-invest-in-open-source-software/.
83    To the reader, this wording considers both security lapses and wider incidents where developers pull down packages. See: “Awful OSS Incidents” (2022; PayDevs), accessed January 12, 2023, https://github.com/PayDevs/awful-oss-incidents for examples.
84    Hearing [archived webcast]: “Equity in Transportation Infrastructure: Connecting Communities, Removing Barriers, and Repairing Networks Across America,” US Senate Committee on Environment and Public Works, May 11, 2021, https://www.epw.senate.gov/public/index.cfm/2021/5/equity-in-transportation-infrastructure-connecting-communities-removing-barriers-and-repairing-networks-across-america.
85    “Cohesion Fund Fact Sheet,” European Parliament, https://www.europarl.europa.eu/factsheets/en/sheet/96/cohesion-fund, and “Connecting Europe Facility,” Innovation and Networks Executive Agency, December 22, 2022, https://wayback.archive-it.org/12090/20221222151902/https://ec.europa.eu/inea/en/connecting-europe-facility.
86    Eugenia Lostri, Georgia Wood, and Meghan Jain, “Government Open Source Software Policies,” Center for Strategic and International Studies, January 10, 2023, https://www.csis.org/programs/strategic-technologies-program/government-open-source-software-policies.
87    Gijs Hillenius, “Norway to Increase Its Use of Open Source,” Open Source Observatory, November 19, 2008, https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/norway-increase-its-use.
89    Sovereign Tech Fund, German Ministry for Economic Affairs and Climate Action, accessed January 13, 2023, https://sovereigntechfund.de/en.
90    Prototype Fund, Open Knowledge Foundation Germany, accessed January 13, 2023, https://prototypefund.de/en/.
91    Program Solicitation, NSF 22-572: “Pathways to Enable Open-Source Ecosystems (POSE),” National Science Foundation, https://www.nsf.gov/pubs/2022/nsf22572/nsf22572.htm.
92    “Supporting Internet Freedom Worldwide,” Open Technology Fund, https://www.opentech.fund/.
93    “The Ministry of Science Creates a Cluster for Free Software Companies,” iProfessional, April 26, 2013, https://www.iprofesional.com/tecnologia/159530-el-ministerio-de-ciencia-crea-cluster-para-empresas-de-software-libre.amp.
94    Gijs Hillenius, “Up to EUR 200,000 for Austria… | Joinup,” Open Source Observatory, August 22, 2016, https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/eur-200000-austria.
95    John Lui, “Malaysia Sets up $36m Open Source Fund – Silicon.Com,” Silicon, October 30, 2003, https://web.archive.org/web/20050411192233/http:/software.silicon.com/os/0,39024651,39116677,00.htm.
96    Chris Anizczyk et al., “Creating an Open Source Program,” Open Source Guides, n.d., https://www.linuxfoundation.org/resources/open-source-guides/creating-an-open-source-program.
97    Astor Nummelin Carlberg, “The WHO Is the Latest Public Administration to Launch an Open Source Programme Office,” Open Source Observatory, March 18, 2022, https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/who-builds-ospo
98    Think Open, “Communication to the Commission: Open Source Software Strategy 2020 – 2023” (Brussels, October 21, 2020), https://commission.europa.eu/about-european-commission/departments-and-executive-agencies/informatics/open-source-software-strategy_en.
99    “Collection of Use Case Examples Expanded Regarding Management Methods for Utilizing Open Source Software and Ensuring Its Security,” (Tokyo, Japan: Ministry of Economy, Trade and Industry, May 10, 2022), https://www.meti.go.jp/english/press/2022/0510_003.html.
100    “Extract from the Minutes of the Session of the Government of the Republic of Armenia – On the Endorsement of Internet Governance Principles,” http://www.irtek.am, August 2014, http://www.irtek.am/views/act.aspx?aid=77996.
101    Dan Knauss, “Open Source Communities: You May Not Be Interested in CISA, But CISA Is Very Interested in You,” Post Status (blog), October 3, 2022, https://poststatus.com/open-source-communities-you-may-not-be-interested-in-cisa-but-cisa-is-very-interested-in-you/.
103    “Securing Open Source Software Act of 2022,” S.4913.
104    “Securing Open Source Software Act of 2022,” S.4913.
105    Amelie Koran et al., “The Cases for Using the SBOMs We Build,” Atlantic Council (blog), November 22, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/the-cases-for-using-sboms/.
106    Katie Bratman and Adam Kojak, “SBOM Ingestion and Analysis at New York-Presbyterian Hospital” (Open Source Summit North America 2022, Austin, TX, June 21, 2022), https://ossna2022.sched.com/event/11Q0t/sbom-ingestion-and-analysis-at-new-york-presbyterian-hospital-katie-bratman-adam-kojak-newyork-presbyterian-hospital.
107    To the reader, for more examples of SBOMs already for OSS projects, see the bom-shelter dataset built by John Speed Meyers and Chainguard: “bom-shelter” (Chainguard), https://github.com/chainguard-dev/bom-shelter.
108    Open Source Security Foundation, “Secure Supply Chain Consumption Framework (S2C2F) SIG,” GitHub, accessed January 11, 2023, https://github.com/ossf/s2c2f.
109    Sovereign Tech Fund.
110    James Mcbride and Anshu Siripurapu, “The State of US Infrastructure,” Council on Foreign Relations, November 8, 2021, https://www.cfr.org/backgrounder/state-us-infrastructure.
111    Jim Mone, “NTSB: Design Errors Factor in 2007 Bridge Collapse,” USA Today, November 13, 2008, http://usatoday30.usatoday.com/news/world/2008-11-13-628592230_x.htm.
112    Dan Goodin, “Numerous Orgs Hacked after Installing Weaponized Open Source Apps,” Ars Technica, September 29, 2022, https://arstechnica.com/information-technology/2022/09/north-korean-threat-actors-are-weaponizing-all-kinds-of-open-source-apps/.
113    “OpenSSF Annual Report – 2022,” Open Source Security Foundation, December 2022, https://openssf.org/wp-content/uploads/sites/132/2022/12/OpenSSF-Annual-Report-2022.pdf.

Russia’s cyberwar against Ukraine offers vital lessons for the West https://www.atlanticcouncil.org/blogs/ukrainealert/russias-cyberwar-against-ukraine-offers-vital-lessons-for-the-west/ Tue, 31 Jan 2023 17:47:28 +0000 https://www.atlanticcouncil.org/?p=606930 Ukraine’s experience in countering Russian cyber warfare can provide valuable lessons while offering a glimpse into a future where wars will be waged both by conventional means and increasingly in the borderless realm of cyberspace.

Vladimir Putin’s full-scale invasion of Ukraine is fast approaching the one-year mark, but the attack actually started more than a month before columns of Russian tanks began pouring across the border on February 24, 2022. In the middle of January, Russia launched a massive cyberattack that targeted more than 20 Ukrainian government institutions in a bid to cripple the country’s ability to withstand Moscow’s looming military assault.

The January 14 attack failed to deal a critical blow to Ukraine’s digital infrastructure, but it was an indication that the cyber front would play an important role in the coming war. One year on, it is no longer possible to separate cyberattacks from other aspects of Russian aggression. Indeed, Ukrainian officials are currently seeking to convince the International Criminal Court (ICC) in The Hague to investigate whether Russian cyberattacks could constitute war crimes.

Analysis of the Russian cyberwarfare tactics used in Ukraine over the past year has identified clear links between conventional and cyber operations. Ukraine’s experience in countering these cyber threats can provide valuable lessons for the international community while offering a glimpse into a future where wars will be waged both by conventional means and increasingly in the borderless realm of cyberspace.


The Russian cyberattack of January 2022 was not unprecedented. On the contrary, Ukraine has been persistently targeted since the onset of Russian aggression with the seizure of Crimea in spring 2014. One year later, Ukraine was the scene of the world’s first major cyberattack on a national energy system. In summer 2017, Ukraine was hit by what many commentators regard as the largest cyberattack in history. These high profile incidents were accompanied by a steady flow of smaller but nonetheless significant attacks.

Following the launch of Russia’s full-scale invasion one year ago, cyberattacks have frequently preceded or accompanied more conventional military operations. For example, prior to the Russian airstrike campaign against Ukraine’s civilian infrastructure, Ukrainian energy companies experienced months of mounting cyberattacks.

These tactics are an attractive option for Russia in its undeclared war against the West. While more conventional acts of aggression would likely provoke an overwhelming reaction, cyberattacks exist in a military grey zone that makes them a convenient choice for the Kremlin as it seeks to cause maximum mayhem in Europe and North America without risking a direct military response. Russia may not be ready to use tanks and missiles against the West, but Moscow will have fewer reservations about deploying the cyberwarfare tactics honed in Ukraine.

In addition to disrupting and disabling government bodies and vital infrastructure, Russian cyberattacks in Ukraine have also sought to manipulate public opinion and spread malware via compromised email accounts. The Ukrainian authorities have found that it is crucial to coordinate efforts with the public and share information with a wide range of stakeholders in order to counter attacks in a timely manner.

The effects of cyberattacks targeting Ukraine have already been felt far beyond the country’s borders. One attack on the satellite communication system used by the Ukrainian Armed Forces during the initial stages of the Russian invasion caused significant disruption for thousands of users across the European Union including private individuals and companies. Given the borderless nature of the digital landscape, similar scenarios are inevitable as cyberwarfare capabilities continue to expand.

From a Russian perspective, cyberwarfare is particularly appealing as it requires fewer human resources than traditional military operations. While Moscow is struggling to find enough men and military equipment to compensate for the devastating losses suffered in Ukraine during the first year of the invasion, the Kremlin should have no trouble finding enough people with the tech skills to launch cyber offensives against a wide range of countries in addition to Ukraine.

Russia can draw from a large pool of potential recruits including volunteers motivated by Kremlin propaganda positioning the invasion of Ukraine as part of a civilizational struggle against the West. Numerous individual attacks against Western targets have already been carried out by such networks.

At the same time, Ukraine’s experience over the past year has underlined that cyberattacks require both time and knowledge to prepare. This helps explain why there have been fewer high-complexity cyber offensives following the initial failure of Russia’s invasion strategy in spring 2022. Russia simply did not expect Ukraine to withstand the first big wave of cyberattacks and did not have sufficient plans in place for such an eventuality.

Ukraine has already carried out extensive studies of Russian cyberwarfare. Thanks to this powerful experience, we have increasing confidence in our ability to withstand further attacks. However, in order to maximize defensive capabilities, the entire Western world must work together. This must be done with a sense of urgency. The Putin regime is desperately seeking ways to regain the initiative in Ukraine and may attempt bold new offensives on the cyber front. Even if Russia is defeated, it is only a matter of time before other authoritarian regimes attempt to wage cyberwars against the West.

The democratic world must adapt its military doctrines without delay to address cyberspace-based threats. Cyberattacks must be treated in the same manner as conventional military aggression and should be subject to the same uncompromising responses. Efforts must also be made to prevent authoritarian regimes from accessing technologies that could subsequently be weaponized against the West.

The Russian invasion of Ukraine is in many ways the world’s first cyberwar but it will not be the last. In the interests of global security, Russia must be defeated on the cyber front as well as on the battlefields of Ukraine.

Yurii Shchyhol is head of Ukraine’s State Service of Special Communications and Information Protection.

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Unlocking a sustainable future by making cybersecurity more accessible https://www.atlanticcouncil.org/blogs/energysource/unlocking-a-sustainable-future-by-making-cybersecurity-more-accessible/ Mon, 30 Jan 2023 20:00:20 +0000 https://www.atlanticcouncil.org/?p=606715 Cybersecurity will be a key feature of the energy transition. Decision-makers will need to be diligent as they look to secure an increasingly digital and interconnected global energy system.

The world is on its way toward building a sustainable, inclusive energy future. Renewable energy sources have seen rapid growth thanks to technology innovation and declining costs. At the same time, digitalization is making conventional energy infrastructure more efficient. Continuing these trends will be critical to meeting global climate goals while raising prosperity around the world. And because energy transformation will herald a new, digitalized energy system, cybersecurity has a key role to play in unlocking that sustainable, inclusive future.

The energy sector must withstand a constant siege of cyberattacks—including some backed by nation-states. New attacks can propagate at the speed of light, and their consequences can take days and weeks to unravel, disrupting markets, making equipment unsafe to operate, and causing cascading effects that spread beyond the targeted organization.

Every energy sector participant—new or established, private or public—has an interest in maturing cybersecurity across an increasingly interconnected digital energy system. To continue to strengthen resilience and reliability, investments designed to improve the cost-benefit profile for cybersecurity are critical not just for the biggest players, but for everyone.

Both new and old energy technologies depend on cybersecurity. Rapid digitalization across the energy sector has increased efficiency and decreased emissions, but also has changed and expanded the vulnerabilities the sector must consider. Attackers increasingly target not just information technologies (IT), but operating technologies (OT) as well.  Retrofits to existing OT infrastructure like pipelines and legacy generating plants mean these are now often network-connected. Newer technologies like wind and solar depend on digital management.

The cyber threat isn’t limited to big players or the Global North. Recent years have seen successful ransomware against the biggest petroleum products pipeline in the United States, against the biggest electricity supplier in Brazil, and against smaller infrastructure operators like the municipal electricity utility in Johannesburg. We have also seen attacks against subcontractors leveraged to penetrate electric utilities connected to the US grid. This is a global challenge, for organizations large and small.

Faced with a continuous onslaught of cyberattacks, the energy sector will need to establish practices and institutions that drive down the cost of deploying strong cybersecurity across the energy value chain. Startups, subcontractors, and small utilities will become a consistently weak link in the energy ecosystem if affordable, effective cybersecurity remains unavailable.

So how can the energy sector ensure that cybersecurity keeps pace with cyber risk, and seize opportunities to get ahead of attackers? How can public and private sector leaders contribute to building a community of trust?

Regulators in the energy sector should ensure they enable—or at a minimum, don’t stifle—technology innovations that enhance cybersecurity. Cyber innovation will need to keep pace with both the new technologies of the energy transformation and the known risks to those technologies, even if slow-moving regulatory processes have not yet accounted for new business models, technologies, or threats.

Similarly, regulators should consider how to encourage rapid information sharing about threat intelligence. Although threat intelligence can help quickly harden targets against novel attacks, operators may be reluctant to share information if they believe it will later lead to legal and financial liabilities. Tabletop exercises that convene public and private organizations can improve incident response, building relationships and providing actionable insights before a crisis occurs.

Public and private sector leaders can both work to expand the pool of cybersecurity talent—one of the chief cost barriers for stronger cybersecurity. Cybersecurity experts are scarce, and experts who are also familiar with the operating technologies enabling the energy transition even more so. Training programs—public or private—will help meet demand. Solutions that expand the scope and power of automation can also help, as can information-sharing that enables security teams to quickly recognize new threats and efficiently apply patches.

For asset operators (public or private), cybersecurity should be part of decision-making on new projects. Considering how to secure new infrastructure or planned retrofits can help reduce the cost and complexity needed to manage risk. Monitoring operations helps operators and cyber analysts understand how systems interact with each other during normal production—and enables earlier detection of malicious activity. Seeking opportunities for automation of routine tasks can reduce the cost of strong cybersecurity. Advancements in machine learning and artificial intelligence make it easier to rapidly draw useful insights from massive data sets.
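To make the monitoring and automation point concrete, the sketch below is a deliberately simplified, illustrative example (not any vendor’s product and not the authors’ recommended tooling) of one routine task that can be automated: learning a baseline from recent telemetry and flagging readings that deviate sharply from it. Real OT monitoring draws on far richer signals and context.

```python
# Illustrative sketch only: a simple statistical baseline for flagging
# unusual readings in a stream of operational telemetry (e.g., sensor values).
# It learns "normal" from a trailing window and alerts on large deviations.
from statistics import mean, stdev

def find_anomalies(readings: list, baseline_window: int = 50, threshold: float = 4.0) -> list:
    """Return indices of readings that deviate strongly from the trailing baseline."""
    flagged = []
    for i in range(baseline_window, len(readings)):
        window = readings[i - baseline_window:i]
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical usage: a mostly steady signal with one injected spike.
signal = [100.0 + (i % 5) * 0.1 for i in range(200)]
signal[120] = 160.0
print(find_anomalies(signal))  # -> [120]
```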

Private sector collaborations can help build trust and cyber maturity across the industry. Common standards and certifications can help spread best practices and build confidence that potential partners or clients will not introduce new vulnerabilities. Threat intelligence can sometimes be more comfortably shared across peer organizations than with regulators.

Private sector leaders can assess and improve their own organizations’ cyber risk posture. Boards that accurately understand their cyber risks will be better able to invest appropriately in managing those risks. Likewise, making clear that cybersecurity is a cross-cutting competency key to performance for every business unit helps build a strong security culture. And of course, recognizing that cybersecurity is an ongoing effort across the sector helps build the collaboration across the energy sector needed to contend with a dynamic, interconnected cyber threat landscape.

Finally, an inclusive energy transformation will also require cyber-inclusivity. Even as the Global North continues to build the connective tissue necessary to meet the cyber risks of a digitalized energy system, passing those lessons forward as the developing world pursues electrification and sustainable energy access will be necessary to ensure that the energy system of the Global South is constructed with cyber-resiliency in mind. Using global convenings like the Atlantic Council Global Energy Forum in Abu Dhabi earlier this month to bring cybersecurity to the table alongside discussions of increasing energy access is critical to build community and advance shared security in a digital energy system.

Leo Simonovich is the vice president and global head of industrial cyber and digital security at Siemens Energy.

Reed Blakemore is a deputy director at the Atlantic Council Global Energy Center.

Learn more about the Global Energy Center

The Global Energy Center develops and promotes pragmatic and nonpartisan policy solutions designed to advance global energy security, enhance economic opportunity, and accelerate pathways to net-zero emissions.

The 5×5—China’s cyber operations https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-chinas-cyber-operations/ Mon, 30 Jan 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=604684 Experts provide insights into China’s cyber behavior, its structure, and how its operations differ from those of other states.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

On October 6, 2022, the Cybersecurity and Infrastructure Security Agency, Federal Bureau of Investigation, and National Security Agency released a joint cybersecurity advisory outlining the top Common Vulnerabilities and Exposures that Chinese state-linked hacking groups have been actively exploiting since 2020 to target US and allied networks. Public reporting indicates that, for the better part of the past two decades, China has consistently engaged in offensive cyber operations, and as the scope of the country’s economic and political ambitions has expanded, so has its cyber footprint. The number of China-sponsored and aligned hacking teams is growing, as they develop and deploy offensive cyber capabilities to serve the state’s interests—from economic to national security.

We brought together a group of experts to provide insights into China’s cyber behavior, its structure, and how its operations differ from those of other states.

#1 Is there a particular example that typifies the “Chinese” model of cyber operations?

Dakota Cary, nonresident fellow, Global China Hub, Atlantic Council; consultant, Krebs Stamos Group

“China’s use of the 2021 Microsoft Exchange Server vulnerability to access email servers captures the essence of modern Chinese hacking operations. A small number of teams exploited a vulnerability in a critical system to collect intelligence on their targets. After the vulnerability became public and their operation’s stealth was compromised, the number of hacking teams using the vulnerability exploded. China has established a mature operational segmentation and capabilities-sharing system, allowing teams to quickly distribute and use a vulnerability after its use was compromised.”

John Costello, former chief of staff, Office of the National Cyber Director

“No. China’s approach has evolved too quickly; its actors too heterogeneous and many. What has remained consistent over time is the principal focus of China’s cyber operations, which, in general, is the economic viability and growth of China’s domestic industry and advancement of its scientific research, development, and modernization efforts. China does conduct what some would call ‘legitimate’ cyber operations, but these are vastly overshadowed by campaigns that are clearly intended to obtain intellectual property, non-public research, or place Chinese interests in an advantageous economic position.”

Bulelani Jili, nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“What is unique is how the party-state promotes surveillance technology and cyber operations abroad. It utilizes diplomatic exchanges, law enforcement cooperation, and training programs in the Global South. These initiatives not only advance the promotion of surveillance technologies and cyber tools but also support the government’s goals with regard to international norm-making in multilateral and regional institutions.” 

Adam Kozy, independent analyst; CEO and founder, SinaCyber; former official with the FBI’s Cyber Team and CrowdStrike’s Asia-Pacific Analysis Team

“There is not one typical example of Chinese cyber operations in my opinion, as operations have evolved over time and are uneven in their distribution of tooling, access to the vulnerability supply chain, and organization. However, one individual who typifies how the Chinese Communist Party (CCP) has co-opted domestic hacking talent for state-driven espionage purposes is Tan Dailin (谭戴林/aka WickedRose) of WICKED PANDA/APT41 fame. He first began as a patriotic hacker during his time at university in 2000-2002, conducting defacements during the US-Sino hacker war, but was talent-spotted by his local People’s Liberation Army (PLA) branch, the Chengdu Military Region Technical Reconnaissance Bureau (TRB), and asked to compete in a hackathon. This was followed by an “internship” where he and his fellow hackers at the NCPH group taught attack/defense courses and appear to have played a role in the initial 2003-2006 Titan Rain attacks probing US and UK government systems. Tan and his friends continued to do contract work for gaming firms, hacking a variety of South Korean, Japanese, and US gaming companies, which gave them experience with high-level vulnerabilities capable of manipulating systems at the kernel level and also afforded them stolen gaming certificates that allowed their malware to evade antivirus detection. After a brief period where he was reportedly arrested by the Ministry of Public Security (MPS) for hacking other domestic Chinese groups, he reemerged with several new contracting entities that have been noted to work for the Ministry of State Security (MSS) in Chengdu. Tan has essentially made a very comfortable living out of being a cyber mercenary for the Chinese state, using his legacy hacking network to constantly improve and upgrade tools, develop new intrusion techniques, and stay relevant for over twenty years.”

Jen Roberts, program assistant, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“While no one case study stands out to typify a “Chinese” model, Chinese cyber operations blend components of espionage and entrepreneurship and capitalize on China’s pervasiveness in the international economy. One example is the Nortel/Huawei case, in which espionage, at least in part, caused the collapse of the Canadian telecommunications company.”

#2 What role do non-state actors play in China’s approach to cyber operations?

 

Cary: “Chinese security services still have a marked preference for using contracted hacking teams. These groups often raise money from committing criminal acts, in addition to working on behalf of intelligence agencies. Whereas the US government may purchase vulnerabilities to use on an offensive mission or hire a few companies to conduct cyber defense on a network, it does not hire firms to conduct specific offensive operations. In China, the government may hire teams for both offensive and defensive work, including offensive hacking operations.”

Costello: “Non-state actors play myriad roles. Most notably, Department of Justice and Federal Bureau of Investigation indictments show clear evidence of contractual relationships between the MSS and non-state actors conducting cyber intelligence operations. Less conventionally, Chinese hacktivists have on occasion played a limited but substantive role in certain cases, such as cyberattacks against South Korea’s Lotte group during the US Terminal High Altitude Area Defense (THAAD) system kerfuffle in 2017. Hypothetically, China’s military strategy calls for a cyber defense militia; but the contours or reality of mobilization, training, and reliability are unclear. China’s concept of ‘people’s war’ in cyberspace—a familiar adoption of Maoist jargon for new concepts—has been discussed but has yet to be seen in practice in any meaningful form.”

Jili: “State investment and procurement of public security systems from private firms are driving the development of China’s surveillance ecosystem. Accordingly, private firm work and collaboration with the state are scaling Beijing’s means to conduct surveillance operations on targeted domestic populations that are perceived threats to regime stability. Crucially, given the financial incentives to collaborate with Beijing, private companies have limited reasons not to support state security prerogatives.” 

Kozy: “This question has the issue of mirroring bias. We tend to view things from a United States and Western lens when evaluating whether someone is a state actor or not, because we have very defined lines around what an offensive cyber operator can do acting on behalf of the US government. China has thrived in this grey area, relying on patriotic hackers with tacit state approval at times, hackers with criminal businesses, as well as growing its domestic ability to recruit talented researchers from the private sector and universities. The CCP has historically compelled individuals who would be considered traditionally non-state-affiliated actors to aid campaigns when necessary. Under an authoritarian regime like the CCP, any individual who is in China or ethnically Chinese can become a state actor very quickly. Actors like Tan Dailin do constitute a different type of threat because the CCP effectively co-opts their talents, while turning a blind eye to their criminal, for-profit side businesses that are illegal and have worldwide impact.” 

Roberts: “Chinese non-state actors are very involved in Chinese cyber operations. A wide variety of non-state entities, such as contractors and technology conglomerates (Alibaba, Huawei, etc.), have worked in tandem with the CCP on a variety of research, development, and execution of cyber operations. This relationship is fortified by Chinese disclosure laws and repercussions of violating them. While Russia’s relationship with non-state actors relies on the opaqueness of non-state groups’ relationships with the government, China’s relationship with non-state entities is much more transparent.”

#3 How do China’s cyber operations differ from those of other states in the region?

Cary: “China has the most hackers and bureaucrats on payroll in Asia. Its operations are not different in kind nor process, but scale. While Vietnam’s or India’s cyber operators are able to have some effect in China, they are not operating at the scale at which China is operating. The most significant differentiator—which is still only speculation—is that China likely collects from the backbone of the Internet via agreements or compromise of telecommunication giants like Huawei, China Unicom, etc., as well as accessing undersea cables.” 

Costello: “Scale. The scale of China’s cyber operations dwarfs those of other countries in the region—the complexity and sheer range of targeting, and the number of domestic technology companies whose increasingly global reach may be utilized for intelligence gain and influence. As China’s influence and global reach expands, so too does its self-perceived need to protect and further expand its interests. Cyber serves as a low-risk and often successful tool to accomplish this in economic and security realms.” 

Jili: “While most regional and global players’ cyber operations have a domestic bent, Beijing also actively promotes surveillance technology and practices abroad through diplomatic exchanges, law enforcement cooperation, and training programs. These efforts not only advance the proliferation of Chinese public security systems, but they also support the government’s goals concerning international norm-making in multilateral and regional institutions.” 

Kozy: “China is by far the most aggressive cyber power in its region. It can be debated that Russian cyber operatives are still more advanced in terms of sophistication, but China aggressively conducts computer network exploitations against all of its regional neighbors with specific advanced persistent threat (APT) groups across the PLA and MSS having regional focuses. Some of its neighbors such as India, Vietnam, Japan, and South Korea have advanced capabilities of their own to combat this, but there are regular public references to successful Chinese cyber campaigns against these countries despite significant defensive spending. Regional countries without cyber capabilities likely have long-standing compromises of critical systems.” 

Roberts: “China has a talent for extracting intellectual property and conducting large-scale espionage. While other threat actors in the region, like North Korea, also conduct espionage operations, North Korea’s primary focus is on operations that prioritize fiscal extraction to fund regime activity, while China seems much more intent on collecting data for a variety of purposes. Despite differing capacities, sophistication, and types of operations, the end goals for both states are not all that different—political survival.”

#4 How have China’s offensive cyber operations changed since 2018?

Cary: “China’s emphasis on developing its domestic pipeline of software vulnerabilities is paying off. China has passed policies that co-opt private research on behalf of the security services, support public software vulnerability competitions, and invest in technology to automate software vulnerability discovery. Together, as outlined by Microsoft’s Threat Intelligence Center’s 2022 analysis, China is combining these forces to use more software vulnerabilities now than ever before.”

Costello: “China’s cyber operations have unsurprisingly grown in scale and sophistication. Actors are less ‘noisy’ and China’s tactical approach to cyber operations appears to have evolved towards more scalable operations, namely supply-chain attacks and targeting service providers. These tactics have the advantage of improving the return on investment for an operation or campaign, as they allow compromise of all customers who use the product or service while minimizing risk of discovery. Supply chain attacks or compromise through third-party services can also be more difficult to detect and identify. China’s cyber landscape is not homogenous, and there remains great variability in sophistication across the range of Chinese actors.

As reported by the Director of National Intelligence in the last few years, China has increasingly turned towards targeting US critical infrastructure, particularly natural gas pipelines. This is an evolution, though whether it is ‘learning by doing,’ operational preparation of the battlespace, or nascent ventures by a more operationally focused Strategic Support Force (reorganization into a Space and Cyber Corps from 2015-17) is unclear. Time will most certainly tell.”

Jili: “Since 2018, the party-state has been more active in utilizing platforms like BRICS (Brazil, Russia, India, China, and South Africa), an emerging markets organization, and the Forum on China-Africa Cooperation (FOCAC) to promote digital infrastructure products and investments in the Global South. Principally, through multilateral platforms like FOCAC, Beijing has promoted resolutions to increase aid and cooperation in areas like cybersecurity and cyber operations.”

Kozy: “Intrusions from China have continued unabated since 2018, with a select number of Chinese APTs having periods of inactivity due to COVID-19 shutdowns. The Cyber Security Law and National Intelligence Law, both enacted in 2017, provided additional legal authority for China’s intelligence services to access data and co-opt Chinese companies for use in vaguely worded national security investigations. Of note are China’s efforts to increase the number of domestic cybersecurity conferences and nationally recognized cybersecurity universities as part of ongoing recruitment pipelines for cyber talent. Though there was increased focus from the Western cybersecurity community on MSS-affiliated contractors after the formation of the PLA Strategic Support Force (PLASSF) in 2015, more PLA-affiliated APT groups have emerged since the pandemic with new tactics, techniques, and procedures. The new PLASSF organization means these entities may be compromising high-value targets and then assessing them for use in offensive cyber operations in wartime scenarios or cyber espionage operations.”

Roberts: “Since 2018, Chinese offensive cyber operations have increased in scale. China has reinvigorated its workforce capacity-building efforts to increase the overall quantity and quality of workers. It has tightened its legal regime, cracking down on external vulnerability disclosure. It has also begun significantly investing in disinformation campaigns, especially against Taiwan. This is evident from Chinese influence operations surrounding Taiwan’s 2018 and 2020 elections.”

#5 What domestic entities, partnerships, or roles exist in China’s model of cyber operations that are not present in the United States or Western Europe?

Cary: “China’s emphasis on contracted hackers coincides with divergent levels of trust between the central government and some provincial-level MSS hacking teams. Some researchers maintain that one contracted hacking team pwns targets inside China to do internal security prior to visits by central government leaders. While there is scant evidence that these attitudes and beliefs make their way into operations against foreign targets, they do likely impact the distribution of responsibilities and operations in a way not seen in mature democracies. The politicization of intelligence services is particularly risky in China’s political system.”

Costello: “The extralegal influence of the CCP cannot be overstated. Though the National Security Law, National Intelligence Law, and other laws ostensibly establish a legal foundation for China’s security apparatus, the reality is that the party is not bound strictly to these laws—and they only demonstrate a public indicator of what power it may possess. The lack of any independent judiciary suggests unchecked power of the CCP to co-opt or compel assistance from any citizen or company for which it almost certainly has near-total leverage. While the suspicion of Chinese organizations can be overblown, the idea that the CCP has the power to utilize not each but any organization is sobering and the root of many of these concerns. The lack of rigorous rule of law, in these limited circumstances, is certainly a competitive advantage in the intelligence sphere.”

Jili: “Beijing has nurtured a tech industry and environment that actively support the party-state’s aims to bolster government surveillance and cyber capabilities. From large firms to startups, many companies work with the state to conduct vulnerability research, develop threat detection capabilities, and produce security and intelligence products. While these private firms rely on Chinese venture capital and state loans, they have grown to service a global customer base.”

Kozy: “Starting with the 2015 control of WooYun, China’s largest vulnerability site, the CCP has gained an incredible amount of control of the vulnerability supply chain within China, which affords its cyber actors access to high-value vulnerabilities for use in their campaigns. The aforementioned 2017 laws also made it easier for Chinese authorities to prevent domestic researchers from competing in cyber conferences overseas and improved access to companies doing vulnerability research in China. The CCP’s public crackdowns on Jack Ma, Ant Financial, and many others have shown that the CCP fears the influence its tech firms have and has quickly moved to keep its tech giants loyal to the party; a stark contrast to the relationships that the United States and European Union have with tech giants like Google, Facebook, etc.”

Roberts: “While corporate-government partnerships exist everywhere, what separates the United States and Western Europe from China is the scope and scale of the connective tissue that exists between the two entities. In China, this relationship has more explicit requirements in the cyber domain, especially when it comes to vulnerability disclosure.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—China’s cyber operations appeared first on Atlantic Council.

Russian War Report: Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize  https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-war-report-russian-hacker-wanted-by-the-fbi-reportedly-wins-wagner-hackathon-prize/ Fri, 13 Jan 2023 19:04:07 +0000 https://www.atlanticcouncil.org/?p=602036 In December 2022, Wagner Group organized a hackathon that was won by a man wanted by the FBI for his connection to computer malware.

The post Russian War Report: Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize appeared first on Atlantic Council.

As Russia continues its assault on Ukraine, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) is keeping a close eye on Russia’s movements across the military, cyber, and information domains. With more than seven years of experience monitoring the situation in Ukraine—as well as Russia’s use of propaganda and disinformation to undermine the United States, NATO, and the European Union—the DFRLab’s global team presents the latest installment of the Russian War Report. 

Security

Russian forces claim control of strategic Soledar

Tracking narratives

Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize

Frenzy befalls French company accused of feeding Russian forces on New Year’s Eve

Former head of Russian space agency injured in Donetsk, mails shell fragment to French ambassador

Sputnik Lithuania’s former chief editor arrested

International response

New year brings new military aid for Ukraine

Ukrainian envoy to Georgia discusses deteriorating relations between nations

Russian forces claim control of strategic Soledar

Russia said on January 13 that its forces had taken control of the contested city of Soledar. Recent fighting has been concentrated in Soledar and Bakhmut, two cities in the Donetsk region that are strategically important to Ukrainian and Russian forces. Moscow has been trying to take control of the two cities since last summer. Over the past week, Russia has increased its presence on the fronts with the support of Wagner units. Russia wants control of the Soledar-Bakhmut axis to cut supply lines to the Ukrainian armed forces.  

On January 10, Russian sources claimed that Wagner forces had advanced into Soledar. Interestingly, Wagner financier Yevgeny Prigozhin denied the claim and said the forces were still engaged in fighting. Wagner’s presence was established in a camp near Bakhmut. Soldiers from the Wagner Group and other special forces deployed to Bakhmut after other military units had failed to break through the Ukrainian defense.  

On January 11, Ukrainian Deputy Defense Minister Anna Malyar said that heavy fighting was taking place in Soledar and that Russian forces had replaced the unit operating in the city with fresh troops and increased the number of Wagner soldiers among them. The same day, Prigozhin claimed that Wagner forces had taken control of Soledar. The Ukrainian defense ministry denied the allegation. On January 12, Ukrainian sources shared unconfirmed footage of soldiers driving on the main road connecting Bakhmut and Soledar with Sloviansk and Kostyantynivka as evidence that the area remained under Ukrainian control.

Elsewhere, on January 11, the Kremlin announced that Valery Gerasimov would replace Sergei Surovikin as commander of Russian forces in Ukraine. The unexpected move could be interpreted as evidence of a struggle for influence in Russian military circles. Surovikin is considered close to Prigozhin’s entourage, which has criticized senior officers recently, including Gerasimov. Some analysts believe that the change signals a possible military escalation from Russia. 

Furthermore, on January 8, Ukrainian forces repelled a Russian offensive in the vicinity of Makiyivka and Stelmakhivka. Further north of Lysychansk, on January 11, Ukraine also repelled an attack on the city of Kreminna. In the neighboring Kharkiv region, aerial threats remain high. On the southern front, the city of Kherson and several cities across the Zaporizhzhia region remain targets of Russian attacks.

Lastly, a new Maxar satellite image from nearby Bakhmut exemplifies the brutality of war on the frontline in Donetsk. The image shows thousands of craters, indicating the intensity of the artillery shelling and exchange of fire between Ukrainian and Russian forces.

Valentin Châtelet, Research Associate, Brussels, Belgium

Ruslan Trad, Resident Fellow for Security Research, Sofia, Bulgaria

Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize

In December 2022, the Wagner Group organized a hackathon at its recently opened headquarters in St. Petersburg for students, developers, analysts, and IT professionals. Wagner announced the hackathon on social media earlier that month. Organizers created the promotional website hakaton.wagnercentr.ru, but the website went offline soon after. A December 8 archive of the website, accessed via the Internet Archive Wayback Machine, revealed that the objective of the hackathon was to “create UAV [unmanned aerial vehicle] positioning systems using video recognition, searching for waypoints by landmarks in the absence of satellite navigation systems and external control.” Hackathon participants were asked to complete the following tasks: display the position of the UAV on the map at any time during the flight; direct the UAV to a point on the map indicated by the operator; provide a search for landmarks in case of loss of visual reference points during the flight; and return the UAV to the point of departure in case of a complete loss of communication with the operator.

On December 9, Ukrainian programmers noticed that hakaton.wagnercentr.ru was hosted by Amazon Web Services and asked users to report the website to Amazon. Calls to report the website also spread on Telegram, where the channel Empire Burns asked subscribers to report it and provided instructions on how to do so. Empire Burns claims hakaton.wagnercentr.ru first went offline on December 9, which tallies with archival posts. However, there is no evidence that reporting the website to Amazon resulted in it being taken offline.

Snapshots of hakaton.wagnercentr.ru from the Wayback Machine show the website was created in a Bitrix24 online workspace. A snapshot captured on December 13 shows an HTTP 301 status, which redirects visitors to Wagner’s main website, wagnercentr.ru. The Wagner website appears to be geo-restricted for visitors outside Russia. 
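
For readers unfamiliar with how such a redirect is observed, the sketch below illustrates the general technique: request a URL without following redirects and read the status code and Location header. The URL shown is a placeholder, and the snippet assumes the third-party requests library; it illustrates the method rather than reproducing the DFRLab’s tooling.

```python
# Minimal sketch: inspect a URL's immediate HTTP status and redirect target
# without following the redirect. The URL below is a placeholder; this assumes
# the third-party "requests" library is installed.
import requests

def check_redirect(url: str) -> None:
    """Print the first-hop status code and, if present, the Location header."""
    resp = requests.get(url, allow_redirects=False, timeout=15)
    print(f"{url} -> HTTP {resp.status_code}")
    location = resp.headers.get("Location")
    if location:
        print(f"Redirects to: {location}")

if __name__ == "__main__":
    check_redirect("https://example.com/")
```

A 301 response surfaces its destination in the Location header, which is the behavior the archived snapshot records.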

On December 23, a Wagner Telegram channel posted about the hackathon, claiming more than 100 people applied. In the end, forty-three people divided into twelve teams attended. The two-person team GrAILab Development won first place, the team SR Data-Iskander won second place, and a team from the company Artistrazh received third place. Notably, one of Artistrazh’s co-founders is Igor Turashev, who is wanted by the FBI for his connection to computer malware that the bureau claims infected “tens of thousands of computers, in both North America and Europe, resulting in financial losses in the tens of millions of dollars.” Artistrazh’s team comprised four people who won 200,000 Russian rubles (USD $3,000). OSINT investigators at Molfar confirmed that the Igor Turashev who works at Artistrazh is the same one wanted by the FBI.  

Wagner said that one of the key objectives of the hackathon was the development of IT projects to protect the interests of the Russian army, adding that the knowledge gained during the hackathon could already be applied to clear mines. Wagner said it had also invited some participants to collaborate further. The Wagner Center opened in St. Petersburg in early November 2022; the center’s mission is “to provide a comfortable environment for generating new ideas in order to improve Russia’s defense capability, including information.”

Givi Gigitashvili, DFRLab Research Associate, Warsaw, Poland

Frenzy befalls French company accused of feeding Russian forces on New Year’s Eve

A VKontakte post showing baskets of canned goods produced by the French company Bonduelle being distributed to Russian soldiers on New Year’s Eve has sparked a media frenzy in France. The post alleges that Bonduelle sent Russian soldiers a congratulatory package, telling them to “come back with a win.” The post quotes Ekaterina Eliseeva, the head of Bonduelle’s EurAsia markets. According to a 2019 Forbes article, Eliseeva studied interpretation at a Russian state security academy.

Bonduelle has issued several statements denying the social media post and calling it fake. However, Bonduelle does maintain operations in Russia “to ensure that the population has access to essential foodstuff.”  

French broadcaster TV 5 Monde discovered that Bonduelle’s Russia division participated in a non-profit effort called Basket of Kindness, sponsored by the Fund of Presidential Grants of Russia. Food and supplies were gathered by food banks to be delivered to vulnerable segments of the population. However, during the collection drive, Dmitry Zharikov, governor of the Russian city of Podolsk, posted on Telegram that the collections would also serve military families.   

The story was shared on national television in France and across several international outlets. The Ukrainian embassy in France criticized Bonduelle for continuing to operate in Russia, claiming it was “making profits in a terrorist country which kills Ukrainians.”

Valentin Châtelet, Research Associate, Brussels, Belgium

Former head of Russian space agency injured in Donetsk, mails shell fragment to French ambassador

Dmitry Rogozin, former head of the Russian space agency Roscosmos, said he was wounded in Ukrainian shelling on December 21, 2022, at the Shesh hotel in Donetsk while “celebrating his birthday.” In response, Rogozin sent a letter to Pierre Lévy, the French ambassador to Russia, with a fragment of the shell.   

In the letter, Rogozin accused the French government of “betraying [Charles] De Gaulle’s cause and becoming a bloodthirsty state in Europe.” The shell fragment was extracted from Rogozin’s spine during surgery and allegedly came from a French CAESAR howitzer. Rogozin requested the fragment be sent to French President Emmanuel Macron. His message was relayed by Russian news agencies, and on Telegram by pro-Russian and French-speaking conspiracy channels.  

At the time of the attack, Rogozin was accompanied by two members of his voluntary unit, “Tsar’s wolves,” who were killed in the attack, according to reporting from RT, RIA Novosti, and others.  

Valentin Châtelet, Research Associate, Brussels, Belgium

Sputnik Lithuania’s former chief editor arrested

On January 6, Marat Kasem, the former chief editor of Sputnik Lithuania, was arrested in Riga, Latvia, on suspicion of “providing economic resources” to a Kremlin propaganda resource under EU sanctions.  

The following day, pro-Kremlin journalists held a small demonstration in support of Kasem in front of the Latvian embassy in Moscow. Russian journalist Dmitry Kiselyov and politician Maria Butina attended the event. 

The demonstration was filmed by Sputnik and amplified with the Russian hashtag  #свободуМаратуКасему (#freedomForMaratKasem) on Telegram channels operating in the Baltic states, including the pro-Russian BALTNEWS, Своих не бросаем! | Свободная Балтика!, and on Butina’s personal channel. The news of Kasem’s arrest also reached the Russian Duma’s Telegram channel, which re-shared Butina’s post. 

Valentin Châtelet, Research Associate, Brussels, Belgium

New year brings new military aid for Ukraine

International efforts in support of Ukraine are continuing in full force in 2023. On January 4, Norway announced it had sent Ukraine another 10,000 155mm artillery shells. These shells can be used in several types of artillery units, including the M109 self-propelled howitzer. On January 5, Germany confirmed it would provide Ukraine with Marder fighting vehicles and a Patriot anti-aircraft missile battery. German news outlet Spiegel also reported that talks are underway to supply Ukraine with additional Gepard anti-aircraft guns and ammunition. 

In addition, UK Foreign Secretary James Cleverly said the British government would supply Ukraine with military equipment capable of delivering a “decisive” strike from a distance. At the end of 2022, UK Defense Secretary Ben Wallace discussed the possibility of transferring Storm Shadow cruise missiles, with a range of up to 250 kilometers. Finland also reported that it is preparing its twelfth package of military assistance to Ukraine.  

US aid to Ukraine is also being reaffirmed with a $2.85 billion package on top of weapon deliveries. Additionally, the US plans to deliver fourteen vehicles equipped with anti-drone systems as part of its security assistance package. The company L3Harris is part of the Pentagon’s contract to develop anti-drone kits. This equipment would help protect Ukrainian civil infrastructure, which has been a frequent Russian target since October 2022.  

On January 6, French President Emmanuel Macron announced that France would supply Ukraine with units of the light AMX-10RC armored reconnaissance vehicle. These vehicles, first produced in the 1970s, have been used in Afghanistan, the Gulf War, Mali, Kosovo, and Ivory Coast. The French defense ministry also announced that the country was to deliver twenty units of ACMAT Bastion armored personnel carriers.

On January 11, Ukrainian President Volodymyr Zelenskyy met with Presidents Andrzej Duda of Poland and Gitanas Nauseda of Lithuania in Lviv. During the visit, Duda announced that Poland would deliver fourteen units of the much-awaited German Leopard combat tanks, and Nauseda announced that his country would provide Ukraine with Zenit anti-aircraft systems. 

Meanwhile, the largest manufacturer of containers for the transport of liquified natural gas has ceased operations in Russia. French engineering group Gaztransport & Technigaz (GTT) said it ended operations in Russia after reviewing the latest European sanctions package, which included a ban on engineering services for Russian firms. The group said its contract with Russian shipbuilding company Zvezda to supply fifteen icebreakers to transport liquefied natural gas was suspended effective January 8.

Valentin Châtelet, Research Associate, Brussels, Belgium

Ruslan Trad, Resident Fellow for Security Research, Sofia, Bulgaria

Ukrainian envoy to Georgia discusses deteriorating relations between nations

On January 9, Andrii Kasianov, the Ukrainian Chargé d’Affaires in Georgia, published an article discussing the deteriorating relationship between the two countries. The article stated that the top issues affecting relations were military aid to Ukraine, bilateral sanctions against Russia, visa policies for fleeing Russians, and the legal rights of Mikheil Saakashvili, the imprisoned third president of Georgia, who is also a Ukrainian citizen. 

Kasianov noted that Tbilisi declined Kyiv’s request for military help, specifically for BUK missile systems, which were given to Georgia by Ukraine during Russia’s 2008 invasion. The diplomat said that the weapons request also included Javelin anti-tank systems supplied to Georgia by the United States.  

“Despite the fact that the Georgian government categorically refused to provide military aid, Ukraine opposes the use of this issue in internal political disputes and rejects any accusations of attempts to draw Georgia into a war with the Russian Federation,” Kasianov said. 

Since the Russian invasion of Ukraine, the Georgian Dream-led government has accused Ukraine, the US, and the EU of attempting to drag Georgia into a war with Russia.  

Eto Buziashvili, Research Associate, Tbilisi, Georgia

The post Russian War Report: Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize appeared first on Atlantic Council.

The West reaps multiple benefits from backing Ukraine against Russia https://www.atlanticcouncil.org/blogs/ukrainealert/the-west-reaps-multiple-benefits-from-backing-ukraine-against-russia/ Thu, 12 Jan 2023 16:43:23 +0000 https://www.atlanticcouncil.org/?p=601351 Ukraine is often viewed as being heavily reliant on Western support but the relationship is mutually beneficial and provides the West with enhanced security along with valuable intelligence, writes Taras Kuzio.

The post The West reaps multiple benefits from backing Ukraine against Russia appeared first on Atlantic Council.

As it continues to fight against Russia’s ongoing invasion, Ukraine is often depicted as being heavily reliant on Western military and economic support. However, this relationship is not as one-sided as it might initially appear. Western backing has indeed been crucial in helping Ukraine defend itself, but the democratic world also reaps a wide range of benefits from supporting the Ukrainian war effort.

Critics of Western support for Ukraine tend to view this aid through a one-dimensional lens. They see only costs and risks while ignoring a number of obvious advantages.

The most important of these advantages are being won on the battlefield. In short, Ukraine is steadily destroying Russia’s military potential. This dramatically reduces the threat posed to NATO’s eastern flank. In time, it should allow the Western world to focus its attention on China.

During the initial period of his presidency, Joe Biden is believed to have felt that the US should “park Russia” in order to concentrate on the far more serious foreign policy challenge posed by Beijing. Ukraine’s military success is now helping to remove this dilemma.

Defeat in Ukraine would relegate Russia from the ranks of the world’s military superpowers and leave Moscow facing years of rebuilding before it could once again menace the wider region. Crucially, by supporting Ukraine, the West is able to dramatically reduce Russia’s military potential without committing any of its own troops or sustaining casualties.

Backing Ukraine today makes a lot more strategic sense than allowing Putin to advance and facing a significantly strengthened Russian military at a later date. As former US Secretary of State Condoleezza Rice and former US Secretary of Defense Robert M. Gates wrote recently in The Washington Post, “The way to avoid confrontation with Russia in the future is to help Ukraine push back the invader now. This is the lesson of history that should guide us, and it lends urgency to the actions that must be taken, before it is too late.”

If this lesson is ignored and Ukraine is defeated, Russia will almost certainly go further and attack NATO member countries such as the Baltic nations, Finland, or Poland. At that point, it will no longer be possible to avoid significant NATO casualties.  

The international response to Russia’s invasion of Ukraine has also reshaped the geopolitical landscape far from the battlefield. Since February 2022, it has reinvigorated the West as a political force.

The war has given NATO renewed purpose and brought about the further enlargement of the military alliance in Scandinavia with the recent membership applications of Sweden and Finland. The EU is also more united than ever and has now overcome a prolonged crisis of confidence brought about by the rise of populist nationalist movements.

In the energy sector, Putin’s genocidal invasion has finally forced a deeply reluctant Europe to confront its debilitating dependency on Russian oil and gas. This has greatly improved European security and robbed the Kremlin of its ability to blackmail Europe with weaponized energy exports. It now looks likely that the era of corrupt energy sector collaboration with the Kremlin is drawing to a close, in Europe at least.

Western support for Ukraine is bringing a variety of practical military gains. While Ukraine’s Western partners provide vital battlefield intelligence, Ukraine returns the favor by offering equally valuable intelligence on the quality and effectiveness of Russian troops, military equipment, and tactics. The events of the past ten months have confirmed that pre-war perceptions of the Russian army were wildly inaccurate. Thanks to Ukraine’s unique experience and insights, Western military planners now have a far more credible picture of Moscow’s true military capabilities.

Ukraine’s MacGyver-like ability to adapt and deploy NATO weapons using Soviet-era platforms could prove extremely useful to the alliance in future conflicts. Ukrainian troops have proven quick to learn how to use Western weapons, often requiring far shorter training periods than those allocated to Western troops.

The innovative use of digital technologies by the Ukrainian military also offers invaluable lessons for their Western counterparts. Ukraine’s widespread deployment of Elon Musk’s Starlink system in front line locations is unprecedented in modern warfare and offers rare insights for all NATO countries.

Similarly, the war in Ukraine is highlighting the increasingly critical role of drone technologies. This is building on the experience of the Second Karabakh War in 2020, when Israeli and Turkish drones played an important part in Azerbaijan’s victory over Armenia.

Russia failed to invest sufficient resources into the development of military drones and has been forced to rely on relatively unsophisticated Iranian drones. In contrast, Ukraine enjoys a strong military partnership with Turkey that includes a deepening drone component. Turkey’s Bayraktar drones gained iconic status during the early stages of the Russian invasion. The company has since confirmed plans to build a manufacturing plant in Ukraine. 

In addition to these Turkish drones, Ukraine’s powerful volunteer movement and tech-savvy military have created a myriad of drone solutions to address the challenges of today’s battlefield. Ukraine’s rapidly evolving drone technologies are extremely interesting to Western military planners and will be studied in great detail for years to come.

Ever since the full-scale invasion of Ukraine began on February 24, 2022, Ukrainian forces have fused courageous fighting spirit with advanced intelligence and innovative use of battle management software. “Tenacity, will, and harnessing the latest technology give the Ukrainians a decisive advantage,” noted General Mark Milley, the current US Chairman of the Joint Chiefs of Staff.

The relationship between Ukraine and the country’s Western partners is very much a two-way street bringing significant benefits and strategic advantages to both sides. While Ukraine is receiving critical military and economic support, the Western world is benefiting from improved security along with important intelligence and unique battlefield experience. There is clearly a strong moral case for standing with Ukraine, but it is worth underlining that the strategic argument is equally convincing.

Taras Kuzio is professor of political science at the National University of Kyiv Mohyla Academy and author of the forthcoming “Russia’s War and Genocide Against Ukrainians.”

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

The post The West reaps multiple benefits from backing Ukraine against Russia appeared first on Atlantic Council.

2023 DC Cyber 9/12 Strategy Challenge https://www.atlanticcouncil.org/content-series/cyber-9-12-project/2023-dc-cyber-9-12-strategy-challenge/ Wed, 04 Jan 2023 21:38:23 +0000 https://www.atlanticcouncil.org/?p=599092 The Atlantic Council’s Cyber Statecraft Initiative, in partnership with American University’s School of International Service and Washington College of Law, will hold the eleventh annual Cyber 9/12 Strategy Challenge both virtually and in-person in Washington, DC on March 17-18, 2023. For the first time in the competition’s history, we will be hosting a hybrid event. Teams […]

The post 2023 DC Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

The Atlantic Council’s Cyber Statecraft Initiative, in partnership with American University’s School of International Service and Washington College of Law, will hold the eleventh annual Cyber 9/12 Strategy Challenge both virtually and in-person in Washington, DC on March 17-18, 2023. For the first time in the competition’s history, we will be hosting a hybrid event. Teams are welcome to attend either virtually via Zoom, or in-person at American University’s Washington College of Law. The agenda and format will look very similar to past Cyber 9/12 Challenges, except that it will be held in a hybrid format. Plenary sessions will be livestreamed via Zoom.

Held in partnership with:

Frequently Asked Questions: Virtual

How do I log in to the virtual sessions? 

Your team will be sent an invitation to your round’s Zoom meeting the day before the event, using the emails provided during registration.

How will I know where to log in, and where is the schedule? 

We will send out links to Zoom webinars and meetings, along with an agenda, the day before the event. 

How are the virtual sessions being run? 

Virtual sessions will be run very close to the traditional competition structure and rules. Each Zoom meeting will be managed by a timekeeper. This timekeeper will ensure that each team and judge logs on to the conference line and will manage the competition round.  

At the beginning of the round, decision documents will be shared by the timekeeper via Zoom and judges will have 2 minutes to review the documents prior to the competitors’ briefing.  

Teams will have 10 minutes to present their briefing and 10 minutes for Q&A. Judges will be asked to mute themselves for the 10-minute briefing session.

Judges will then be invited to a digital breakout room and will have 5 minutes to discuss scores and fill out their scorecards via JotForm.  

After the scoring is over, judges will have 15 minutes to provide direct feedback to the team.  

A 10-minute break is scheduled before the start of the next round. Each round has been allotted several minutes of transition time for technical difficulties and troubleshooting. 

What do I need to log into a virtual session?  

Your team will need a computer (recommended), tablet, or smartphone with a webcam, microphone, and speaker or headphones. 

Your team will be provided with a link to the Zoom conference for each competition round your team is scheduled for. If you have any questions about the software, please see Zoom’s internal guide here. 

Will my team get scored the same way on Zoom as in-person? 

Yes, the rules of the competition remain the same, including the rubric for scoring. 

How will my team receive Intel Pack 2 and Pack 3? 

We will send out the intelligence packs via email to all qualifying teams. 

How will the final round be run? 

The final round will be run identically to the traditional final round format, except that the only participants allowed in the competition Zoom conference will be the final round judges and the assigned team.

Finalists will not be able to watch the presentations of other teams in real time. Final rounds will be recorded and published on the Atlantic Council website after the final round ends.

Frequently Asked Questions: In-person

Where will the event be held in-person? 

For participants attending in-person, the Cyber 9/12 Strategy Challenge will be held at American University’s Washington College of Law (WCL). On the evening of Friday, March 17, there will be a reception held at the Atlantic Council offices downtown. Further information about the Atlantic Council offices can be found here.

What time will the event start and finish? 

While the schedule has not yet been finalized, participants will be expected at American University WCL at 8:00am on Day 1, and the competition will run until approximately 5:00pm, with an evening reception at approximately 6:30pm. Day 2 will commence at approximately 9:00am and will finish at approximately 4:00pm. The organizing team reserves the right to modify the above timing. The official schedule of events will be distributed to teams in advance of the event. All times are EST.

Can teams who are eliminated on Day 1 still participate in Day 2 events? 

Yes! All teams are welcome at all of the side-programming events. We strongly encourage teams eliminated on Day 1 to attend the competition on Day 2.

Will meals be included for in-person attendees?

Yes, breakfast and lunch will be provided for all participants on both days. Light refreshments & finger foods will be provided at the evening reception on Day 1.

What should I pack/bring to a Cyber 9/12 event?

At the event: Please bring at least 4 printed copies of your decision documents for the judges on Day 1. We will help print documents on Day 2. Name tags will be provided to all participants, judges, and staff at registration on March 17. We ask you to wear these name tags throughout the duration of the competition. Name tags will be printed using the exact first and last name provided upon registration.

Dress Code: We recommend that students dress in at least business casual attire as teams will be conducting briefings. You can learn more about business casual attire here.

Electronic Devices: Cell phones and laptops will not be used during presentations but we recommend teams bring their laptops as they will need to draft their decision documents for Day 2 and conduct research. Please refer to the competition rules for additional information.

How do we get to American University?

American University is on the DC Metro Red line. Metro service from both Dulles International Airport (IAD) and Reagan National Airport (DCA) connect with the Metro Red Line at Metro Center. 

Zoom

What is Zoom? 

Zoom is a free video conferencing application. We will be using it to host the competition remotely. 

Do I need to pay for Zoom to participate? 

No.  

Do I need a Zoom account? 

You do not have to have an account BUT we recommend that you do and download the desktop application to participate in the Cyber 9/12 Strategy Challenge. 

Please use your real name to register so we can track participation. A free Zoom account is all that is necessary to participate.  

What if I don’t have Zoom? 

Zoom is available for download online. You can also access Zoom conferences through a browser without downloading any software or registering.  

How do I use Zoom on my Mac? Windows? Linux Machine? 

Follow the instructions here and here to get started. Please use the same email you registered with for your Zoom to sign up.

Can I use Zoom on my mobile device? 

Yes, but we recommend that you use a computer or tablet.

Can each member of my team call into the Zoom conference line independently for our competition round? 

Yes. 

Can other teams listen-in to my team’s session? 

Zoom links to competition sessions are team specific—only your team and your judges will have access to a session and sessions will be monitored once all participants have joined. If an observer has requested to watch your team‘s presentation, your timekeeper will notify you at the start of your round.

Staff will be monitoring all sessions and all meetings will have a waiting room enabled in order to monitor attendance. Any team member or coach in a session they are not assigned to will be removed and disqualified. 

Troubleshooting

What if my team loses internet connection or is disconnected during the competition? 

If your team experiences a loss of internet connection, we recommend following Zoom’s troubleshooting steps listed here. Please remain in contact with your timekeeper.

If your team is unable to rejoin the Zoom conference – please use one of the several dial-in lines included in the Zoom invitation.  

What if there is an audio echo or other audio feedback issue? 

There are three possible causes for audio malfunction during a meeting: 

  • A participant has both the computer and telephone audio active. 
  • A participant’s computer and telephone speakers are too close together. 
  • Multiple participant computers with active audio are in the same room.  

If this is the case, please disconnect the computer’s audio from other devices, and leave the Zoom conference on one computer. To avoid audio feedback issues, we recommend each team use one computer to compete. 

What if I am unable to use a video conference, can my team still participate? 

Zoom has dial-in lines associated with each Zoom conference event and you are able to call directly using any landline or mobile phone. 

We do not recommend choosing voice-only lines unless absolutely necessary.

Other

Will there be keynotes or any networking activity remotely? 

Keynotes will continue as reflected on our agenda and will be broadcast with links to be shared with competitors the day before the event.  

We also encourage competitors and judges to join the Cyber 9/12 Strategy Challenge Alumni Group on LinkedIn where we will post job vacancies and internship opportunities. 

How should I prepare for a Cyber 9/12?

Check out our preparation materials, which includes past scenarios, playbooks including award-winning policy recommendations and a starter pack for teams that includes templates for requesting coaching support or funding.

The post 2023 DC Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

The 5×5—The cyber year in review https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-cyber-year-in-review/ Wed, 14 Dec 2022 05:01:00 +0000 https://www.atlanticcouncil.org/?p=594701 A group of experts reviews the highs and lows of the year in cybersecurity and look forward to 2023. 

The post The 5×5—The cyber year in review appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

One year ago, the global cybersecurity community looked back at 2021 as the year of ransomware, as the number of attacks nearly doubled over the previous year and involved high-profile targets such as the Colonial Pipeline—bringing media and policy attention to the issue. Now, a year later, the surge of ransomware has not slowed, as the number of attacks hit yet another record high—up 80 percent from 2021—despite initiatives from the White House and the Cybersecurity and Infrastructure Security Agency (CISA). The persistence of ransomware attacks shows that the challenge will not be solved by one government alone, but through cooperation with friends, competitors, and adversaries.

Russia’s full-scale invasion of Ukraine, the landmark development of 2022, indicates that this challenge will likely remain unsolved for a while. Roughly three-quarters of all ransomware revenue makes its way back to Russia-linked hacking groups, and cooperation with the Kremlin on countering these groups is unlikely to yield much progress anytime soon. Revelations in the aftermath of Russia’s invasion confirmed suspicions that Russian intelligence services not only tolerate ransomware groups but give some of them direct orders. 

Ransomware was not the only cyber issue to define 2022, as other challenges continued, from operational technology to workforce development, and various public and private-sector organizations made notable progress in confronting them. We brought together a group of experts to review the highs and lows of the year in cybersecurity and look forward to 2023. 

#1 What organization, public or private, had the greatest impact on cybersecurity in 2022?

Rep. Jim Langevin, US Representative (D-RI); former commissioner, Cyberspace Solarium Commission

“I think we have really seen the Joint Cyber Defense Collaborative (JCDC) come into its own this year. We saw CISA, through JCDC, lead impressive and coordinated cyber defense efforts in response to some of the most critical cyber emergencies the Nation faced in 2022, including the Log4Shell vulnerability and the heightened threat of Russian cyberattacks after its invasion of Ukraine.” 

Wendy Nather, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; head of advisory CISOs, Cisco

“I would argue that Twitter has had the most impact on cybersecurity. As a global nexus for public discourse, security research, threat intelligence sharing, media resources, and more, its recent implosion has disrupted essential communications and driven many cybersecurity stakeholders to seek connectivity elsewhere. We will probably continue to see the effects of this disruption well into 2023 and possibly beyond.” 

Sarah Powazek, program director, Public Interest Cybersecurity, UC Berkeley Center for Long-Term Cybersecurity

“CISA. The cross-sector performance goals and the sector-specific 100-Day Cyber Review Sprints this year are paving the way for a more complete understanding and encouragement of cybersecurity maturity in different industries. It is finally starting to feel like we have a federal home for nationwide cybersecurity defense.” 

Megan Samford, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; vice president and chief product security officer for energy management, Schneider Electric

“I think all of us feel that it has to be the warfighting efforts that are going on in the background of the Ukraine war—these are the ‘known unknown’ efforts. If we take that off the table though, I would say it is not an organization at all, it is a standard (IEC 62443). As boring as it is to say that standards work, right now industry most needs time for the standards to be adopted to reach a minimum baseline. If we fail to achieve standardization, we will see regulation—both achieve the same things at different paces with different tradeoffs.” 

Gavin Wilde, senior fellow, Technology and International Affairs Program, Carnegie Endowment for International Peace

“The State Special Communications Service of Ukraine (SSSCIP), which has deftly defended and mitigated against Russian cyberattacks throughout Moscow’s war. SSSCIP’s ability to juggle those demands while coordinating and communicating with a vast array of state and commercial partners has improved the landscape for everyone.”

#2 What was the most impactful cyber policy or initiative of 2022? 

Langevin: “The Cyber Incident Reporting for Critical Infrastructure Act, or CIRCIA. Its impact lies not only in its effect—which will dramatically improve the federal government’s visibility of cyber threats to critical infrastructure—but also in the example it has set for how Congress, the executive branch, and the private sector can effectively work together to craft major legislation that will make the country fundamentally safer in cyberspace.” 

Nather: “I have to call out CISA’s election security support at this crucial point in our Nation’s fragile and chaotic state. It continues to provide excellent information and resources—particularly the wonderfully named “What to Expect When You are Expecting an Election” and video training to help election workers protect themselves and the democratic process. Reaching out directly to stakeholders and citizens with the education they need is every bit as important as the ‘public-private partnership’ efforts that most citizens never encounter.” 

Powazek: “CISA’s State and Local Cybersecurity Grant Program and Tribal Cybersecurity Grant Program. The programs will dole out $1 billion in cyber funding to state, local, tribal, and territorial governments over four years, with at least 25 percent of those funds earmarked for rural areas. If that money is invested well, it will be an incredible boon to critical public agencies struggling to improve their cybersecurity maturity, and it can better protect millions of people.” 

Samford: “Software bill of materials (SBOM), but not for the reasons people may think. SBOM is a very useful tool in managing risk, provided that organizations already have good asset inventory capability. In operational technology, asset inventory is an area that asset owners continue to struggle with, so the benefit from SBOM is more of a long-term journey. That is why I say SBOM, but not for the reasons people think. In my mind what I think was most impressive around SBOM was that it demonstrated that the industry can successfully rally and rapidly standardize around very specific asks. SBOM came together because it had three things: 1) common industry understanding of the problem; 2) existing tooling that, for the most part, did not require new training; and 3) government policy and right-sized program management.” 
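
Samford’s point about asset inventory can be made concrete with a small illustration. The sketch below is hypothetical in every detail (the asset names, component versions, and inventory entries are invented, and real SBOMs use standardized formats such as SPDX or CycloneDX rather than ad hoc Python structures); it simply shows why SBOM data becomes actionable only once an organization knows which assets it actually operates.

```python
# Illustrative only: component names, versions, and asset names are hypothetical.
# Real SBOMs use standardized formats such as SPDX or CycloneDX, not ad hoc dicts.

sbom = {  # software bill of materials per asset
    "crane-control-hmi": [{"name": "log4j-core", "version": "2.14.1"}],
    "badge-reader-gw":   [{"name": "log4j-core", "version": "2.14.1"}],
}

asset_inventory = {  # what the organization knows it operates, and where
    "crane-control-hmi": {"site": "terminal-3", "owner": "ot-engineering"},
    # "badge-reader-gw" is deployed but was never inventoried
}

def exposed_assets(component, version):
    """List assets shipping a vulnerable component, flagging inventory gaps."""
    known, untracked = [], []
    for asset, components in sbom.items():
        if any(c["name"] == component and c["version"] == version for c in components):
            (known if asset in asset_inventory else untracked).append(asset)
    return known, untracked

located, missing = exposed_assets("log4j-core", "2.14.1")
print("can be located and patched:", located)    # ['crane-control-hmi']
print("affected but not inventoried:", missing)  # ['badge-reader-gw']
```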

Wilde: “The European Union’s proposed Cyber Resilience Act, which is poised to update and harmonize the regulatory environment across twenty-seven member states and set benchmarks for product and software security—particularly as both cybercrime and Internet-of-Things applications continue to proliferate. The proposals offer a stark contrast between a forward-looking regulatory regime, and a crisis-driven reporting and mitigation one.”

#3 What is the most important yet under-covered cyber incident of 2022?

Langevin: “I think it is worth reminding ourselves just how serious the ransomware attacks were that crippled the Costa Rican government this year. This was covered in the news, but from a policy perspective, I think it warrants a deeper conversation about what the United States can be doing on the international stage to double down on capacity-building and incident response efforts with allies, particularly those more vulnerable to such debilitating attacks. Part of that conversation needs to include a commitment to ensuring that our government actors, like the State Department’s Bureau of Cyberspace and Digital Policy, have the appropriate resources and authorities to effectively provide that assistance.”

Nather: “The Twilio breach (although Wired did a good job covering it). It is important because although SMS is a somewhat-reviled part of our security infrastructure, it is utterly necessary, and will continue to be long into the future.”

Powazek: “The Los Angeles Unified School District (LAUSD) ransomware attack by Vice Society was highly covered in the news, but I think the implications are resounding. LAUSD leaders refused to pay the ransom, maintained transparency with students and parents, and were able to move forward with minimal downtime. It was a masterclass in incident management, and I was thrilled to see a public institution take a stand against ransomware actors and recover quickly.”

Samford: “Uber’s chief information security officer (CISO) going to jail. This has turned the industry on its head and forced people to challenge what it means to be an executive in this industry and make decisions that can land you—not the chief executive officer or chief legal counsel—in jail. What is the compensation structure for this amount of risk taking? I have heard of CISOs being called the ‘chief look around the corner officer’ or the ‘chief translation officer,’ but now has the CISO become the ‘chief scapegoat officer’?”

Wilde: “The US Department of Justice’s use of ‘search and seizure’ authority (Rule 41 of the federal criminal code) to neutralize a botnet orchestrated by the Russian GRU. So many fascinating elements of this story—including the legal and technical implications of the operation, as well as the cultural shift at DOJ—seem to have gone underexamined. Move over, NSPM-13…”

#4 What cybersecurity issue went unaddressed in 2022 but deserves greater attention in 2023?

Langevin: “I am hopeful that this answer proves to be wrong before the end of the year, but right now, it is the lack of a fiscal year (FY) 2023 budget. The federal government has a wide array of new cybersecurity obligations stemming from recent legislation and Biden administration policy, but agencies will struggle to fulfill these responsibilities if Congress does not provide appropriate funding for them to do so. Keeping the government at FY22 funding levels simply is not good enough; if we want to see real progress, we need to pass a budget.” 

Nather: “One trend I see is that there is almost no check on technological complexity, which is the nemesis of security. Simply slapping another ‘pane of glass’ on top of the muddled heap is not a long-term solution. I believe we will see more efforts to consolidate underlying infrastructure for many reasons, among them cost and ease of administration, but cybersecurity will be one of the loudest stakeholders.” 

Powazek: “The United States still does not have a scalable solution for providing proactive cyber assessments to folks who cannot afford to hire a consulting firm. There are lots of toolkits available, but some organizations do not even have the staff or time to consume them, and there is no substitute for face-to-face assistance. We could use more solutions like cybersecurity clinics and regional cyber advisors that address this market failure and help organizations increase resiliency to cyberattacks.” 

Samford: “Coordinated incident response as well as whistleblower protection. If you want safety-level protections in cybersecurity, you need safety-level whistleblower protections. In the culture of safety, based on decades of culture development and nurturing, whistleblowing is a key enabler. It is based on a basic truth that anyone in an organization can ‘stop the line’ if they see unsafe behavior. In cyber, we lack ‘stop the line’ power and, in many cases, individuals fail to report risk because of fear of attribution and retaliation. That is why, in my mind, the topic of whether or not whistleblower protection should become a cyber norm remains something that has gotten little attention but it is a critical decision point in how the cyber community wants to move forward. Will we have more of a tech-based culture or a safety-based culture?  

As far as coordinated incident response, we estimate that upward of 80 percent of the cyber defense capacity resides in the private sector, yet very few mechanisms exist to coordinate these resources alongside a government-led response. We have not yet figured out how to tap that pool of resources, and I fear that we are going to have to learn it quickly one day should such attacks occur that require rapid and consistent response coordination, such as a targeted, campaign-style cyberattack linked with physical impact on critical infrastructures. Using the Incident Command System could solve for this, and the ICS4ICS program is picking up this challenge.”

Wilde: “Privacy and data protection. The ‘Wild West’ of data brokerages and opaque harvesting schemes that enables illicit targeting and exploitation of vulnerable groups poses as much a threat to national security as any foreign-owned applications or state intelligence agencies.”

#5 What do the results of the 2022 midterm elections in the United States portend for cybersecurity legislation in the 118th Congress?

Langevin: “The cybersecurity needs of the country are too great for Congress to get bogged down in partisan fighting, and I think there are bipartisan groups of lawmakers in both chambers who understand that. There may be philosophical differences on certain issues that are more pronounced in a divided Congress, but I expect that we will still see room for effective policymaking to improve the Nation’s cybersecurity. The key to progress, as it would have been no matter who controlled Congress, will be continuing to build Members’ policy capacity on these issues, lending a broader base of political support to those Members who understand the issues and can lead the charge on legislation.”

Nather: “Some of the centrist leaders from both parties who led on cybersecurity, such as John Katko (R-NY) and Jim Langevin (D-RI), are retiring. And Will Hurd (R-TX), who held a similar role—working across the aisle on cybersecurity issues—in the previous Congress, is gone. As the work on cybersecurity legislation has historically stayed largely above the political fray, it will be interesting to see who steps up to build consensus on this critical topic.”

Powazek: “The retirement of policy powerhouses Rep. John Katko and Rep. Jim Langevin leaves an opening for more cyber leadership, and the recent elections are our first glimpse of who those leaders may be. As a Californian, I am particularly excited about Rep. Ted Lieu and Senator Alex Padilla, both of whom are poised for cyber policy leadership.”

Samford: “More focus on zero trust, supply chain, and security of build environments. These are efforts that all have bipartisan support and engagement.”

Wilde: “The retirement of several of the most driven and conversant members does not bode well for major cybersecurity initiatives in Congress next session. Diminished expertise is not only a hurdle from a substantive perspective, but it also makes it difficult to avoid cyber issues falling victim to other political and legislative agendas from key committees.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—The cyber year in review appeared first on Atlantic Council.

]]>
Wargaming to find a safe port in a cyber storm https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/wargaming-to-find-a-safe-port-in-a-cyber-storm/ Mon, 12 Dec 2022 15:00:00 +0000 https://www.atlanticcouncil.org/?p=593535 With the Maritime Transportation System increasingly reliant on cyberspace, how can cybersecurity be improved within key nodes of this critical infrastructure, particularly cargo ports?

The post Wargaming to find a safe port in a cyber storm appeared first on Atlantic Council.

]]>

Executive summary

With the Maritime Transportation System increasingly reliant on cyberspace, how can cybersecurity be improved within key nodes of this critical infrastructure, particularly cargo ports? Given the close relationship between the cyber and maritime domains, wargaming provides a useful tool for examining the potential threats and opportunities. This includes the attack surfaces, prioritization challenges, and coordination advantages highlighted by the new maritime cyber wargame Hacking Boundary.

Introduction

Critical infrastructure is rarely headline news—not until something goes very wrong—and the maritime transportation system (MTS) is no exception. The MTS, which is responsible for the safe transport of the majority of international trade, is vital to the global economy.1 From backlogged cargo at port facilities during the COVID-19 pandemic to the Ever Given container ship blocking the Suez Canal, recent events have highlighted the vulnerability of maritime transportation, and how impactful disruptions to that system can be to everyday life.2

Broadly speaking, the MTS consists of all the waterways, vehicles, and ports that are used to move people and goods via water.3 The volume of goods moved in this way is particularly striking, with most of the world’s cargo carried by sea—between 70 and 90 percent, depending on how the cargo is counted. For the United States, the MTS contributes to nearly 25 percent of gross domestic product, totaling around $5.4 trillion.4 It is also essential to the US ability to project military power. Today, as for the past century, sealift—the use of cargo ships to deploy military assets—is responsible for transporting the vast majority of US military matériel around the world.5

Unfortunately, this critical infrastructure is under threat. Along with natural disasters and human errors, cyberattacks are increasingly threatening the MTS. In 2017, a destructive and rapidly propagating piece of malware known as NotPetya spread from Ukraine around the world.6 One of the many NotPetya victims was Maersk, the world’s largest shipping company. This single cyber incident cost the shipping giant approximately $300 million,7 and the price would have been much higher, were it not for a single uninfected server in Ghana. During another cyber incident just last year, foreign government-backed hackers were suspected of breaching information systems at the Port of Houston, further demonstrating that maritime transportation is firmly in the crosshairs.8 NotPetya, the Port of Houston, and other cyberattacks against various kinds of critical infrastructure—including the ransomware attack on Colonial Pipeline in 2021—provide an ominous glimpse into the threat environment.

Learning through gaming

Global and national security depend on understanding and mitigating threats to the MTS. The US government has taken some steps in this direction, including the National Maritime Cybersecurity Plan released in December 2020. More needs to be done, however, and one way to explore what is needed is cyber wargaming, a useful tool for examining the complex and confusing problems involved with cyber and physical threats to critical infrastructure.

Working with Ed McGrady, the Cyber & Innovation Policy Institute (CIPI) at the US Naval War College in Newport, Rhode Island, hosted government officials, military service members, students, and academics to play Hacking Boundary: A Game of Maritime Cyber Operations.9 This wargame addresses a hypothetical cyberattack against a major US port facility, and the first iteration of the game was played at the CIPI Summer Workshop on Maritime Cybersecurity in June 2022.

The second iteration of the game, conducted in partnership with the Atlantic Council’s Cyber Statecraft Initiative, was held at the Industrial Control Systems Village at the DefCon Hacking Conference in August 2022 in Las Vegas, Nevada. This iteration featured participants from across the maritime ecosystem, including active duty US Navy and Coast Guard personnel,  penetration testers, private sector operators, and many more.

This brief describes Hacking Boundary, along with several strategic and policy implications illuminated by repeated game play. The core takeaways include: (1) understanding the large attack surfaces of port facilities and the lead times that may be required to attack them; (2) the difficulties of prioritizing how and when to spend scarce resources; and (3) understanding that the tensions between competition and coordination, if navigated wisely, may offer defenders marginal—but valuable—advantages when providing maritime cybersecurity.

Scenario and players

Imagine a major US port facility, modeled on the Port Elizabeth Intermodal Complex of New York and New Jersey, in the year 2027. The facility includes a terminal along the water. Within the terminal are the yard, gantry cranes, cargo containers, scales, semitrucks, inspection sites, gates, and administrative offices needed to load, offload, process, and move 1.8 million twenty-foot equivalent units (TEUs) annually, or approximately 43 million tonnes of cargo. Connecting all of this equipment, and the people operating it, are local area networks, Wi-Fi, radios, phones, and wires, forming a complex web of near constant communication.

[Image: game play in Las Vegas.]

When an ultra large container ship carrying 21,000 TEUs enters port, all of this information and operational technology is put to work. Positioning systems and radio communication with pilot ships help steer the container ship into a berth; cargo data files are digitally sent to the port authority; local security contractors screen the cargo; and access control handles the hundreds of trucks required to move the cargo. Work that was once handled by thousands of people is now performed by computers, scanners, remote closed-circuit television cameras, and routers working both autonomously and with human support. Underpinned by cyberspace, this daily routine unfolds at a massive scale and pace.

During the wargame, teams of defenders and attackers face off in this cyber-physical environment. On the defending team, the maritime shipping industry is represented by a fictitious private firm called Worldwide Logistics Operations (WLO), which leases the container terminal. WLO runs the information technology (IT) infrastructure for the terminal. It also cooperates with local authorities and the federal government, played by another team of defenders. The attackers are broken into four groups, each representing different kinds of advanced persistent threats (APTs) with their own background, expertise, and modus operandi. These attackers range from independent cyber criminals to mercenaries to groups with ties to foreign intelligence organizations. Overseeing the contest between the attackers and the defenders is a game master, who helps construct and control the game narrative and, in the process, judges the outcome of each team’s moves.

Game play

This game is played over multiple turns, with each turn representing a month in the real world. At the start of each turn, the attacking and defending teams both draw random event cards. Possible events range from good news (e.g., receiving an unexpectedly large budget) to bad news (e.g., a power outage or having members of your team poached by the competition). These events are intended to represent some of the unpredictable realities faced by both parties in the real world. With a random event card in hand, each team plans their course of action.
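
As a rough illustration of the turn structure, the sketch below simulates the draw of one event card per side at the start of each monthly turn. The card names beyond the examples given in the text (a budget increase, a power outage, poached staff) are invented, and Python stands in for what is, in reality, a physical deck of cards.

```python
import random

# Hypothetical event decks; the published description does not list the real cards.
DEFENDER_EVENTS = ["unexpectedly large budget", "power outage", "team member poached"]
ATTACKER_EVENTS = ["exploit broker sale", "affiliate arrested", "extra reconnaissance window"]

def start_turn(month):
    """Each turn represents one month; both sides draw a random event before planning."""
    return {
        "month": month,
        "defender_event": random.choice(DEFENDER_EVENTS),
        "attacker_event": random.choice(ATTACKER_EVENTS),
    }

for month in range(1, 4):  # three illustrative turns
    print(start_turn(month))
```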

The defending team’s objective is to prevent port terminal intrusions and establish resilient systems that fail gracefully, minimizing potential disruption or damage. Given a limited budget, represented in the game as coins, the team must make choices that involve difficult trade-offs. For example, defenders could prioritize security training and upgraded hardware but, as a consequence, they may have insufficient resources to conduct penetration testing to identify other potential vulnerabilities. Or, they could choose to conduct penetration testing, but then lack resources to fix the vulnerabilities they find. It is also important to safeguard port facility physical security against theft and illicit access to critical systems. The networked nature of cyber and physical systems means that neglecting one could expose the other to risk.

The objective of the attacking teams is to secure a profit at the expense of the port and the WLO. Attackers start the game with a set budget. They can earn additional coins by completing missions ranging from exfiltrating data to causing physical damage. To complete a mission successfully, an attacking team must allocate limited resources to hiring the right people for the job, including technical experts to defeat defensive measures. For simplicity, the categories of expertise in this game are: social, physical, network, malware, operating system, applications, electronics, and cryptography. Attackers must also acquire the capabilities needed to accomplish their mission, such as tailored malware or radio-frequency identification scanners. This wargame emphasizes the full breadth of the cyber kill chain, including preplanning and lateral movement over time.10 Attackers may also take cyber actions that do not have immediate effects, laying the foundation for success later in the game.
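
The attacker-side bookkeeping can be sketched as a simple data structure. Only the eight expertise categories below come from the game as described here; the mission name, coin costs, and budget are hypothetical. The sketch checks whether a hired roster covers a mission's required expertise within budget, which is the core trade-off attacking teams face each turn.

```python
# Expertise categories named in the game description; missions and costs are invented.
EXPERTISE = {"social", "physical", "network", "malware",
             "operating system", "applications", "electronics", "cryptography"}

mission = {"name": "exfiltrate cargo manifests", "requires": {"social", "network", "malware"}}

hires = [  # (specialty, cost in coins) -- illustrative numbers only
    ("social", 2),
    ("network", 3),
    ("malware", 4),
]

budget = 10
spent = sum(cost for _, cost in hires)
covered = {skill for skill, _ in hires}

assert covered <= EXPERTISE  # sanity check: only valid specialties were hired
ready = mission["requires"] <= covered and spent <= budget
print(f"Mission '{mission['name']}' staffed and affordable: {ready} ({spent}/{budget} coins)")
```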

The respective plans of attackers and defenders—and the logic behind them—interact via the game master, who determines the likelihood of success or failure. Outcomes are determined through discussion, with each team arguing their case about defensive measures taken at the port terminal, the complexity of the attack, and the personnel and capabilities dedicated to the job. This part of the game is where the collective expertise of each team really shines. Based on these discussions, the game master assesses the probability of an attack succeeding.

Chance is incorporated by rolling dice. For example, an attack with a 50 percent probability of success means that the attacking team must roll an eleven or higher on a twenty-sided die. More difficult attacks require a higher roll to succeed; easier attacks can succeed with a lower roll. The dice rolls determine if the attacker successfully completes all or part of their chosen mission.
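
The dice mechanic maps onto a simple calculation: on a twenty-sided die, a required roll of t or higher succeeds with probability (21 - t)/20, so a 50 percent chance corresponds to needing an eleven or better. The sketch below is a minimal illustration of that conversion, not the game's official rules engine.

```python
import random

def threshold_for(probability):
    """Lowest d20 roll that succeeds for a given assessed probability.
    P(roll >= t) = (21 - t) / 20, so t = 21 - probability * 20."""
    return max(1, min(20, 21 - round(probability * 20)))

def attempt(probability):
    roll = random.randint(1, 20)  # twenty-sided die
    return roll, roll >= threshold_for(probability)

print(threshold_for(0.50))  # -> 11, matching the example in the text
print(attempt(0.50))        # e.g. (14, True)
```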

Successful missions pay off in coins, building a unique narrative for the game. However, there is also the risk of discovery, modeled in the game as another roll of the dice by the team for “forensic points.” Depending on the complexity of the move, attacking teams incur higher or lower forensic points. Too much bravado or sloppy tradecraft risks teams being discovered by defenders and having all of their coins seized by the authorities. As is sometimes the case in real life, a bit of bad luck can mean the difference between striking it rich or losing it all.
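
One way to see the tension the forensic-point mechanic creates is a simple expected-value comparison. Every number below is invented (the game's actual mapping from forensic points to discovery odds is not spelled out here); the sketch only illustrates why a noisier, higher-paying move can be a worse bet than a quiet one once the risk of losing the whole coin stash is priced in.

```python
def expected_gain(payout, p_success, p_discovery, coins_at_risk):
    """Hypothetical expected value of an attack: win the payout if it succeeds,
    lose the team's banked coins if forensic evidence gets the team caught."""
    return p_success * payout - p_discovery * coins_at_risk

stash = 12  # coins the attacking team has already banked (illustrative)
quiet = expected_gain(payout=4, p_success=0.6, p_discovery=0.05, coins_at_risk=stash)
loud = expected_gain(payout=9, p_success=0.7, p_discovery=0.40, coins_at_risk=stash)
print(f"quiet move: {quiet:+.1f} coins expected, loud move: {loud:+.1f} coins expected")
```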

When the success, failure, and payoff of all the teams’ actions have been decided, the next turn of the game begins with another round of event cards, planning, and outcome adjudication. Typically, each turn takes about an hour. There is no constraint on how many turns can be played, though higher-stakes missions take longer to accomplish. Whenever the game ends, a victor is determined just for fun. Defender success is measured by the number of attacks successfully repelled versus attempted intrusions into the port or related networks. For attackers, success depends on the number of missions accomplished, their coin haul, and not getting caught.

Game takeaways from Newport and Las Vegas

Observations from only a few iterations of this game, with different players, do not constitute authoritative evidence. Even so, preliminary takeaways contain potentially important insights for maritime cyber and broader cybersecurity challenges facing critical infrastructure.

Attack surfaces and lead times

Large and varied attack surfaces challenge defenders and provide attackers with numerous opportunities for exploitation. This wargame only captured some of the complexities of real-world maritime infrastructure. Nevertheless, it illustrated the importance of interrelationships and dependencies in a cyber-physical system. Subject matter experts who played the game showed how hypothetical attackers might probe several points of entry that intersected with even this simplified version of a cargo port. The attempted exploits were both physical (e.g., breaking and entering or conducting reconnaissance at a local pub frequented by port security) and cyber (e.g., phishing, injecting malware via flash drives, or hacking shipboard systems using a Raspberry Pi). The various attack options illustrate the myriad vulnerabilities of these complex facilities.

Put another way, no port is an island. Accidents and attacks outside the facility, such as disrupting a pump station or a nearby rail line, could still impact maritime operations by, for example, paralyzing road traffic around the cargo terminal. These interdependencies highlight the need to broaden the conceptual and operational boundaries of maritime cybersecurity as currently and traditionally conceived. In the wargame, defenders overlooked these external relationships, to their detriment.

While the multitude of attack options seemed to afford the attackers with endless choices, carrying out the attacks in this complicated environment took time. Successful attacks often required long lead times for planning and execution. In the game, as in real life, the cyber kill chain had multiple links spread out over time and, in some cases, over physical space. For example, some attacking teams probed physical security at the port early on, in an attempt to gather useful intelligence. Later, they exfiltrated data through lateral moves within the target network, exploiting access gained through phishing.

Both the large attack surfaces and the long lead times reaffirmed a well-known argument in cybersecurity that nevertheless bears repeating: defending a network is a lengthy and dynamic process, comprised of many different steps. Several attacks crossed multiple systems, spanning three or four moves in the game before a full picture of the offensive operation became apparent. The dramatic image of hackers running a rogue ship aground distracts from much of the preparatory, and seemingly mundane, work that would go into such an attack (e.g., orchestrating a phishing campaign against the cleaning company subcontracted to service the port bathrooms).

Key Takeaway

  • Maritime infrastructure consists of complex systems, which provide numerous opportunities for exploitation but also entail complicated kill chains.

Prioritization and resilience

The sheer number and variety of vulnerabilities to exploit and defend during game play posed serious challenges for players about how to allocate their scarce resources. Effective prioritization was a deciding factor for both attackers and defenders.       

For their part, attackers had to invest in capabilities and staffing to effectively penetrate target systems and accomplish mission objectives. Missteps or bad luck could result in a failed mission, setting attackers back in terms of time and money. For defenders, early investments to bolster security tended to have a large impact on their ability to thwart attacks later in the game. Defenders also needed to retain resources—and acquire skills—to dynamically (re)allocate defensive capabilities and capacities, which were then distributed across physical and network infrastructure, as well as across shipboard and terminal information systems. With limited resources at their disposal, poorly chosen priorities or bad luck could leave defenders struggling to respond to even basic incidents. Lack of defensive planning, or a purely reactive posture, provided attackers with dangerous freedom of movement.

Here again, the wargame only captured some of the real-life complexity, underscoring the very real challenge and necessity of prioritization. While critical infrastructure is, by definition, “critical,” some systems within it are more important than others, and some problems are easier to solve. Prioritizing investments where ease and importance overlap may seem obvious, but many of the tradeoffs are acute, presenting hard choices. As will be discussed, these choices are easier when public agencies and private firms share useful cyber intelligence. Each party may make different decisions about how to prioritize and allocate their respective resources, but both stand to benefit from pooling information about the threat environment.  
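
The overlap of ease and importance can be made explicit with a toy ranking. The control names, costs, and importance scores below are hypothetical, and greedy ranking by importance per coin is just one simple heuristic rather than an endorsed methodology; the point is only that writing the trade-off down forces the hard choices into the open.

```python
# Hypothetical candidate investments: (name, cost in coins, importance score 1-10).
candidates = [
    ("patch crane-control servers", 3, 9),
    ("phishing training for terminal staff", 2, 6),
    ("segment office and OT networks", 5, 8),
    ("penetration test of vendor portal", 4, 5),
]

budget = 8
# Rank by importance per coin, a crude proxy for where ease and importance overlap.
ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)

chosen, remaining = [], budget
for name, cost, importance in ranked:
    if cost <= remaining:
        chosen.append(name)
        remaining -= cost

print(chosen, f"({budget - remaining}/{budget} coins spent)")
```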

Making the right investments and allocating the proper resources to defense is only half the battle. When attacked, organizations also need resilience, namely the “ability to adapt to changing conditions and prepare for, withstand, and rapidly recover from disruption.”11 In this game, as in real life, no defense was perfect: financial data leaked; ransomware jumped from contractor to vendor; and even positioning and navigation systems were compromised. Adapting and responding to unfortunate incidents is difficult, but necessary for minimizing disruptions to the most important MTS administrative and operational functions.

There is little doubt that bolstering the resilience of maritime cybersecurity will remain a challenge. Best practices and high standards can help, such as the US Coast Guard’s Navigation and Vessel Inspection Circular 01-20 and the International Maritime Organization’s guidance on cybersecurity.12 Since so many different operators and information systems intersect at port facilities, shared best practices within and across sectors go a long way toward forming strong links among the diverse entities involved. By providing a platform for practical learning, wargames can help individuals and organizations synthesize risk, identify priorities, build resilience, and highlight the significant—but often unappreciated—role that these various relationships can play in cybersecurity.

Key Takeaway

  • The range of cyber-physical vulnerabilities in the MTS means that prioritization and resilience are core challenges when allocating scarce resources.

Competition and coordination

Competition and coordination were recurring themes in this wargame, with significant policy implications. Attackers not only competed against defenders, but also against each other. Competition over scarce resources, access points, and cyber exploits fueled tension among the APTs. In addition, some attacking teams were hurt by the actions, misfortunes, or errors of other team members. Attackers were both beneficiaries and potential victims of the difficulties of attribution in cyberspace, as some enterprising attackers tried to disguise their tracks by imitating others in false flag operations.

Instances of attacking teams directly targeting one another—as opposed to defenders—broke the binary concept of purely offensive and defensive roles in the game. These dynamics mirrored real life, helping dispel the notion that offensive and defensive moves in cyberspace inevitably aggregate to the attacker’s advantage. Different attackers have different motives. While a criminal enterprise may hack a port to steal cargo information to sell for financial gain, a state or hybrid actor may attempt to cripple port automation for political reasons. These different, and sometimes competing, objectives limit attackers’ incentives to cooperate with each other, let alone coordinate their actions. Leaked chat records from the Conti ransomware group highlight this discord inside real attacking teams, with interpersonal squabbles compounding conflicts between different APTs.13

Defenders suffered from conflicts of interest as well. The private firms that own and operate port facilities may not have the same incentives as government agencies to share information, especially if doing so invites scrutiny by regulators or law enforcement. These defenders also compete with each other for scarce cybersecurity talent and other resources.

While competition and conflict were evident among both defenders and attackers, Hacking Boundary indicates that defenders enjoy some advantages when it comes to institutionalizing cooperation, including a higher baseline level of trust. Honor among thieves may be harder to come by than even begrudging coordination between industry and government. Although defenders in the government, WLO, and terminal IT security teams had different incentives and threat perceptions, many still found ways to share information and coordinate action. On balance, this coordination gave defenders an edge in the game. Successful defenders established lines of communication sooner rather than later.

Real-world coordination between maritime owners, operators, and government agencies is easier said than done. Nevertheless, the potential payoff is considerable and physical proximity may help. Anecdotal evidence from our wargame suggests that players in the roles of port operators and government representatives conversed more when seated together. Perhaps it is no coincidence that communication between similar organizations in the real world correlates to a significant federal presence—Coast Guard headquarters, Department of Homeland Security regional centers, Federal Bureau of Investigation field offices, and the like—close to port facilities. Cybersecurity is social as well as technical, and face-to-face interaction can make a difference. However these relationships develop, the policies that build them before the next major cyber incident could prove to be invaluable.

Key Takeaway

  • Real-world coordination, whether among attackers or defenders, is a key dynamic in any cyber operation, and is easier said than done.

Conclusion

Cyber wargaming has demonstrated the potential to demystify and clarify threats and opportunities involving critical maritime infrastructure. The game Hacking Boundary engages players with a challenging but realistic scenario that reflects some of the serious risks facing the companies, crews, and government authorities operating port facilities around the country and around the world. The large attack surfaces, the importance of prioritization, and the implications of competition and coordination reinforce many well-established cybersecurity ideas. The relationship of these lessons to the maritime domain warrants further exploration.

The intersection between the maritime and cyber environments will likely grow in the years ahead. How these relationships and dependencies are conceptualized will likely determine our success or failure in protecting the MTS. The same goes for improving systemic resilience, including transportation by road, rail, and air – all of which increasingly rely on automation and networked information technology. Further iterations of this wargame and similar exercises stand to help by encouraging practitioners, academics, corporate executives, and government officials to think through potential threats and responses in order to secure these kinds of critical infrastructure.

About the authors

Daniel Grobarcik is a research associate with the Cyber & Innovation Policy Institute at the U.S. Naval War College.

William Loomis is an associate director at the Atlantic Council’s Cyber Statecraft Initiative, within the Digital Forensic Research Lab.

Dr. Michael Poznansky is an associate professor with the Cyber & Innovation Policy Institute at the U.S. Naval War College.

Dr. Frank Smith is a professor and director of the Cyber & Innovation Policy Institute at the U.S. Naval War College.

The ideas expressed here do not represent the US Naval War College, US Navy, Department of Defense, or US Government.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1     William Loomis et al., Raising the Colors: Signaling for Cooperation on Maritime Cybersecurity, October 4, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/raising-the-colors-signaling-for-cooperation-on-maritime-cybersecurity/.
2     US Library of Congress, Congressional Research Service, Supply Chain Bottlenecks at US Ports, by John Frittelli and Liana Wong, IN11800 (2021), https://crsreports.congress.gov/product/pdf/IN/IN11800; Marc Jones, “Snarled-Up Ports Point to Worsening Global Supply Chain Woes – Report,” Reuters, May 3, 2022, https://www.reuters.com/business/snarled-up-ports-point-worsening-global-supply-chain-woes-report-2022-05-03/; Vivian Yee and James Glanz, “How One of the World’s Biggest Ships Jammed the Suez Canal,” New York Times, July 17, 2021, https://www.nytimes.com/2021/07/17/world/middleeast/suez-canal-stuck-ship-ever-given.html.
3    US Department of Transportation, Maritime Administration, “Maritime Transportation System (MTS),” last updated January 8, 2021, https://www.maritime.dot.gov/outreach/maritime-transportation-system-mts/maritime-transportation-system-mts.
4    William Loomis et al., Introduction: Cooperation on Maritime Cybersecurity, October 27, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/cooperation-on-maritime-cybersecurity-introduction/.
5    Jason Ileto, “Cyber at Sea: Protecting Strategic Sealift in the Age of Strategic Competition,” Modern War Institute, May 10, 2022, https://mwi.usma.edu/cyber-at-sea-protecting-strategic-sealift-in-the-age-of-strategic-competition/; See also https://www.maritime.dot.gov/national-security/strategic-sealift/strategic-sealift/.
6    Andy Greenberg, “The Untold Story of NotPetya, the Most Devastating Cyberattack in History,” Wired, August 22, 2018, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.
7    Nina Kollars, Sam J. Tangredi, and Chris C. Demchak, “The Cyber Maritime Environment: A Shared Critical Infrastructure and Trump’s Maritime Cyber Security Plan,” War on the Rocks, February 4, 2021, https://warontherocks.com/2021/02/the-cyber-maritime-environment-a-shared-critical-infrastructure-and-trumps-maritime-cyber-security-plan/.
8     Olafimihan Oshin, “Major US Port Target of Attempted Cyber Attack,” The Hill, September 24, 2021, https://thehill.com/homenews/state-watch/573749-major-us-port-target-of-attempted-cyber-attack/.
9    The game was developed and run by Ed McGrady at the Center for a New American Security.
10    Lockheed Martin, “Cyber Kill Chain,” June 29, 2022, https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html.
11    US Department of Homeland Security, Management Directorate, OCRSO, Sustainability and Environmental Programs, Providing a roadmap for the Department in Operational Resilience and Readiness, July 2018, https://www.dhs.gov/sites/default/files/publications/dhs_resilience_framework_july_2018_508.pdf.
12    US Department of Homeland Security, United States Coast Guard, Navigation and Vessel Inspection Circular No. 01-20 (Washington, DC, 2020), Commandant Publication P16700.4, https://www.dco.uscg.mil/Portals/9/DCO%20Documents/5p/5ps/NVIC/2020/NVIC_01-20_CyberRisk_dtd_2020-02-26.pdf?ver=2020-03-19-071814-023; International Maritime Organization, “Guidelines on Maritime Cyber Risk Management,” MSC-FAL.1/Circ.3/Rev.1, June 14, 2021, https://wwwcdn.imo.org/localresources/en/OurWork/Security/Documents/MSC-FAL.1-Circ.3%20-%20Guidelines%20On%20Maritime%20Cyber%20Risk%20Management%20(Secretariat).pdf.
13    Gareth Corfield, “60,000 Conti Ransomware Gang Messages Leaked,” The Register, February 28, 2022, https://www.theregister.com/2022/02/28/conti_ransomware_gang_chats_leaked/; Maria Henriquez, “Inside Conti Ransomware Group’s Leaked Chat Logs,” Security Magazine, April 6, 2022, https://www.securitymagazine.com/articles/97379-inside-conti-ransomware-groups-leaked-chat-logs.

The post Wargaming to find a safe port in a cyber storm appeared first on Atlantic Council.

]]>
360/StratCom: How policymakers can set a democratic tech agenda for the interconnected world https://www.atlanticcouncil.org/content-series/360stratcom/360-stratcom-how-policymakers-can-set-a-democratic-tech-agenda-for-the-interconnected-world/ Thu, 08 Dec 2022 19:48:31 +0000 https://www.atlanticcouncil.org/?p=593715 The DFRLab assembled policymakers and civil-society leaders together to drive forward a democratic tech agenda that is rights-respecting and inclusive.  

The post 360/StratCom: How policymakers can set a democratic tech agenda for the interconnected world appeared first on Atlantic Council.

]]>
On December 7, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) hosted 360/StratCom, its annual government-to-government forum, bringing policymakers and civil-society leaders together to drive forward a democratic tech agenda for the increasingly interconnected world—and ensure that it is rights-respecting and inclusive.

The day kicked off with a panel on anti-lockdown protests and dissent in China moderated by Kenton Thibaut, DFRLab’s resident fellow for China. Following a deadly fire at a residential building in Xinjiang, protests erupted in cities across China, including on almost eighty university campuses. While the protests have been fueled by frustration with China’s strict zero-COVID policy, Xiao Qiang, a research scientist at the University of California, Berkeley, noted the protests have also grown to object to censorship and Xi Jinping’s leadership. The protests mark the failure of Xi’s “prevention and control” security approach, added Sheena Greitens, associate professor at the University of Texas. “It was really interesting, and I imagine troubling, from the standpoint of China’s leaders, to see that model fail initially at multiple places, multiple cities in China when these protests broke out,” she said. While the panelists agreed that China has publicly used a lighter touch in dealing with the protest organizers than it has historically, they expressed concern that this is because surveillance technology provides authorities the ability to identify and target protesters behind closed doors. Maya Wang, associate director of the Asia division at Human Rights Watch, said an important takeaway from the protests is that many people in China seek democracy. 

Next up was a discussion about the Freedom Online Coalition (FOC), a global alliance in pursuit of a democratic tech agenda that ensures a free, open, secure, and interoperable internet for all. With Canada serving as the current chair of the FOC, the session began with remarks from Canadian Deputy Foreign Minister David Morrison. He noted that what unites the FOC is the belief that one of the most pressing challenges is finding a way to benefit from digital technology in a way that protects human rights and upholds democratic values. Morrison noted four essential components of digital inclusion: connectivity, digital literacy, civic participation, and online safety.   

With the United States preparing to serve as the incoming FOC chair, Anne Neuberger, deputy national security adviser for cyber and emerging technologies at the White House, also gave her thoughts on the democratic tech agenda. Neuberger noted that, while the internet has transformed the world, it has also led to a series of troubling developments. “The internet remains a critical tool for those on the front lines of the struggle for human rights, activists; and everyday people from Tehran to Shanghai to Saint Petersburg depend on access to an unblocked, unfiltered internet to communicate and gain information otherwise denied to them by their government.” As FOC chair, the United States will have three main priorities, Neuberger outlined: bolstering existing efforts where the FOC adds unique value, such as condemning governments that misuse technology; strengthening coordination between FOC policies and the foreign assistance that participating states are providing to ensure that national-level technology frameworks around the globe are in alignment with human rights; and strengthening the FOC’s operating mechanics to ensure the organization can have a greater impact in the years to come. 

Another vital goal for the FOC is to recognize and articulate the connection between pluralistic, open societies and a secure, open internet, said Katherine Maher, nonresident senior fellow at the DFRLab and former chief executive officer of the Wikimedia Foundation. In a panel focusing on how the FOC can live up to its promise, Maher noted that an open internet is a means to an end, as it helps people protect human rights. Moderator Jochai Ben-Avie, chief executive of Connect Humanity and a DFRLab nonresident fellow, echoed this sentiment. “Never before has the call been louder for democratic countries to take coordinated action in defense of a free and open and secure and interoperable internet,” he noted.

Later in the day, the discussion shifted to the European Union’s (EU) approach to tech governance in a session moderated by Rose Jackson, director of the DFRLab’s Democracy and Tech Initiative. Gerard de Graaf, the European Union’s first ambassador to Silicon Valley, remarked on recent tech industry layoffs, saying that he had been reassured by some tech companies that the cuts would not affect compliance with European regulations. “In the industry, there is an awareness that it’s probably not so wise to start cutting into the areas where, frankly, you probably now need to step up rather than reduce your resources,” he said.  

Meanwhile, Prabhat Agarwal, one of the lead drafters of EU tech legislation and head of unit at the EU’s DG CONNECT Digital Services and Platforms, said that he is working on designing transparency provisions. He noted three key areas that these provisions will cover: user-facing transparency to ensure tech platforms’ terms and conditions are so clear “that even children can understand”; expert transparency that would allow civil society, journalists, and academics the ability to access data intrinsic to their research; and regulator transparency that would enable governments to inspect what happens “under the hood” of the platforms.  

To close out this year’s 360/StratCom programming, Safa Shahwan Edwards, deputy director of the DFRLab’s Cyber Statecraft Initiative, led a conversation with Camille Stewart Gloster, US deputy national cyber director for technology and ecosystem. The discussion centered on how to define and grow a competitive tech workforce. Stewart Gloster noted that technology underpins each person’s life, and it is imperative to raise the collective level of understanding of the tradeoffs people around the world make daily, from privacy to security.  


Layla Mashkoor is an associate editor at the DFRLab. 

The post 360/StratCom: How policymakers can set a democratic tech agenda for the interconnected world appeared first on Atlantic Council.

]]>
The call for coordinated action for a free, open, and interoperable internet https://www.atlanticcouncil.org/content-series/360stratcom/the-call-for-coordinated-action-for-a-free-open-and-inoperable-internet/ Thu, 08 Dec 2022 14:33:27 +0000 https://www.atlanticcouncil.org/?p=593513 The DFRLab, as part of its annual 360/StratCom event, convened a discussion about the FOC, including the need to coordinate action to protect a free, open, secure, and interoperable internet.

The post The call for coordinated action for a free, open, and interoperable internet appeared first on Atlantic Council.

]]>

The Freedom Online Coalition (FOC), founded a decade ago, is one of a number of coalitions, alliances, and forums that exist to advance human rights online. As part of its annual 360/StratCom event, the Atlantic Council’s Digital Forensic Research Lab convened a discussion about the FOC, including the need to coordinate action to protect a free, open, secure, and interoperable internet—and how the FOC should establish itself as a useful vehicle for coordinating digital policy. The panelists also discussed what steps the United States should take as it assumes the FOC leadership position from Canada for the years 2023 and 2024.

David Morrison, Canadian deputy foreign minister of global affairs, introduced the conversation. Morrison reflected on the work Canada accomplished in 2022 as chair of the FOC, as well as the challenges that remain as the United States assumes the chair in 2023.

This year, the FOC saw crises that required clear pushbacks against repression online, including Russian disinformation campaigns in Ukraine and the Iranian government’s censorship of the internet, both of which proved the value of the FOC. Morrison highlighted how the FOC can play a lead role in speaking out against such infringements of human rights online, in part because the FOC is a collective powered by civil society and industry.  

Morrison then passed the microphone to Anne Neuberger—deputy national security advisor, cyber and emerging technologies—who spoke about US priorities as incoming FOC chair.  

Neuberger highlighted how the United States is happy to build upon Canada’s previous work as chair and revisited the role the United States played in the past, particularly in the organization’s founding. With the support of US President Joe Biden and a strong foundation set by Canada’s leadership in 2022, Neuberger said she is optimistic that the United States can expand the FOC’s role to improve strategic planning, counter the rise of digital misinformation, and promote safe spaces for marginalized groups such as women, LGBTQ communities, and the disability community. In addition, the United States remains committed to speaking out against Russian and Iranian oppression.  

Both Morrison and Neuberger celebrated the expansion of the FOC with the addition of Chile. With membership now at thirty-five countries, Morrison noted how the FOC represents a coalition of countries that believe in responding collectively to digital threats against democracy. 

To follow up on the opening remarks provided by the Canadian and US government representatives, DFRLab nonresident fellow Jochai Ben-Avie moderated a panel featuring Tatiana Tropina, assistant professor in cybersecurity governance at Leiden University; Katherine Maher, nonresident senior fellow at DFRLab; and Jason Pielemeier, executive director of the Global Network Initiative, to provide insight into the roles civil society and industry play in the FOC, as well as into how to improve the coalition’s efficacy. The panelists discussed how the FOC should play a greater role in coordinating countries that believe in using democratic norms to advance human rights, serving as a vehicle for this work because it has the expertise, global reach, and coalition of like-minded countries with the potential to work together.

Looking at the potential of the FOC, the panelists noted the difference in geopolitical contexts between when the organization was founded and today, and that the FOC’s utility is particularly salient because of democratic backsliding in many parts of the world. The panel asserted that, while the optimism that the internet would be a democratizing force has fallen away due to the use of its technology to repress citizens, this should spur even greater motivation to engage within and beyond the FOC.  

Panelists then discussed another issue facing the FOC: increasing internal coordination. On one hand, they mentioned, the power of the FOC comes from its reach with the countries comprising its membership. On the other hand, there is a disconnect between the norms that the FOC stands for and the difficulties of actualizing these norms. As Tropina noted, the most pressing issue keeping the FOC from being more effective is not membership inclusion but clarifying the FOC’s role, adding that countries cannot play a leading role without doing the work themselves. The FOC should “go back to basics and extend its membership based on some really identified values and principles,” she concluded.

The panel concluded by acknowledging that, while it feels as if technology constantly outpaces the institutions created in the past, there are core identifying democratic values that stay constant, and that should drive the FOC’s future action.  


Erika Hsu is a young global professional with the Digital Forensic Research Lab.   

The post The call for coordinated action for a free, open, and interoperable internet appeared first on Atlantic Council.

]]>
The White House’s new deputy cyber director: Tech’s challenges are society’s challenges https://www.atlanticcouncil.org/content-series/360stratcom/the-white-houses-new-deputy-cyber-director-techs-challenges-are-societys-challenges/ Wed, 07 Dec 2022 23:49:25 +0000 https://www.atlanticcouncil.org/?p=593417 Camille Stewart Gloster, the inaugural deputy national cyber director for technology and ecosystem security, spoke at the DFRLab's 360/StratCom about her newly created office's ambitious agenda to address a wide scope of cyber challenges.

The post The White House’s new deputy cyber director: Tech’s challenges are society’s challenges appeared first on Atlantic Council.

]]>

The White House is engaging in a “whole-of-society strategy” with its newly created Office of the National Cyber Director (ONCD), which set an ambitious agenda to diagnose and address the implications of everything from regional cybersecurity and quantum computing to Web3 blockchain technologies and sustainably expanding the tech workforce.

That was the message from Camille Stewart Gloster, the inaugural deputy national cyber director for technology and ecosystem security, who has been charged with crafting the scope of the office empowered to bridge a range of tech equities and help better define and grow a competitive tech workforce.

ONCD “is focused on moving us towards an affirmative vision of a thriving digital ecosystem that is secure, equitable, and resilient that we all can share in,” Gloster said at 360/StratCom, the annual government-to-government forum hosted by the Atlantic Council’s Digital Forensic Research Lab (DFRLab).

This year, 360/StratCom focused on the work of civil society to ensure that universal human rights in the physical world are also protected in the virtual realm. Here are just a few of Gloster’s insights into the Biden administration’s approach, in a conversation with Safa Shahwan Edwards, deputy director of the DFRLab’s Cyber Statecraft Initiative.

Security and education add up to resilience

  • Gloster noted that the cyber challenge is “a shared problem across the public sector and the private sector.” As such, her office is working to produce a strategy that focuses on both cyber workforce education and digital safety awareness while striving to fill nearly seven hundred thousand open cybersecurity roles. “We don’t want to engage the same players exclusively,” Gloster said. “Yes, the big players must be a part of the conversation, but we want to make sure [to also include] civil society, academia, the small players, the innovators.”
  • That multifaceted approach led to the administration’s announcement in August of a string of partnerships with top tech companies (Google, Apple, IBM, Microsoft), coding credentialing programs (Code.org, Girls Who Code), cyber insurers (Coalition, Resilience), and more than 150 electric utility providers (through its expansion of the Industrial Control Systems Cybersecurity Initiative). 
  • Gloster also emphasized that educational institutions will be critical to enhancing cybersecurity at the local level. The University of Texas system recently announced that it would expand its existing short-term credentials in cyber, as well as create new ones, leveraging its UT San Antonio Cybersecurity Manufacturing Innovation Institute.
  • Later this week, Gloster will speak at Whatcom Community College, a remote college between Vancouver and Seattle, which was chosen in August to be the site for a National Science Foundation cybersecurity center providing education to “fast-track students from college to career.” At the heart of the conversation, Gloster said, is one core question: “How does a community college sit at the intersection of regional cyber awareness, regional technology awareness, and how does that catalyze and support the work that’s going on on a local or regional level?”

Expanding cyber policy into the unknown

  • The ONCD isn’t limited to tackling workforce education and training; it is also well-positioned to address everything from “the emerging tech supply chain, the intersection of human rights and technology, privacy—all of the future-looking pieces of the technology landscape,” Gloster said.
  • Cyber administrators will need to grapple with critical questions around quantum computing, which holds incredible promise for creating more effective vaccines and predicting threat models, as well as significant risk. “There’s a lot of good work that can come out of that, so how do we both prepare for the threats and the opportunities?”
  • The increasing use of Web3 technologies built on blockchains will continue to present new security challenges (beyond financial instability threats, such as recent consumer losses caused by the sudden collapse of the crypto exchange FTX). A number of Web3 companies are built with a “collective contribution model that is open source,” Gloster said, which could leave them more vulnerable to cyberattacks.

The role of all sorts of governments

  • The ONCD will need to work with the State Department and other interagency partners to coordinate around the national-security, economic, and human-rights implications of these new technologies. Groups like the thirty-four-nation Freedom Online Coalition provide an important avenue “to collaborate with our partners to really think about what democracy looks like now and in the future, and how technology underpins that,” Gloster said.
  • Governments around the world will need to be proactive in addressing pending security concerns around new technologies. Both the White House and international organizations like the European Commission have signaled that more regulation is coming in 2023 to address cryptocurrencies and the metaverse, for instance.
  • While people often get lost in the conversation about tech, Gloster said preventing cyber threats in the future will require a significant understanding of human nature—and a workforce equipped with not just tech skills but also expertise in social and cultural contexts. “People create, promulgate, use, and are the malicious actors that weaponize or leverage technology. That means that we have to understand them as people.”

Nick Fouriezos is a writer with more than a decade of journalism experience around the globe.

The post The White House’s new deputy cyber director: Tech’s challenges are society’s challenges appeared first on Atlantic Council.

Evanina testifies to Senate Select Committee on Intelligence https://www.atlanticcouncil.org/insight-impact/in-the-news/evanina-testifies-for-the-senate-committee-on-intelligence/ Fri, 02 Dec 2022 15:11:02 +0000 https://www.atlanticcouncil.org/?p=580656 William Evanina testifies on the growing cyber threat posed to US business and academic institutions.

The post Evanina testifies to Senate Select Committee on Intelligence appeared first on Atlantic Council.


On September 21, the Scowcroft Center for Strategy and Security’s Nonresident Senior Fellow William Evanina testified before the Senate Select Committee on Intelligence. In his testimony, Evanina discussed the growing cyber threat posed to US business and academic institutions.

America faces an unprecedented sophistication and persistence of threats by nation state actors, cyber criminals, hacktivists and terrorist organizations. Corporate America and academia have become the new counterintelligence battlespace for our nation state adversaries, especially the Communist Party of China.

William Evanina

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The post Evanina testifies to Senate Select Committee on Intelligence appeared first on Atlantic Council.

The cases for using the SBOMs we build https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/the-cases-for-using-sboms/ Tue, 22 Nov 2022 18:45:12 +0000 https://www.atlanticcouncil.org/?p=587890 Software bills of materials (SBOMs) provide key data suit for many uses. Industry and government can continue to sharpen their demand signals, shape implementation, and continue driving development and adoption.

The post The cases for using the SBOMs we build appeared first on Atlantic Council.


In the beginning, developers created package manifests and header files. Code was formless and required documentation. Tabs and spaces hovered on the surfaces of the editors, and the spirit of Dennis Ritchie hovered over the code.

And then a developer typed, “git commit” and behold, there was a commit, and the developer saw that the commit was good, so they separated BRANCH from MAIN. They called the BRANCH a version and MAIN the source, and there were pulls and pushes and the first release. And yet lo, users often had little idea what was in any of it. This went on to cause many problems, but that did not make it a bad idea.

Introduction: SBOMS, public policy, and you

Anyone in tech, cyber policy, or security circles has probably heard about software bills of materials (SBOMs) by now and considered how they or their organization might use SBOM data. Many recent efforts strive to answer this question—one good example is Microsoft’s Open-Source Software Secure Supply Chain framework.1 Asking about SBOM use is nonetheless a reasonable act of self-examination given their relatively recent appearance on the policy scene, mostly in the wake of major software supply chain incidents.2

SBOMs themselves are not new. One widely accepted SBOM format, the Software Package Data Exchange (SPDX), dates back to 2011.3 Notably, that original SBOM concept has its roots in the complex physical manufacturing processes of industries like the automotive sector, where it was used to understand intricate supply chains, as well as in legal practices for recording the inheritance of licenses through a business.4 Meanwhile, those who have compiled software from source code are likely familiar with build manifests that indicate all the packages, libraries, and other bits of code needed to properly construct a final piece of software. The bigger a project, from a simple application to an entire operating system, the longer and more complex that manifest becomes. An SBOM is similar—a snapshot in time of each component making up a piece of software, with additional metadata tracking provenance (information about component authors and affiliations) and versioning.5
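
To make that abstract description concrete, here is a minimal, illustrative sketch of what such a snapshot can look like, expressed as a Python dictionary with field names drawn from the SPDX 2.x JSON format mentioned above; the component, version, supplier, and checksum values are hypothetical placeholders, not taken from any real SBOM.

    # Illustrative sketch only: SPDX 2.x-style field names with hypothetical values.
    example_sbom = {
        "spdxVersion": "SPDX-2.3",
        "name": "example-application",
        "packages": [
            {
                "name": "openssl",
                "versionInfo": "1.0.1f",
                "supplier": "Organization: OpenSSL Project",
                # A recorded hash lets consumers later verify what was actually shipped.
                "checksums": [
                    {"algorithm": "SHA256", "checksumValue": "<sha256-hex-digest>"}
                ],
            }
        ],
        "relationships": [
            # Dependency and provenance metadata: which component relies on which.
            {
                "spdxElementId": "SPDXRef-Application",
                "relatedSpdxElement": "SPDXRef-Package-openssl",
                "relationshipType": "DEPENDS_ON",
            }
        ],
    }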

While SBOMs are intuitively useful and have received some notable policy attention of late—from the National Telecommunications and Information Administration’s (NTIA) minimum-viable elements project to mentions in executive orders (EOs) and Office of Management and Budget (OMB) memoranda—they are just one tool (more precisely, one class of data) in the wider arsenal for managing risk in software systems.6 7 Although conversation about SBOMs has largely (and understandably) focused on their generation, requirements, and format, their growing maturity demands wider consideration of next steps: developing clear use cases for SBOMs. An absence of mature, well-understood use cases for SBOMs threatens their future as an effective risk management tool.

Though SBOMs and their widespread adoption face other, arguably more dire, challenges—for example, the risks of mistimed regulation and disconnects between SBOM designers and consumers—policymakers and the security community can directly address use cases now. Letting the challenges of SBOM generation drown out demand signals from the user side of the pipeline risks inundating purchasers, developers, and acquisition officers alike with a torrent of useless spreadsheets and effete compliance certifications.

Indeed, these uses extend beyond just technology-consuming firms to include governments and other central risk assessment bodies. An absence of well-articulated SBOM use cases and illustrated relevance to communities of SBOM consumers holds twin challenges. First, it risks mission creep, where policymakers might begin to frame SBOMs as a silver bullet for all supply-chain woes without clear demarcation of the problems they are designed to address. Second, it undersells SBOMs to those who would consume them, leading to slower adoption, poor tooling, and the malformation of a potentially powerful data standard into yet more bloated security theater.

To address the opportunity for further usage conversations, this paper offers several grounded applications for SBOMs, focusing particularly on the benefits they offer their consumers, from chief information security officers (CISOs) to acquisition officers and from software consumers to the Cybersecurity and Infrastructure Security Agency (CISA). Incident response may be the most intuitive role for SBOMs—a way to determine impacted software when a widespread component is compromised or found vulnerable—but it is far from the only one. SBOMs can help development teams determine what packages they will be managing. They can feed software composition analysis (SCA), acting as an ingredient and source list. They can help compliance officers streamline licensing acquisition and manage the adoption of components produced by sanctioned or entity-listed companies. At the largest scale, they can map out portions of the software ecosystem, highlighting little-known relationships and concentrations of dependence, while shedding light on the benefits of using extant code and the risks of relying on external repositories. First though, this paper considers the state of contemporary SBOM policy conversations.

Still fighting yesterday’s battles

The year 2014 saw one of the first truly widespread, dire software supply-chain events: the OpenSSL “Heartbleed” vulnerability.8 Heartbleed put the many systems that relied on OpenSSL at significant risk, allowing malicious actors to extract sensitive information due to a relatively simple software flaw. The incident catalyzed a small surge in private-sector funding to open-source projects to support security efforts and raised questions about ways to effectively track the use of critical, community-developed software in systems spread around the world, as well as ways to coordinate responses to flaws found in such code. The US government immediately asked all federal agencies, as part of alerting the public through the Department of Homeland Security (DHS), to emphasize where websites and other internet services used OpenSSL libraries.9 However, that was only the tip of the iceberg—in fact, OpenSSL also lived on many mobile devices, embedded hardware systems, and phone and conference-call systems,10 as well as much networking infrastructure.

Collecting data on the usage of OpenSSL protocols among websites to understand Heartbleed exposure was a useful first step to an unwieldy triage process. Wider SBOM adoption at the time would have aided long-tail remediation of false negatives and subtle implementations. Further, had a CISA-style entity been able to ingest and use SBOM information on OpenSSL, the true sprawl of the library would have been more immediately apparent and accessible—perhaps even before the vulnerability was found, leading to a better, more targeted response and, crucially, enabling proactive investment and security before the incident.

Discussions of SBOMs and their development have the opportunity now to match the technical solutions enabled by SBOM data to the policy challenges around transparency, processes, and due diligence they can address, and use case refinement will drive that matching. SBOMs offer a mechanical view into the minutiae of documentation for software, summarizing all the pieces of code that make up modern applications and services. If the end goal is for the digital ecosystem to widely adopt SBOMs—both their production and practical use by recipients—much of the necessary intermediary work in ingesting and interpreting SBOM data remains unfinished. This is understandable: in the early SBOM days, deliberate decisions to limit scope—in the NTIA Minimum Elements for an SBOM process, for instance—helped reduce a sprawling problem set to a tractable project.11 Now that SBOMs are moving toward the mainstream, beginning to address broader use scenarios will help drive their adoption and maturity, and industry, in particular, can play a key role in pushing an aggressive development cycle with clearly defined uses for SBOMs, each contributing to different facets of cybersecurity.

The potential role for SBOMs in the long-term remediation of Heartbleed-style events, providing a snapshot of the composition of software packages, is clear. However, this security model requires that, upon build and deployment, developers and consumers update and transparently publish SBOMs for consumption. SBOMs only work well if they are common, standardized, and quickly updated—all a considerable way off from the current situation,12 as a February 2022 Linux Foundation (LF) study found that less than half of surveyed organizations were “using” SBOMs.13 This survey likely represents an optimistic upper-bound as well—64 percent of respondents were LF member companies, likely skewing toward SBOM maturity; only 74 percent of the organizations classed as using SBOMs were producing and consuming them; and even partial or marginal organizational use would have counted for the survey (and is helpfully broken down within the analysis, which acknowledges and strives to address potential sample bias well).

This is not a critique of adoption speed and progress to date, but rather an acknowledgment that the next steps for SBOMs will require a gear shift that well-articulated use cases and a clear policy demand signal can help accomplish. The same survey queried about adoption plans, forecasting a promising 66 percent increase in the rate of SBOM production and consumption amongst respondents. Anecdotally, some industries at the forefront of SBOM development are already innovating these use cases. For instance, the healthcare sector—which acted as one of the testbeds for NTIA’s SBOM proof-of-concept studies14—uses SBOM processes to highlight relationships with suppliers and OSS communities that merit increased support, as well as to produce human-readable risk analysis information.15

The most common, general communications to policymakers about SBOMs are that they are ingredient lists most useful for assessing the scale and supporting the recall of tainted, defective components. This describes the minimum viable SBOM: a list of component software, only referred to upon the discovery of a defective part—and in the case of NTIA’s minimum viable SBOM, only one layer of dependencies is tracked.16

This paper is not a call to reinvent SBOM standards. Like so much of government cybersecurity policy, the extreme visibility of SBOMs is a reaction to crisis. Rather, it argues that use cases can and should shape the production and adoption of SBOM and the tools accompanying them. As mentioned earlier, some of this work is underway,17 but policy conversations can continue focusing on what SBOM data can enable and what tooling and production/adoption incentives will best drive development there at a sufficient pace. Policies can also help match the different methods of SBOM production to the most applicable usage. Use cases strengthen the SBOM value proposition with both code maintainers and consumers, as well as help overcome obdurate resistance from technology vendors with little desire to have their behavior “shaped.”18

Use cases

The policy challenge behind SBOMs is the question of adoption—compelling use-cases can motivate that while sensibly shaping the circumstances and specifics of regulation. Below are four foundational use cases for SBOMs, each with their respective audiences, outcomes, and positions in the product and incident lifecycles. This is by no means an exclusive list, but it represents diverse and important usage. Each asks for different levels of SBOM completeness, from a minimum-viable components list to a thorough accounting of support, funding, versioning, and deployment context that no current SBOM standard mandates.

  1. Procurement—for reducing compliance burdens and preventing duplicative purchases.
  2. Vulnerability Management and Threat Intelligence—for tracking compromised components and remediation planning.
  3. Incident Response—for validating liability claims and guiding patch efforts.19
  4. Ecosystem Mapping—for providing a bird’s-eye view of dependencies in an enterprise’s ecosystem and beyond.

Discussing the role for SBOMs in these cases, and the larger impacts of their use, offers clarity for government on how to incentivize and structure SBOM adoption and, for industry, on where to focus tooling development.

1. Guiding software procurement and adoption decisions

SBOMs can prove useful during the procurement process for any third-party software, beyond obvious security functions. Large organizations often make individual purchases rather than coordinating licensing centrally, so creating an inventory and consolidating duplicate purchases or capabilities for cost savings can make a chief financial officer’s (CFO) day. Licensing checks can also surface instances where entities adopt open-source software but cannot legally incorporate it into other products. These are quick wins because the acceptance or rejection of that software can be binary: if licensing prevents use or an existing contract covers a need, the procurement goes no further. Software asset registers and intellectual property scanners already strive to serve these functions, but, given the overlap in their data and that of SBOMs, there is room for tooling to support quick decision-making for all, as well as for the different data sources to support rather than supplant each other.

Binary decisions can form part of a standard procurement process in pre-negotiations with suppliers, but decisions involving judgment calls (such as the relative criticality of a known bug) tend to slow workflows and create angry calls from executives who want to know the reasons behind a derailed purchase. Again, deciding ahead of time which data points in an SBOM are deal-breakers will streamline the process of software procurement. Simple, written policies such as “never adopt or acquire software with components from X supplier”—which can refer to competitors, companies operating in sanctioned nations, entity-listed organizations, known risky projects, or anything else unambiguously identified—can work well here, especially with automation.
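
As a rough illustration of that kind of automation, the sketch below reads an SPDX-style JSON SBOM and flags any component whose listed supplier appears on a written denylist. The file name, supplier string, and policy are hypothetical, and production tooling would also need to handle other SBOM formats, supplier aliases, and missing fields.

    import json

    # Hypothetical written policy: "never adopt or acquire software with components
    # from X supplier." The entry below is an invented example, not a real vendor.
    DENYLISTED_SUPPLIERS = {"Organization: Example Sanctioned Vendor Ltd."}

    def procurement_gate(sbom_path):
        """Return the names of components whose listed supplier violates policy."""
        with open(sbom_path) as handle:
            sbom = json.load(handle)
        violations = []
        for package in sbom.get("packages", []):
            supplier = package.get("supplier", "NOASSERTION")
            if supplier in DENYLISTED_SUPPLIERS:
                violations.append(package.get("name", "<unnamed component>"))
        return violations

    # A binary accept/reject decision: any hit halts the purchase for human review.
    if procurement_gate("vendor_sbom.spdx.json"):
        print("Reject: SBOM lists a component from a denylisted supplier.")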

When it comes to adopting and integrating open-source software, many of these policies should already exist at most firms, but SBOM use with a standardized format can streamline validation. One must check on the status of a project referenced in an SBOM: how healthy, deep, and thorough its community support is, how much investment it enjoys, or, if tied to a proprietary offering, how dedicated the parent company is to supporting it—none of which is included in the SBOM per se, but all retrievable from tools like OpenSSF Scorecard, SLSA levels, and more once dependencies are identified. Many CISOs already struggle with the need to collect supply-chain data at a granular level for risk management. While insufficient for high-security organizations, SBOMs are workable substitutes for medium or small enterprises that lack the in-house expertise to analyze all their software in depth, and their unaltered data can serve to inform and define procurement standards and policies alongside risk-management posture.

2. Adding smarts to vulnerability management and threat intelligence

One of the main use cases for SBOMs is identifying components affected by vulnerabilities. SBOMs provide visibility into software a level or two deeper than is common today, particularly regarding provenance. They allow for better triage, cross-referencing of dependencies, and remediation planning for identified vulnerabilities. SBOMs provide the roadmap through software relationships that enables this degree of dedicated care.

One constructive application of SBOMs in this context is improving the usefulness of vulnerability risk ratings to impacted organizations. One organization’s “critical” is not necessarily so for a different environment, use case, or business model. Application security professionals already know this, but wide adoption of SBOMs may change how they design a remediation strategy by clarifying what entity is ultimately responsible for fixing a vulnerability and how those outside an organization’s control might handle that request. Some dependencies may have quite capable maintainers that can be relied on while others might require significant external support. SBOM data highlighting dependencies can help teams identify what external parties they rely on for code support and adjust accordingly and ahead of incidents.

Developers may need to confirm whether a vulnerable component of a package is actually in use. If not, the organization can declare the risk “low” and simply note that policy will change if it incorporates that component in the future. There is a possible resource squeeze in the future for enterprises that need more application development and security staff to investigate the origins of disclosed vulnerabilities, determine remediation responsibilities, pass on notifications and updates to affected parties within the ecosystem, and sign off on version changes to internally generated SBOMs. SBOMs are part of enabling that level of decision-making, allowing better tracking of dependencies and changes to them to provide better insight into actual vulnerability exposure. Again, the data SBOMs provide are just part of the foundation on which to build these processes; the complementary tools and data, like GitBOM and the Vulnerability Exploitability eXchange (VEX), highlight the importance of sharpened demand signals from SBOM consumers.
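
One hedged sketch of that triage step appears below: it walks an SPDX-style SBOM and asks the public OSV.dev vulnerability database for advisories affecting each component. The choice of OSV, the default PyPI ecosystem, and the file name are assumptions made for illustration; whether a flagged advisory is actually exploitable in a given deployment is the separate determination that data like VEX is meant to inform.

    import json
    import urllib.request

    def known_advisories(name, version, ecosystem="PyPI"):
        """Query OSV.dev for published advisories affecting one SBOM component."""
        body = json.dumps({
            "version": version,
            "package": {"name": name, "ecosystem": ecosystem},
        }).encode()
        request = urllib.request.Request(
            "https://api.osv.dev/v1/query",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            result = json.load(response)
        return [advisory["id"] for advisory in result.get("vulns", [])]

    # Walk the component list and surface anything with a known advisory for triage.
    with open("application.spdx.json") as handle:
        sbom = json.load(handle)
    for package in sbom.get("packages", []):
        advisories = known_advisories(package["name"], package.get("versionInfo", ""))
        if advisories:
            print(package["name"], "->", ", ".join(advisories))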

One of the main questions to ask with any SBOM is whether its source and contents are trustworthy. One useful method involves scanning the binary of the software to validate the accuracy of the SBOM—essentially checking that what is under the hood matches the parts list. Binary scanners are imperfect, and if the same scanners help generate an SBOM in the first place,20 they may not produce reliable SBOMs for consumers using them in their own vulnerability scanning.21 SBOMs and scanning can help each other, mutually improving the accuracy of package component determination.

The overall risk rating of a software vulnerability informs the risk of a partial or phased remediation. Whether waiting for a third party to deliver a patch or allocating limited internal resources against dependencies that take longer to resolve and downstream requirements from partners, organizations will be able to monitor SBOM-sourced vulnerability data as part of their infrastructure risk-management practices (in conjunction with centralized data like VEX).22 This monitoring can also help threat intelligence analysts better understand organizational exposure. Better dependency knowledge from an SBOM can help clarify where dependencies might be under-resourced, frequently targeted by adversaries, or otherwise deserving of extra scrutiny and resourcing. The frequency of versioning changes can even provide insight into changes that support critical components. Even simply improving organizational visibility into the attack surface of its dependencies will help prioritize resourcing, direct remediation planning, and expand overall cybersecurity for an organization making full use of its SBOMs.

3. Incident response and building a better packing slip

While the above uses focus on using SBOMs for response planning prior to an incident, SBOMs also have utility right of “boom,” or after the fact. Initially, in many cases, SBOMs can act as verification for incident reports and recommendations—a pointer to where things went wrong in a compromise. As corroborating evidence, a verified SBOM from an environment, system, or other package can help in the review of an incident and determine the impact on parallel systems or previous system versions. The core value within incident response and forensics is accurately comparing versions, changes, and their respective release times. An SBOM may provide some simple insight—after all, if an organization cannot confirm or deny whether a system was affected, does it have to declare a breach anyway? Having an SBOM that raises unanswerable questions is a business risk to examine with leadership—one that otherwise might not have surfaced.

SBOMs can also aid in crisis communication among partners, affected organizations, and customers during and following an incident. Most product-security organizations already have a workflow to add SBOM information to, but they may require some additional information, such as a timeline matching SBOM versions to the systems under investigation. A challenge with this level of forensics is that organizations rarely have the right level of logging and sufficient log retention to be able to confirm authoritatively which versions of components were in use at the time of an incident. Suppliers may need to help customers determine whether an incident affected them, and sometimes that information may simply be unavailable.

One more use of SBOMs in incident response is to validate that assertions about the contents listed by an SBOM were reasonably accurate at the time of release and that no known, unaddressed vulnerabilities existed. Organizations can reference attestations later if events or evidence indicate something different. While SBOMs are often compared to the ingredients list on a food-product label for software, another analogy could consider them a packing slip, describing what a supplier claimed was in a box at the time of its sealing. If a checksum to verify the absence of tampering fails, an SBOM can help guide responders in tracking down the discrepancies between shipped and delivered software.
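
A minimal sketch of that packing-slip check, assuming an SPDX-style SBOM that records SHA-256 checksums for its packages, might look like the following; the file names and package name are hypothetical.

    import hashlib
    import json

    def matches_packing_slip(sbom_path, package_name, artifact_path):
        """Compare a delivered artifact's SHA-256 with the value recorded in the SBOM."""
        with open(sbom_path) as handle:
            sbom = json.load(handle)
        recorded = None
        for package in sbom.get("packages", []):
            if package.get("name") == package_name:
                for checksum in package.get("checksums", []):
                    if checksum.get("algorithm") == "SHA256":
                        recorded = checksum.get("checksumValue")
        if recorded is None:
            # No attestation to check against is itself a finding worth escalating.
            return False
        with open(artifact_path, "rb") as handle:
            observed = hashlib.sha256(handle.read()).hexdigest()
        return observed == recorded

    # A failed comparison points responders at the component whose shipped and
    # delivered contents diverge.
    print(matches_packing_slip("release.spdx.json", "openssl", "libssl.so.3"))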

4. A systemic view of software risk

In addition to serving uses between and within companies, SBOMs can also serve government agencies and other third parties in mapping dependency chains and concentration risk across the software ecosystem. Recent, widespread vulnerabilities, including log4shell, emphasize the degree to which single dependencies can underpin vast quantities of software. Without a systemic view into dependency patterns, government agencies and others will struggle immensely to assess risk across and within sectors. Given access to SBOMs from multiple sources, government could use that aggregated data to assemble a rough map of dependencies across slices of the digital ecosystem—a picture of not just the dependencies of one application, but of many, and more importantly, where they overlap. While contemporary software composition analysis (SCA) can provide similar insight into widely-depended-on software,23 running SCA tools across the far larger set of software considered here would likely prove far less feasible or replicable. To protect both intellectual property and the critical nodes such a map might highlight, government would need to take extra care in protecting this data, but it would prove useful in identifying under-secured or under-resourced dependencies ripe for proactive investment and support. Vulnerabilities in one company’s codebase or within a popular open-source repository can have global impact. Widespread ignorance about software dependencies hampers proactive support that might include security auditing, maintainer funding, development of alternate dependencies, or any number of other methods to reduce the risk of high-leverage dependencies.

Governments and private-sector companies currently lack measures that describe the scale of use of different pieces of software. Metrics such as download counts, license purchases, or userbase size do not provide information about deployment or reliance, either upstream or downstream. A package with only a single user could still be critically important if all kinds of different software depend on it. However, without relationship mapping, the entire ecosystem remains blind to that package’s position as an essential link in the supply chain. SBOMs can reduce this problem by providing data, when aggregated from many sources, for an ecosystem-wide view of software dependencies to CISA and other entities, even if only for part of an enterprise. CISA is likely to be tasked with some of this work should the Securing Open Source Software Act of 2022 (S.4913) pass into law, or under III.B.2 of OMB M-22-18 on Enhancing the Security of the Software Supply Chain through Secure Software Development Practices.24 25
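
The sketch below illustrates one way an aggregating body might begin building that view: counting how often each component appears across a directory of collected SPDX-style SBOMs. The directory path is hypothetical, and a real mapping effort would also need relationship data, component deduplication, and safeguards for the sensitive picture that emerges.

    import json
    from collections import Counter
    from pathlib import Path

    def dependency_concentration(sbom_dir):
        """Count how many collected SBOMs reference each (name, version) pair."""
        counts = Counter()
        for sbom_file in Path(sbom_dir).glob("*.json"):
            sbom = json.loads(sbom_file.read_text())
            components = {
                (package.get("name"), package.get("versionInfo"))
                for package in sbom.get("packages", [])
            }
            counts.update(components)
        return counts

    # Components referenced by many unrelated SBOMs are candidates for proactive
    # auditing, maintainer funding, or other ecosystem-level support.
    for (name, version), count in dependency_concentration("./collected_sboms").most_common(10):
        print(f"{name} {version}: referenced by {count} SBOMs")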

As more workloads move into the cloud, understanding and assessing the risk present in those systems is vital. One important use of SBOMs for software-as-a-service (SaaS) consumers is encouraging greater transparency in vulnerability reporting and mitigation inside cloud services. Over time, this information will help support more precise decision-making about the security practices of different vendors. While some companies have made policy choices about what to reveal to customers and what to withhold,26 SBOMs are useful tools for other companies to define their own policies, and for customers to push for what they (or regulators) find most comfortable. As part of this effort, good questions will need clear answers regarding how SBOMs can be most useful amidst widely varying configurations and associated products present in different SaaS deployments. Wider generation, use, and consumption provide incentives to determine and sharpen answers to these.

SBOMs can help, though differences between cloud and on-premises software create challenges. One is the speed at which the cloud changes. If SBOMs change minute-to-minute with cloud configurations, they might produce too much information and impede meaningful use by recipients. However, operating off out-of-date information is also risky. Additionally, cloud instances often utilize many different third-party services, so tracking the versioning of each service for each instance or configuration within an SBOM is difficult. Building SBOMs with this aggregate use case in mind will be important to managing this deluge of data, and a key to that is a clearer demand signal from consumers of cloud SBOMs, in and outside of the public sector, about how they aim to incorporate that data into their risk-management practices.

A standardized method for companies (and other entities) to inform each other of dependencies, used and combined at scale, would ease the task of assessing risk across sectors. For SBOMs to fulfill this role, the information contained within them must be consistently organized, filled, and updated, which might pose a challenge to organizational resources. Such data would be most useful when combined with assessments of the context surrounding any piece of software. Even so, SBOMs, as currently imagined, still provide a valuable piece of the puzzle not otherwise measurable. Better data on the arrangement of and relationships with the larger software ecosystem would allow CISA and other agencies to target resources more effectively toward shoring up mission-critical software.

Why define use cases at all?

Clearly defining the use cases will help guide and preserve the inertia of SBOM adoption and development, from shaping the automated tools for SBOM ingestion to pointing toward new product offerings and molding federal procurement policy. Only considering the challenges of SBOM generation while disregarding the other end of the pipe risks drowning purchasers, developers, and acquisition officers alike in a sea of useless spreadsheets and symbolic compliance certifications.

Though SBOMs and this paper’s considered uses of them are as important to proprietary software components as open-source ones, for the latter, they provide the beginnings of more fundamental guidance, too. Unlike in traditional supply chains for physical goods or in the exchange of proprietary code, OSS dependence rarely sees an exchange of money or a contractual agreement.27 Rather, there is simply a quick “pip install XX --user” and “import YY as ZZ,” often from the public repository. SBOM adoption can eventually change the nature of that informal incorporation, and policymakers still have a chance to sculpt, for better or for worse, the roles and responsibilities that will redefine the ecosystem.

A key policy challenge is determining exactly which entities are ultimately responsible for producing and publishing SBOMs. Suppliers to finished goods manufacturers, due to various global and national regulations, often must detail the source of their materials—whether from forced or child labor, farmed or created under sustainable practices, acquired legally, and so on. The answers have implications for marketing as well as compliance and legal departments. Someone in the chain of the software development lifecycle must be responsible for the creation of SBOMs, but the trust framework for the completeness and veracity of their claims has yet to be developed, and debate over who, precisely, is responsible for making them and what levers are appropriate for achieving compliance persists. Burdening open-source developers and maintainers with that task, though, is an overreach in the absence of ubiquitous tooling to generate SBOMs automatically.

At the regulatory level, all this is challenging, as countries take multiple approaches to what entity is responsible for providing compliance and conformance assurances. This also complicates how governments support the security of open-source software supply chains, as each may have a different goal or preferred method despite aligned motivations. In the United States, CISA wants to assist, even lead, efforts to secure critical open-source software. However, the culture of open-source communities, the history of their development, and the very tenets that make open source a vital font of innovation all buck against direct government regulation in such stewardship, especially given that open-source code, legally in the United States, is a form of free speech.28 Importantly, governments supporting the open-source ecosystem will not be able to rely on blanket requirements, and their assistance in identifying critical projects, supporting tooling development, and investing in developers and communities will provide more fruitful results.

SBOMs, sufficiently standardized and adopted, offer data that can serve critical policy challenges when combined with appropriate tooling and processes, allowing a better understanding of and investment in dependencies before incidents occur, as well as more complete vulnerability remediation fixes afterward. Applied and used correctly, SBOMs can make the ecosystem’s most capable actors responsible for its coherence. Incorrectly executed, burdensome requirements for SBOM generation could sterilize the open-source world’s thriving innovation.

So, what should you do about it?

SBOM generators have an outsized say in the use cases of SBOMs because they determine what each bill of materials contains. In developing tools for aggregation, analysis, and production of SBOMs, generators could do the following to speed adoption and provide a more complete, practical set of capabilities to SBOM consumers:

  • Develop tooling to convert from raw SBOM data to actionable information more intuitively. CISOs will not have the time or resourcing to parse through vast, rapidly changing dependency information tracked informally, but automated checks with customizable risk-tolerance leveling and other policies can make SBOMs a practical tool during acquisition and incorporation decision-making processes. Adding context, alongside SBOMs, that clearly declares what they do and do not contain and what purposes they serve can help here.
  • Develop tooling to provide more practical and varied information based on SBOM contents. Many of the use cases discussed above require a touch more detail than conveyed by current SBOM formats. This next layer of tooling, in tandem with products that coordinate SBOM consumption, will provide value both to their manufacturers and users.

The OMB and CISA have recently begun moving towards SBOM requirements at the federal level, likely in tandem with updates to government procurement processes and working with critical infrastructure sectors. They face a key challenge:

  • Provide better support for smaller enterprises that cannot easily adopt and produce SBOMs in a compliant manner. CISA might pursue this through added tooling and support in their small-to-medium business (SMB) programs and by tailoring any legal requirements to the unique needs and exposures of different sectors. These need not be new tools adding more complexity and variation to the SBOM landscape, but rather increased funding and guidance for SMBs to access tools normally available only to larger enterprises. Large IT vendors can also act as an intermediary in this provision by offering tooling and support for SMBs with government subsidies.

CISA could model practices to gather SBOM data beyond that used by a single enterprise. Wider collection of SBOM data is necessary for the envisioned aggregate use case. Although this process is more straightforward for open-source systems, there are valid concerns about SBOMs revealing proprietary information and providing attackers with the tools to identify vulnerable targets, particularly among software-as-a-service vendors, whose products are otherwise difficult to scrutinize. Industry could work with government to identify solutions to this information problem; doing so would increase the supply-chain insight SBOMs could provide. Aggregating and analyzing in-house collections of SBOMs first would be a good starting point and force government and industry to directly address the tradeoffs between identifying nodes of systemic risk to better secure them and pointing attackers to those nodes—some of which will be under-supported—through their identification.

SBOM users will need to provide the demand signals to producers that shape the future utility of software bills of materials. Often, users and consumers will be the same party, or at least departments within the same company, but they may also be small firms less focused on tech development, non-profits, or companies without the resources to do more than implement well-documented tooling. This responsibility is also a chance to extract significant value from SBOMs.

  • Accept the imperfect SBOM and iterate: If a complete SBOM must trace dependencies all the way down to another complete SBOM, complete SBOMs will rarely exist except for the simplest of components. Imperfect is not impractical. The processes that develop around SBOM use must not assume or depend on complete information. Industry and government could explicitly discuss how to navigate imperfect SBOMs and thresholds for acceptable inaccuracy while ensuring users can adopt and iterate on necessarily imperfect standards.
  • Innovate your use cases: Depending on the depth of information contained in or pointed to by an SBOM, consuming organizations might highlight the use of memory-unsafe languages, insecure calls, unmaintained libraries, or methods highlighted in the Open Web Application Security Project (OWASP) Top Ten and similar lists to block these technologies from their environment. Risk managers can even develop tools converting detailed SBOMs into tolerable-risk metrics.
  • Build with ease for the user in mind: Part of strengthening the utility and longevity of SBOMs is enabling the use of this rich source of data in a wide range of possible ways. Tooling should reflect the expectation that many users are non-expert and/or lack considerable resources for IT administration and security, prioritizing simplicity and intelligibility over maximal functionality. Enterprise support for SBOM-tool users can help here too.

Conclusion

Businesses and developers hold mixed sentiments toward requiring SBOM production in regulations. Keen observers will find working groups with names like “SBOMs Everywhere” staffed by employees from the very same companies funding letters (thinly veiled by trade associations) that denounce some efforts to promulgate SBOM requirements through policy.29 Part of this fractured view of SBOMs reflects the early stages of SBOM maturity, and part, the variety of opinions and incentives within large organizations too often treated as monolithic entities. More importantly, it reflects a disconnect among available government levers, SBOM functionality, and industry incentives. Procurement requirements are one of government’s most effective levers for shaping cybersecurity practices, and industry insistence that government wait for trivial or even default compliance before regulation is circular—if SBOMs were standard practice already, there would be no need to specifically request them to begin with, and government requiring higher security standards from its vendors is far from aberrant. The mismatch between federal security needs and the state of SBOM adoption and maturity is a significant opportunity for industry to continue to deepen its partnership with government and other would-be SBOM users to keep up the pace on SBOM development while shaping the tools serving SBOMs and the challenges that they can address.

A key question persists: what do SBOM producers stand to gain, short of compliance, from their considerable toil? Many prior requirements of large suppliers and component suppliers—self-attestations or FedRAMP requirements, for example—might have necessitated great expenditure in return for relatively small benefits to individual entities. Without making a clear case for SBOM use and the resultant tools that provide return on investment, policymakers advancing SBOMs risk mortgaging their future as a marketing tool—another sticker slapped on the proverbial product denoting begrudging compliance with federal requirements. Successful policy supporting SBOMs must put them on a sustainable path, tying hard and fast requirements to clear benefits for the ecosystem and the entities within it. Part of this means better articulating how SBOMs can be consumed and used toward a variety of ends and by a diversity of organizational types.

A lack of a clear, tangible value proposition, particularly in terms of the bottom line, future contracts, operations, or other immediately recognizable benefits, will create friction between parties that want to use SBOMs and those that will not willingly provide them, even as governments and other organizations push to make SBOMs a standard part of their procurements. It is worth noting that some of the best analogs to SBOMs share a similarly fraught origin. Nutrition labels, ingredient lists, and food-goods advertising regulations span a century-long tug-of-war between government, industry, and consumer.30 The transition from prepared-from-scratch meals to off-the-shelf purchasing helped spur Food and Drug Administration (FDA) regulation, as consumers required better visibility into their purchases.31 Notably, some companies already use SBOMs or similar data internally, of their own accord, and presumably for some of the benefits enumerated here—Google and Microsoft are two examples for which public records of such use are easy to find.32 33

This paper aims to remove some of the friction against SBOM adoption and strengthen their long-term utility as a source of data for important risk management decisions, showing potential consumers clear benefits from using SBOMs, nudging producers and tool developers towards new offerings, and making clear to policymakers the importance of decisions they are already considering. SBOMs, initially marketed in cybersecurity as a solution to the fact that one cannot secure dependencies one does not know about, can enable so much more along the way. It is time they were sold as such.

About the authors:

Amélie Koran is a nonresident senior fellow at the Cyber Statecraft Initiative under the Atlantic Council’s Digital Forensic Research Lab (DFRLab) and the current director of external technology partnerships for Electronic Arts, Inc. Koran has a wide and varied background of nearly thirty years of professional experience in technology and leadership in the public and private sectors.

Wendy Nather is a nonresident senior fellow at the Cyber Statecraft Initiative under the Atlantic Council’s Digital Forensic Research Lab (DFRLab) and leads the Advisory CISO team at Cisco.

Stewart Scott is an assistant director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). He works on the Initiative’s systems security portfolio, which focuses on software supply chain risk management and open source software security policy.

Sara Ann Bracket is a research assistant at the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She focuses her work on open-source software security, software bills of material, and software supply-chain risk management and is currently an undergraduate at Duke University.

Acknowledgments:

The authors of this paper would like to thank external reviewers John Speed Meyers, Aeva Black, William Bartholomew, and Allan Friedman, who all took significant time to provide input during its development, as well as Donald Partyka and Anais Gonzalez for designing the final document and others who contributed invaluable feedback along the way.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    “Open Source Software (OSS) Secure Supply Chain (SSC) Framework” (2022; repr., GitHub: Microsoft, August 4, 2022), https://github.com/microsoft/oss-ssc-framework/blob/165ba893f2080e75bc69acaa6ea3fc8550315738/specification/Open_Source_Software_(OSS)_Secure_Supply_Chain_(SSC)_Framework.pdf.
2    Incidents, rather than attacks, as several also included valid use cases and functionality leading to cascading failures or vulnerabilities—all important to recognize.
3    Adrian Bridgwater, “Linux Foundation Eases Open Source Licensing Woes,” Computer Weekly, August 19, 2011, https://web.archive.org/web/20210820144000/https:/www.computerweekly.com/blog/Open-Source-Insider/Linux-Foundation-eases-open-source-licensing-woes.
4    The Linux Foundation, “The Linux Foundation’s SPDXTM Workgroup Releases New Version of Software Package Data ExchangeTM Standard – Linux Foundation,” August 30, 2012, https://www.linuxfoundation.org/press/press-release/the-linux-foundations-spdx-workgroup-releases-new-version-of-software-package-data-exchange-standard-2.
5    In practice, many real SBOM-generation processes are more complex—build processes might resolve placeholder dependencies, with only the end result reflected in an SBOM, for example.
6    National Telecommunications and Information Administration (NTIA), “The Minimum Elements For a Software Bill of Materials (SBOM)” (Washington, DC: United States Department of Commerce, July 12, 2021), https://www.ntia.doc.gov/report/2021/minimum-elements-software-bill-materials-sbom.
7    Exec. Order. No. 14028 on Improving the Nation’s Cybersecurity, Federal Register, 86 FR 26633 (May 12, 2021), https://www.federalregister.gov/documents/2021/05/17/2021-10460/improving-the-nations-cybersecurity.
8    Timothy B. Lee, “The Heartbleed Bug, Explained,” Vox, May 14, 2015, https://www.vox.com/2014/6/19/18076318/heartbleed.
9    Larry Zelvin, “Reaction on ‘Heartbleed’: Working Together to Mitigate Cybersecurity Vulnerabilities | Homeland Security,” Department of Homeland Security, April 11, 2014 [Updated September 20, 2018], https://www.dhs.gov/blog/2014/04/11/reaction-%E2%80%9Cheartbleed%E2%80%9D-working-together-mitigate-cybersecurity-vulnerabilities-0.
10    Cisco Security, “Cisco Security Advisory: OpenSSL Heartbeat Extension Vulnerability in Multiple Cisco Products,” Cisco, April 9, 2014, http://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20140409-heartbleed.
11    NTIA, “The Minimum Elements For a Software Bill of Materials (SBOM).”
12    The Cybersecurity Coalition, “Comments on NTIA’s Request for Information (RFI) on ‘Software Bill of Materials Elements and Considerations,’” June 17, 2021, https://assets.website-files.com/60cd84aeadd2475c6229482f/60ec9f0a15e85933daa3b5ca_Coalition%20SBOM%20Response-Final%206-17-21.pdf.
13    Stephen Hendrick, “The State of Software Bill of Materials (SBOM) and Cybersecurity Readiness” (The Linux Foundation | Research, January 2022), https://8112310.fs1.hubspotusercontent-na1.net/hubfs/8112310/LF%20Research/State%20of%20Software%20Bill%20of%20Materials%20-%20Report.pdf. The survey does well acknowledging and striving to address the above-mentioned sources of possible bias explicitly, too.
14    National Telecommunications and Information Administration, “Healthcare SBOM Proof of Concept” (NTIA, April 29, 2021), https://www.ntia.doc.gov/files/ntia/publications/ntia_sbom_healthcare_update-2021-04-29.pdf.
15    Sourced from conversations with New York Presbyterian.
16    NTIA, “The Minimum Elements For a Software Bill of Materials (SBOM),” 12.
17    Velichka Atanasova, “Let’s Get SBOM Ready – Open Source Blog,” VMWare, April 14, 2022, https://blogs.vmware.com/opensource/2022/04/14/sbom-ready/.
18    Alliance for Digital Innovation et al., “Cautionary Notes on Codifying Use of SBOMs,” September 14, 2022, https://fcw.com/media/multi_association_letter_on_sbom_final_9.14.2022.pdf.
19    To differentiate vulnerability management and incident response, consider the former tracking vulnerabilities, relevant threat intelligence around dependencies, preemptive response planning, and determining whether a vulnerability impacts an enterprise. The latter comes into play after that determination—guiding patch efforts, outreach to third-party maintainers, mitigation, and tailoring general remediation plans to specific incidents.
20    One of several ways to generate an SBOM.
21    Ariadne Conill, “Not All SBOMs Are Created Equal,” Chainguard, April 22, 2022, https://www.chainguard.dev/unchained/not-all-sboms-are-created-equal.
22    National Telecommunications and Information Administration (NTIA), “Vulnerability-Exploitability eXchange (VEX) – An Overview,” September 27, 2021, https://www.ntia.gov/files/ntia/publications/vex_one-page_summary.pdf.
23    Frank Nagle et al., “Census II of Free and Open Source Software — Application Libraries” (Linux Foundation Research; OpenSSF; Laboratory for Innovation Sciences at Harvard: Harvard Laboratory for Innovation Science (LISH) and Open Source Security Foundation (OpenSSF), March 2, 2022), https://lish.harvard.edu/publications/census-ii-free-and-open-source-software-%E2%80%94-application-libraries.
24    “Securing Open Source Software Act of 2022,” S.4913, 117th Cong. (2022), https://www.congress.gov/bill/117th-congress/senate-bill/4913.
25    Shalanda Young, United States, Office of Management and Budget, OMB Memo to the Heads of Executive Departments and Agencies, M-22-18, “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices,” September 14, 2022, https://www.whitehouse.gov/wp-content/uploads/2022/09/M-22-18.pdf.
26    Kevin Beaumont [@GossiTheDog], “For Anybody Who Doesn’t Know, August 2022’s Windows Patches Included Fixes for NSA and GCHQ Reported Cryptographic Bugs. but MS Didn’t Tell You and Didn’t Issue a CVE.,” Tweet, Twitter, October 12, 2022, https://twitter.com/GossiTheDog/status/1580244775638212608.
27    Iliana Etaoin, “There Is No ‘Software Supply Chain,’” iliana.fyi, September 19, 2022, https://iliana.fyi/blog/software-supply-chain/
28    Alison Dame-Boyle, “EFF at 25: Remembering the Case That Established Code as Speech,” Electronic Frontier Foundation, April 16, 2015, https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech.
29    Alliance for Digital Innovation et al., “Cautionary Notes on Codifying Use of SBOMs,” September 14, 2022.
30    Institute of Medicine (US) Committee on Examination of Front-of-Package Nutrition Rating Systems and Symbols, “Front-of-Package Nutrition Rating Systems and Symbols: Phase I Report,” in History of Nutrition Labeling, ed. Ellen A. Wartella, Alice H. Lichtenstein, and Caitlin S. Boon (Washington, DC: National Academies Press (US), 2010), https://www.ncbi.nlm.nih.gov/books/NBK209859/.
31    Department of Nutritional Sciences, University of Texas at Austin, “Factual Food Labels: A Closer Look at the History,” April 6, 2018, https://he.utexas.edu/ntr-news-list/food-labels-history.
32    Jessica Lyons Hardcastle, “Google SLSA, Linux Foundation Drops SBOM for Supply Chain Security Boost,” SDxCentral, June 18, 2021, https://www.sdxcentral.com/articles/news/google-slsa-linux-foundation-drops-sbom-for-supply-chain-security-boost/2021/06/.
33    Simon Bisson, “How Microsoft Will Publish Info to Comply with Executive Order on Software Bill of Materials,” TechRepublic, May 6, 2022, https://www.techrepublic.com/article/microsoft-publish-info-comply-executive-order-software-bill-materials/.

The post The cases for using the SBOMs we build appeared first on Atlantic Council.

GRU 26165: The Russian cyber unit that hacks targets on-site https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/the-russian-cyber-unit-that-hacks-targets-on-site/ Fri, 18 Nov 2022 13:44:53 +0000 https://www.atlanticcouncil.org/?p=586134 Russian hackers are not always breaching targets from afar, typing on their keyboards in Moscow bunkers or St. Petersburg apartment buildings. Enter GRU Unit 26165, a military cyber unit with hackers operating remotely and on-site. Going forward, Western intelligence and law enforcement personnel, as well as multinational organizations, would be wise to pay attention. 

The post GRU 26165: The Russian cyber unit that hacks targets on-site appeared first on Atlantic Council.

Russian hackers are not always breaching targets from afar, typing on their keyboards in Moscow bunkers or St. Petersburg apartment buildings. For some Russian government hackers, foreign travel is part of the game. They pack up their equipment, get on international flights, and covertly move around abroad to hack into computer systems.  

Enter GRU Unit 26165 (of the military intelligence agency Glavnoye Razvedyvatelnoye Upravlenie), a military cyber unit with hackers operating remotely and on-site. Despite the security risks on-site cyber operations pose to governments and international organizations, and the questions they raise about how the West should track and combat Russian state hacking, Russia’s activities in this realm are not receiving sufficient policy attention. 

GRU Unit 26165, the 85th Main Special Communications Center 

In March 2018, after the GRU tried to murder former Russian intelligence officer Sergei Skripal and his daughter Yulia in Salisbury, England, using a Novichok nerve agent, the Kremlin came under international fire. British intelligence officials blamed the GRU, where Skripal used to work (and later became a British informant); the multinational Organization for the Prohibition of Chemical Weapons (OPCW), which enforces the Chemical Weapons Convention, launched an investigation; and in June of the same year, OPCW countries voted to let the body attribute chemical weapons attacks to particular actors. (A year later, the OPCW would formally ban Novichok nerve agents.) Additional journalistic investigations into the perpetrators, meanwhile, continued to point to the GRU’s involvement. 

Although the OPCW’s investigation was not made public for months, the Russian government decided to move quickly against the organization, turning to a tactical cyber unit to do so. 

OPCW Headquarters

On April 10, 2018, four Russian nationals landed at Amsterdam Schiphol Airport in the Netherlands. With diplomatic passports in hand, they were met by a member of the Russian embassy in The Hague. After loading a car with technical equipment—including a wireless network panel antenna to intercept traffic—the four individuals scouted the OPCW’s headquarters in The Hague for days, taking photos and circling the building before being intercepted by the Dutch General Intelligence and Security Service (Algemene Inlichtingen- en Veiligheidsdienst or AIVD) and sent back to Moscow. Seemingly, the plan had been for the operatives to hack into the OPCW’s systems to disrupt investigations into the attempted GRU chemical weapon attack.  

The Netherlands made all of this public on October 4, 2018, with Dutch intelligence identifying the four operators by name—Aleksei Sergeyevich Morenets and Evgenii Mikhaylovich Serebriakov were described as “cyber operators” and Oleg Mikhaylovich Sotnikov and Alexey Valerevich Minin were described as “HUMINT (human intelligence) support.” The AIVD linked all of these individuals to Russia’s GRU. A Department of Justice (DOJ) indictment issued on the same day went a step further, linking the hackers—Morenets and Serebriakov—to GRU Unit 26165. 

Unit 26165, otherwise known as Fancy Bear, was already known for breaking into systems from afar, including the Democratic National Committee in 2016 and World Athletics (previously the International Amateur Athletic Federation) in 2017. Yet, the revelations around the attempted OPCW hack made clear that Unit 26165 does much more. The full DOJ indictment, subsequently published by the National Security Archive at The George Washington University, alleged that Morenets “was a member of a Unit 26165 team that traveled with technical equipment to locations around the world to conduct on-site hacking operations to target and maintain persistent access to WiFi networks used by victim organizations and personnel.” Serebriakov also belonged to such a team. While Unit 26165 often conducts remote hacks from Russia, the indictment stated that “if the remote hack was unsuccessful or if it did not provide the conspirators with sufficient access to victims’ networks,” Unit 26165 would carry out “‘on-site’ or ‘close access’ hacking operations.” 
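To make the “close access” model concrete, the sketch below shows one way a targeted organization might passively watch for the kind of WiFi interception gear described above. It is a minimal illustration, not anything attributed to the AIVD, the OPCW, or the DOJ indictment: it assumes a Linux host with a wireless card in monitor mode (the interface name, SSID, and BSSID allow-list are placeholders) and uses the scapy packet library to flag the organization’s network name being advertised by an unknown access point, a common indicator of a rogue “evil twin” set up for close-access interception of WiFi traffic.

from collections import defaultdict
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

# BSSIDs the organization actually operates, keyed by SSID (placeholder values).
KNOWN_APS = {
    "hq-wifi": {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"},
}

seen = defaultdict(set)  # SSID -> set of BSSIDs observed over the air

def check_beacon(pkt):
    # Only examine 802.11 beacon frames, which advertise an access point's SSID.
    if not pkt.haslayer(Dot11Beacon):
        return
    bssid = pkt[Dot11].addr2
    ssid_elt = pkt.getlayer(Dot11Elt, ID=0)  # information element 0 carries the SSID
    ssid = ssid_elt.info.decode(errors="ignore") if ssid_elt else ""
    if not ssid or bssid in seen[ssid]:
        return
    seen[ssid].add(bssid)
    # Alert when a known network name is advertised by hardware we do not operate.
    if ssid in KNOWN_APS and bssid not in KNOWN_APS[ssid]:
        print(f"[!] possible rogue access point: SSID {ssid!r} from unknown BSSID {bssid}")

if __name__ == "__main__":
    # Requires root and a monitor-mode interface; channel hopping is omitted for brevity.
    sniff(iface="wlan0mon", prn=check_beacon, store=False)

In practice, enterprise wireless intrusion detection systems do this continuously and across channels; the point of the sketch is only to show how little infrastructure the defensive counterpart of a close-access operation requires.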

The OPCW incident was not the first time these particular hackers went abroad to conduct operations. According to the DOJ, Morenets traveled to Rio de Janeiro, Brazil, and Lausanne, Switzerland, in 2016 to break into WiFi networks used by people with access to the US Anti-Doping Agency, the World Anti-Doping Agency, and the Canadian Center for Ethics in Sport. Serebriakov, the indictment stated, also participated in these on-site hacking operations. Both individuals allegedly planned to target the Spiez Laboratory in Switzerland after the OPCW hack. The indictment alleged that Ivan Sergeyevich Yermakov, also part of GRU Unit 26165, provided remote reconnaissance support for his colleagues’ on-site hacking operation against the OPCW. 

Additionally, it is speculated that these on-site hackers were supported by another GRU unit, which is where the other two Russians caught in the Netherlands by the AIVD enter the picture. Sotnikov and Minin were described generically by the Dutch as HUMINT support for the two hackers, and as “Russian military intelligence officers” by the DOJ’s full indictment. Neither of these government documents mentions a specific GRU unit associated with Sotnikov or Minin. 

Published in tandem with the October 4, 2018 state disclosures was a new Bellingcat investigation linking Morenets’ Russian car to the Unit 26165 building in Russia. It also linked Minin’s car registration to the GRU “Conservatory.” The Conservatory—formally numbered GRU Unit 22177—is the Russian Defense Ministry’s Military Academy and a training site for the GRU, located in Moscow near GRU headquarters and other GRU training facilities. Due to Minin’s connection to 22177 and the Dutch and US governments’ vague references to Sotnikov and Minin as “HUMINT support” and “Russian military intelligence officers” separate from Unit 26165, numerous articles have speculated that operatives from another GRU unit were tasked to support the mission in The Hague. 

Stepping back, assessing the picture 

Policymakers should use this information as a case study for how Russian government hackers—and, theoretically, state hackers from other adversary countries—move around the world to break into systems. The use of on-site cyber operations abroad seems unique to this GRU team, with many possible motivations at play. It is unclear how high up the oversight chain these on-site operations go. What is clear, though, is that Western governments cannot restrict their hunt for Russian hackers to the digital sphere; they must also remember how Russian hacking fits into broader Russian intelligence activities, including overseas. 

There are several takeaways and implications that result from this information. The on-site, overseas cyber operations of GRU Unit 26165 appear to set it apart from other Russian government cyber units. Of course, cyber capabilities are a part of intelligence operations more broadly, and many human operations around the world leverage cyber reconnaissance on an ongoing basis. Nonetheless, when the United Kingdom (UK) released its own statement on Russian government cyber activity in October 2018, it clearly differentiated between the activities of Unit 26165 in the Netherlands, Brazil, and Switzerland and those of Unit 74455 (Sandworm), which it stressed “were carried out remotely—by GRU teams based within Russia.” The DOJ indictment appears to suggest, although this is not totally clear, that hackers going abroad are part of at least one specific sub-team within the broader cyber unit. Further, the DOJ indictment lists numerous examples of on-site hacks or hack attempts, but publicly available information has not exposed the same kind of on-site operations by Russia’s Foreign Intelligence Service, the SVR. 

The motivations behind the on-site operations of Unit 26165 are also a key question. Based on publicly available information, its proclivity for “close access” operations leans toward disrupting high-profile investigations into potentially embarrassing Russian government activity. The first set of reported hacks targeted international investigations into allegations of Russian doping at the Olympics; the second set of hacks targeted the international investigation into the attempted murder of the Skripals with chemical weapons. It is possible, therefore, that protecting the Kremlin’s image is a high priority. Simultaneously, the DOJ indictment stated that Unit 26165 carries out on-site operations when remote operations are unsuccessful, suggesting a more functional, effects-oriented motive for sending hackers overseas. 

However, there is another possibility: The GRU may simply be using on-site operations when it needs to draw attention away from its own failures. The botched attempt to murder Sergei and Yulia Skripal was carried out by GRU Unit 29155, a Russian military intelligence and assassination team with close relationships to the Signal Scientific Center federal research facility and the Ministry of Defense’s State Institute for Experimental Military Medicine in St. Petersburg, entities suspected of managing Russia’s Novichok program. GRU operatives are well-known for their high-risk appetites and sometimes overt violence, even relative to other Russian intelligence organs like the Federal Security Service (FSB), Russia’s domestic security agency. (That said, the FSB is a violent organization, too, carrying out repressive tactics in Russia and, in 2019, assassinating a Georgian asylum seeker in Berlin.) 

This tendency is playing out in cyberspace already, given that GRU teams are behind the NotPetya malware attack, shutdowns of Ukrainian power grids, and other more destructive, publicly visible operations. Such cyber activities, in line with broader intelligence cultures, stand in contrast to agencies like the SVR, which appears to place a premium on covertness, both online and offline. It is not out of the question, then, that the GRU sent Unit 26165 operatives overseas in a frantic bid to undermine an investigation into its own failed operation. That Unit 26165 hackers Morenets and Serebriakov may have had support from other parts of the GRU (HUMINT operators Sotnikov and Minin) in the OPCW plot suggests possible broader intra-agency coordination. But again, it is easy—and sometimes misguided—to assume there is more coordination within the Russian security services than actually occurs. 

All of this raises a final and more interesting question always at play in the Russian cyber ecosystem: How far up the chain does oversight of on-site hacks go? 

Related reading: “Untangling the Russian web: Spies, proxies, and spectrums of Russian cyber behavior,” an Atlantic Council issue brief by Justin Sherman (September 19, 2022), which analyzes the range of the Russian government’s involvement with different actors in the large, complex, and often opaque cyber web, as well as the risks and benefits the Kremlin perceives or gets from leveraging actors in this group, and concludes with three takeaways and actions for policymakers in the United States, as well as in allied and partner countries.

Cyber and information operations with high political sensitivity, which Moscow conceptualizes more cohesively than in the West, are more likely to be supervised by the Kremlin. The US intelligence community assessed, for example, that the influence actions targeting the 2016 US election were “approved at the highest levels of the Russian government,” and a similar conclusion was reached vis-à-vis President Vladimir Putin and Russia’s election interference in 2020. This may also be true for more traditional intelligence operations. When the UK finished its investigation into the murder of former Russian spy Alexander Litvinenko, who was killed on British soil with the radioactive material Polonium-210, it concluded that Putin and Russian Security Council head Nikolai Patrushev “probably” approved the killing. 

The GRU’s botched murder attempt on the Skripals garnered significant international attention. At the time, Russian officials were already criticizing the OPCW’s investigations into the Assad regime’s use of chemical weapons in Syria—called an attempt “to make the OPCW draw hasty but at the same time far-reaching conclusions” by Russia’s deputy foreign minister. When the investigation into the Skripal poisonings began, senior officials like Russian Foreign Minister Sergei Lavrov falsely claimed that a lab used by the OPCW picked up traces of a nerve agent possessed by NATO countries but not Russia. Putin, meanwhile, has always held particular contempt for people he perceives as betraying the Russian nation, once saying that “traitors always meet a bad end,” suggesting a kind of personal anger directed at individuals like Sergei Skripal who became agents for the West. The Olympic doping investigations, too, proved an embarrassment for Moscow. 

In this vein, it is quite possible that higher-level Kremlin officials may direct the GRU to act against investigations like OPCW’s, prompting the GRU to deploy Unit 26165 hackers to the Netherlands. It is also plausible that the activities of Unit 26165 merely reflect broader intelligence collection priorities, spying on those trying to “hurt” Russia, such as investigators looking into Russian athlete doping. Since there are few publicly known cases of Unit 26165 conducting “close access” operations, perhaps these are not representative samples, with the GRU carrying out these activities on its own after all. 

Regardless, the GRU is clearly sending hackers overseas to carry out operations. Going forward, Western intelligence and law enforcement personnel, as well as multinational organizations, would be wise to pay attention. 

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post GRU 26165: The Russian cyber unit that hacks targets on-site appeared first on Atlantic Council.

The 5×5—The rise of cyber surveillance and the Access-as-a-Service industry https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-rise-of-cyber-surveillance-and-the-access-as-a-service-industry/ Wed, 16 Nov 2022 05:01:00 +0000 https://www.atlanticcouncil.org/?p=586322 Experts discuss the rise of cyber surveillance and the impact of the Access-as-a-Service industry on the United States and its allies.

The post The 5×5—The rise of cyber surveillance and the Access-as-a-Service industry appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Approximately one year ago, on November 3, 2021, the US Commerce Department added four companies, including Israel-based NSO Group, to its Entity List for supporting cyber surveillance and access-as-a-service activities, “that are contrary to the national security or foreign policy interests of the United States.” Foreign governments used NSO Group’s products, notably its Pegasus spyware, to target individuals, such as journalists and activists, and suppress dissent. Just one month later, reporting indicated that Apple tipped off the US Embassy in Uganda that an undisclosed foreign government had targeted the iPhones of eleven embassy employees. 

A New York Times report published on November 12 reveals how close the United States was to using Pegasus for its own investigative purposes. The FBI, which previously acknowledged having acquired a Pegasus license for research and development, contemplated use of the tool in late 2020 and early 2021 and developed guidelines for how federal prosecutors would disclose its use in criminal proceedings. The FBI ultimately decided not to buy from NSO, amid the many stories of abuse of the tool by foreign governments, but the revelation underscores the double-edged nature of cyber surveillance technologies designed to support law enforcement and intelligence missions. 

There are dozens of firms in the Access-as-a-Service industry developing and proliferating a powerful class of surveillance technologies. We brought together a group of experts to discuss the rise of cyber surveillance and the impact of this industry on the United States and its allies. 

#1 What implications can foreign governments’ domestic cyber surveillance programs have on US national security?

Siena Anstis, senior legal advisor, Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto

“The proliferation of spyware presents a national security risk to the United States. These technologies facilitate not only the targeting of human rights defenders and civil society, but also provide an across-the-board opportunity to undertake acts of espionage through their ability to exploit vulnerabilities in popular applications and operating systems that impact everyone. This was well-illustrated by the targeting of US diplomats in 2021 with NSO Group’s Pegasus spyware. No one is safe from being targeted with this highly intrusive, silent, and increasingly hard to detect technology. This risk extends to the US government.” 

Winnona DeSombre, nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“We live in an increasingly interconnected world when it comes to data and surveillance. From an individual perspective, US citizens who work on national security regularly interface with relatives and friends abroad who may be surveilled. US military service members use TikTok, an app whose data flows back to China. Domestic surveillance in another country does not just touch that country’s citizens, but it also touches any US national who interfaces with that country’s people and corporations.” 

Lars Gjesvik, doctoral research fellow, Norwegian Institute of International Affairs

“Way back in ancient 2013, the US intelligence community warned that private companies were developing tools that aided foreign states in targeting US systems. Clearly, this has been of some concern for a decade and has some implications for national security. There is no doubt that such commercially available tools have done great harm when it comes to human rights and targeting civil society, and you have some reported cases like Project Raven where commercial tools start to become a national security problem as well.” 

Kirsten Hazelrig, policy lead, The MITRE Corporation

“There are absolutely direct threats to US interests from the use of cyber surveillance abroad—any newspaper will relay confirmed reports of US officials being targeted abroad by tools such as Pegasus. However, this is simply a new tool for an age-old game of espionage. Perhaps more insidious is how tools and programs can be abused to enable the spread of authoritarianism, degrade human rights, and erode democratic values. I am not sure if anyone fully understands the implications to national security if these capabilities are allowed to spread unchecked.” 

Ole Willers, postdoctoral researcher, Department of Organisation, Copenhagen Business School

“Within the context of cyber surveillance programs, the distinction between domestic and foreign operations is not always as clearcut. Domestic campaigns oftentimes target individuals located in other jurisdictions, including the United States. The targeting of Canadian-based activist Omar Abdulaziz by Saudi Arabian surveillance operations is a prominent example.”

#2 Where do cyber capabilities fit into the spectrum of surveillance technologies?

Anstis: “Spyware technology provides governments with the ability to undertake highly intrusive surveillance. Sophisticated versions of this technology provide complete entry into targeted devices, including the contents of encrypted communication apps, camera, microphone, documents stored on the phone, and more. This impacts not only targeted individuals, but also exposes those who communicate with these people such as friends, family, and colleagues. Governments have a variety of surveillance technologies at their disposal, and spyware is undoubtedly one of the most stealthy and intrusive tools on the market that makes it difficult, if not impossible, for journalists, human rights defenders, activists, and other members of civil society critical of the government to do their work.” 

DeSombre: “Cyber capabilities that feed into offensive cyber operations are usually far more tailored than surveillance technology writ large, especially compared to dragnet surveillance technologies. The little bit of overlap occurs when governments want to surveil targets who they believe are of higher value or harder to get to, in which case authoritarian governments will break out the more expensive capabilities like zero-days or purchase expensive spyware licenses like those offered by NSO and Candiru.” 

Gjesvik: “The term ‘surveillance technologies’ is quite broad, and it depends greatly on how you define it. But if you think about the capabilities and services provided to intelligence, law enforcement, or military agencies, then it is a question of how sophisticated they are and their scope. The most sophisticated cyber capabilities offered by the top-tier companies probably equal the capabilities of most intelligence agencies, and there is no real difference functionally in them being used domestically or against strategic adversaries.” 

Hazelrig: “Surveillance technologies are broad sets of tools that enable a human actor to achieve an objective, be it to improve traffic, indict a criminal, track terrorist movements, stalk a partner, or steal a competitor’s data. Cyber capabilities can range as widely as these objectives and their targets. They may range from low-end spyware to extremely sophisticated technology, and are almost always paired with additional tools and tradecraft that make them impossible to evaluate devoid of operational context.” 

Willers: “If we define cyber capabilities in terms of the various activities oriented towards gaining stealth access to digital information, their importance for surveillance operations can hardly be overstated. Whereas traditional surveillance technologies continue to play a role, cyber capabilities offer forms of access that are much more comprehensive. Access to a smartphone is fundamentally different from the traditional wiretap and allows for the real-time surveillance of location patterns, communications, web searches, financial transactions, and more.”

#3 What is the Access-as-a-Service industry and what kind of relationship should the United States and its allies have with it?

Anstis: “The Access-as-a-Service industry describes companies that provide services to different actors—often states—to access data or systems. In the past few years, we have seen an acceleration in human rights abuses associated with this industry and a growing formalization of the sector with private investors and states increasingly interested in the growth of these companies. Considering the litany of human rights abuses that follows the growing availability of the technologies and services offered by this industry, the United States and other states have an obligation to regulate and limit the availability of these technologies and the industry’s business practices.” 

DeSombre: “The Access-as-a-Service industry makes offensive cyber operations incredibly simple to pull off—aggregating disparate capabilities that take years of investment to make (zero-days, malware, training, infrastructure, processes) into a single solution that a government can purchase off the shelf and use easily. It is not necessarily a bad industry—the United States and its allies also rely on privatized talent to conduct cyber operations. However, the United States and its allies must be proactive about shaping responsible behavior within the industry to ensure these services are not purchased en masse by authoritarian regimes and adversaries.” 

Gjesvik: “Simply put, it is an industry that sells access to digital data and systems. A wide swathe of technologies and services fits into this definition. Considering what relationship Western states should have with it should start with acknowledging that most states rely on private contractors and capabilities to some extent. There are clear problems of democratic oversight and misuse, but having their intelligence agencies and law enforcement lose access to digital evidence and data is probably not something governments would accept, and smaller states would struggle to develop the capabilities themselves. It is hard to decide on a relationship with a surveillance industry without deciding on the role of surveillance in modern societies, and I do not think we have done that.” 

Hazelrig: “Access-as-a-Service, or the related but more colorfully named “hacker-for-hire” industry, are loose terms for the criminal actors that sell the information, capabilities, and services necessary to conduct cyber intrusions. These actors sell their wares with little regard as to impact and intent, enabling ransomware and other attacks.” 

Willers: “The Access-as-a-Service industry is a niche market that sells data access to state agencies, and it has repeatedly been singled out for facilitating the proliferation of offensive cyber capabilities to authoritarian states. The United States and its allies face a dilemma in that they rely on the Access-as-a-Service industry to provide domestic law enforcement and intelligence agencies with cutting edge technology. Simultaneously, they have a strong incentive to limit the availability of these technologies to other customers. Balancing these interests has proven extremely difficult, which is why I see a need to limit our dependency on the private sector within this context.” 


#4 In what ways does government surveillance compare and contrast with corporate surveillance?

Anstis: “Government surveillance is similar to corporate surveillance in that both exploit the fact that we increasingly live our lives on internet-connected devices. The data we generate in our daily interactions, which is then collected by companies and governments, can be used for a variety of purposes that target and exploit us—from the crafting of targeted advertising to location tracking to the mapping of a human right activist’s network. However, government surveillance differs in at least one important respect: governments have the power to not only surveil, but also to detain, torture, kidnap, or otherwise enact acts of violence against an individual. Spyware technologies facilitate the government’s ability to engage in these activities.” 

DeSombre: “The podcast I help run just made an episode on this! Effectively, corporate surveillance and government surveillance have two separate goals: corporations collect your data to sell (usually to advertisers who then target you with personalized advertisements), while the government collects data for law enforcement or national security purposes. US government surveillance has hard rules it must follow for collecting on US citizens, although some of this is circumvented by buying corporate data. US and EU companies are now getting increasingly constrained by data privacy laws as well. But these types of regulations on both companies and governments differ vastly from country to country.” 

Gjesvik: “When you think about who conducts the surveillance, the big difference would be the extent to which government surveillance is supposedly in the end about protecting its citizens while corporate surveillance is mainly about the interests of the corporation. If it is about who actually does the surveillance then the distinction between governments and private actors can be pretty blurry, as can the level of capabilities.” 

Hazelrig: “The technical aspects of government and commercial surveillance are similar, and often share tools and techniques. However, the practices around their use are widely different. For a large part, democratic states limit surveillance through public opinion and law. There is admittedly misuse and abuse, but an intent and organizational structure to ‘do good.’ This is not necessarily true of commercial capabilities that may be sold without understanding of or care about intended use. As the opaque commercial market evolves, we are just beginning to understand the full spectrum of uses and impacts. Democratic states need to develop norms for law enforcement and other acceptable uses of cyber intrusion and surveillance capabilities, and to enforce actions against those that violate these norms and the industry that supplies them.”

Willers: “Both can be problematic considering that privacy is a fundamental human right in the European Union. Access to personal information has become a key asset across many industries, but the gathering of this information is a purely private and for-profit undertaking, however problematic it may be. State surveillance derives from a desire to provide public safety, which can be a good thing as long as it remains proportional and rooted in democratic norms—conditions that cannot be taken for granted.”

#5 How has the Access-as-a-Service industry evolved over the past two decades and where do you see it going from here?

Anstis: “The Access-as-a-Service industry has become increasingly formalized in the past two decades, with growing interest from investors and states in terms of funding the industry, as well as accessing the services and technologies offered. I see the next few years as a critical turning point in the industry’s development. Countless human rights abuses have brought increased awareness that the services and technologies offered by the Access-as-a-Service industry have serious human rights ramifications—as well as national security concerns—that need to be addressed. With ongoing investigations in the European Parliament, the United States, and elsewhere into companies that participate in this industry, I hope that we will see more specific steps aimed at curbing and controlling it.” 

DeSombre: “Like every part of the cybersecurity ecosystem since the early 2000s, the Access-as-a-Service industry has grown, professionalized, and turned towards mobile, embedded, and other non-desktop systems. Your laptop is not the only place with interesting data!” 

Gjesvik: “This is a pretty opaque industry, and there is not a ton of structured encompassing data available that I am aware of, but there are some broad trends. The first is globalization, a quite substantive expansion of tools and technologies available, and a lot more money to be made as well. Going forward, I am probably most interested in the extent to which the industry is controllable by any state actor. Will recent efforts by the United States and the European Union succeed in limiting the worst excesses? Or will it just accelerate the diversification of suppliers?” 

Hazelrig: “So long as there have been criminal hackers, there have been ways for those with the right connections to procure intrusion services. However, about a decade ago, we started to see the emergence of professional firms that sold these services commercially, primarily to governments around the globe. The past couple of years has brought casual proliferation and a booming ‘consumer’ market—shady companies advertise euphemistically-phrased services on mainstream platforms such as LinkedIn, and many online criminal marketplaces have whole sections of specialty products and services from which to choose.” 

Willers: “The origins of the Access-as-a-Service industry can be traced back to a combination of privatization dynamics in the telecommunication sector during the 1990s, the rise of digital communication systems, and the political focus on surveillance in the aftermath of the September 11 terrorist attacks. Since then, the industry has developed at the speed of technology, and there is good reason to doubt that the United States remains in a position to control it. Limiting access to technology is difficult, especially when it is as mobile as spyware technology. This is why I doubt that the United States or any other country alone can control the operations of the market.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—The rise of cyber surveillance and the Access-as-a-Service industry appeared first on Atlantic Council.

The cyber strategy and operations of Hamas: Green flags and green hats https://www.atlanticcouncil.org/in-depth-research-reports/report/the-cyber-strategy-and-operations-of-hamas-green-flags-and-green-hats/ Mon, 07 Nov 2022 05:01:00 +0000 https://www.atlanticcouncil.org/?p=579898 This report seeks to highlight Hamas as an emerging and capable cyber actor, and help the policy community understand how similar non-state groups may leverage the cyber domain in the future.

The post The cyber strategy and operations of Hamas: Green flags and green hats appeared first on Atlantic Council.


Executive summary

Cyberspace as a domain of conflict often creates an asymmetric advantage for comparably less capable or under-resourced actors to compete against relatively stronger counterparts.1 As such, a panoply of non-state actors is increasingly acquiring capabilities and integrating offensive cyber operations into their toolkits to further their strategic aims. From financially driven criminal ransomware groups to politically inspired patriot hacking collectives, non-state actors have a wide range of motivations for turning to offensive cyber capabilities. A number of these non-state actors have histories rooted almost entirely in armed kinetic violence, from professional military contractors to drug cartels, and the United States and its allies are still grappling with how to deal with them in the cyber context.2 Militant and terrorist organizations have their own specific motivations for acquiring offensive cyber capabilities, and their operations therefore warrant close examination by the United States and its allies to develop effective countermeasures.

While most academic scholarship and government strategies on counterterrorism are beginning to recognize and address the integral role of some forms of online activity, such as digital media and propaganda on behalf of terrorist organizations, insufficient attention has been given to the offensive cyber capabilities of these actors. Moreover, US strategy,3 public intelligence assessments, and academic literature on global cyber threats to the United States overwhelmingly focuses on the “big four” nation-state adversaries—China, Russia, Iran, and North Korea. Before more recent efforts to address the surge in financially driven criminal ransomware operations, the United States and its allies deployed policy countermeasures overwhelmingly designed for use against state actors.

To the extent that US counterterrorism strategy addresses the offensive cyber threat from terrorist organizations, it is focused on defending critical infrastructure against the physical consequences of a cyberattack. Hamas, despite being a well-studied militant and terrorist organization, is expanding its offensive cyber and information capabilities, a fact that is largely overlooked by counterterrorism and cyber analysts alike. Overshadowed by the specter of a catastrophic cyberattack from other entities, the real and ongoing cyber threats posed by Hamas prioritize espionage and information operations.

This report seeks to highlight Hamas as an emerging and capable cyber actor, first by explaining Hamas’s overall strategy, a critical facet for understanding the group’s use of cyber operations. Next, an analysis will show how Hamas’s cyber activities do not indicate a sudden shift in strategy but, rather, a realignment that augments operations. In other words, offensive cyber operations are a new way for Hamas to do old things better. Finally, the policy community is urged to think differently about how it approaches similar non-state groups that may leverage the cyber domain in the future. This report can be used as a case study for understanding the development and implementation of cyber tools by non-state entities.

As the title of this report suggests, Hamas is like a green hat hacker—a term that is not specific to the group but recognized in the information security community as someone who is relatively new to the hacking world, lacking sophistication but fully committed to making an impact and keen to learn along the way.4 Hamas has demonstrated steady improvement in its cyber capabilities and operations over time, especially in its espionage operations against internal and external targets. At the same time, the organization’s improvisation, deployment of relatively unsophisticated tools, and efforts to influence audiences are all hallmarks of terrorist strategies. This behavior is in some ways similar to the Russian concept of “information confrontation,” featuring a blend of technical, information, and psychological operations aimed at wielding influence over the information environment.5

Understanding these dynamics, as well as how cyber operations fit into the overall strategy, is key to the US development of effective countermeasures against terrorist organizations’ offensive cyber operations.

“Pwn” goal

In the summer of 2018, as teams competed in the International Federation of Association Football (FIFA) World Cup in Russia, Israeli soldiers followed the excitement on their smartphones from an Israel Defense Forces (IDF) base thousands of miles away. Like others in Israel, the soldiers were using a new Android application called Golden Cup, available for free from the Google Play store. The program was promoted in the lead up to the tournament as “the fastest app for live scores and fixtures for the World Cup.”6 The easy-to-use application delivered as advertised—and more.

Once installed, the application communicated with its command-and-control server to surreptitiously download malicious payloads onto user devices. The payloads infected the target devices with spyware, a variety of malware that discreetly monitors the target’s device and steals its information, usually for harmful use against the target individual.7 In this particular case, the spyware was intentionally deployed after the application was downloaded from the Google Play store in order to bypass Google’s security screening process.8 This allowed the spyware operator to remotely execute code on user smartphones to track locations, access cameras and microphones, download images, monitor calls, and exfiltrate files.
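The staged-delivery trick described here (a benign-looking app that passes store review and only later pulls its spyware payload from a command-and-control server) typically relies on the platform’s standard dynamic code-loading mechanisms. As a purely illustrative, defender-side sketch (this is not the tooling Google or the IDF used, and the marker list is an assumption), the Python below flags APKs whose bytecode references Android’s runtime class loaders, a coarse but common triage heuristic for apps that might fetch a second stage after installation.

import sys
import zipfile

# Class descriptors that appear in an app's DEX string pool when it loads code
# at runtime; their presence is a triage signal, not proof of malice.
SUSPICIOUS_MARKERS = [
    b"Ldalvik/system/DexClassLoader;",
    b"Ldalvik/system/InMemoryDexClassLoader;",
]

def scan_apk(path):
    """Return human-readable hits for dynamic code-loading references in an APK."""
    hits = []
    with zipfile.ZipFile(path) as apk:  # an APK is an ordinary zip archive
        for name in apk.namelist():
            if name.startswith("classes") and name.endswith(".dex"):
                dex_bytes = apk.read(name)
                for marker in SUSPICIOUS_MARKERS:
                    if marker in dex_bytes:
                        hits.append(f"{name}: {marker.decode()}")
    return hits

if __name__ == "__main__":
    for apk_path in sys.argv[1:]:
        findings = scan_apk(apk_path)
        verdict = "references dynamic code loading" if findings else "no obvious markers"
        print(f"{apk_path}: {verdict}")
        for line in findings:
            print("    " + line)

Legitimate apps also load code dynamically, so a hit from a heuristic like this would only mark an app for deeper review, for example of the network endpoints it contacts after installation.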

Golden Cup users, which included Israeli civilians and soldiers alike, did not realize that their devices were infected with spyware. As soldiers went about their daily routines on bases, the spyware operators reaped reams of data from the compromised smartphones. In just a few weeks of discreet collection, before discovery by IDF security, the adversary successfully collected non-public information about various IDF bases, offices, and military hardware, such as tanks and armored vehicles.9

The same adversary targeted Israeli soldiers with several other malicious Android applications throughout the summer of 2018. A fitness application that tracks user running routes collected the phone numbers of soldiers jogging in a particularly sensitive geographic location. After collecting these numbers, the adversary targeted the soldiers with requests to download a second application that then installed spyware. Additional targeting of Israeli soldiers that same summer included social engineering campaigns encouraging targets to download various spyware-laced dating applications with names like Wink Chat and Glance Love, prompting the IDF to launch the aptly named Operation Broken Heart in response.10

Surprisingly, this cyber espionage campaign was not the work of a nation-state actor. Although the clever tradecraft exhibited in each operation featured many of the hallmarks of a foreign intelligence service, neither Israel’s geopolitical nemesis Iran nor China,11 an increasingly active Middle East regional player, was involved.12 Instead, the campaign was the work of Hamas.

1. Introduction

The asymmetric advantage afforded by cyberspace is leading a panoply of non-state actors to acquire and use offensive cyber capabilities to compete against relatively stronger counterparts. The cyber threat from criminal ransomware organizations has been well documented, yet a range of other non-state actors traditionally involved in armed kinetic violence, from professional military contractors to drug cartels, is also trying its hand at offensive cyber operations, and the United States and its allies are still grappling with how to respond. Each actor has a discrete motivation for dabbling in cyber activities, and lumping them all into one bucket of non-state actors can complicate efforts to study and address their actions. The operations of militant and terrorist organizations in particular warrant close examination by the United States and its allies in order to develop effective countermeasures.

A robust online presence is essential for modern terrorist organizations. They rely on the internet to recruit members, fund operations, indoctrinate target audiences, and garner attention on a global scale—all key functions for maintaining organizational relevance and for surviving.13 The 2022 Annual Threat Assessment from the US Intelligence Community suggests that terrorist groups will continue to leverage digital media and internet platforms to inspire attacks that threaten the United States and US interests abroad.14 Recent academic scholarship on counterterrorism concurs, acknowledging the centrality of the internet to various organizations, ranging from domestic right-wing extremists to international jihadists, and their efforts to radicalize, organize, and communicate.

The US government has taken major steps in recent years to counter terrorist organizations in and through cyberspace. The declassification of documents on Joint Task Force Ares and Operation Glowing Symphony, which began in 2016, sheds light on complex US Cyber Command efforts to combat the Islamic State in cyberspace, specifically targeting the group’s social media and propaganda efforts and leveraging cyber operations to support broader kinetic operations on the battlefield.15 The latest US National Strategy for Counterterrorism, published in 2018, stresses the need to impede terrorist organizations from leveraging the internet to inspire and enable attacks.16

Indeed, continued efforts to counter the evolving social media and propaganda tools of terrorist organizations will be critical, but this will not comprehensively address the digital threat posed by these groups. Counterterrorism scholarship and government strategies have paid scant attention to the offensive cyber capabilities and operations of terrorist organizations, tools that are related but distinct from other forms of online influence. Activities of this variety do not necessarily cause catastrophic physical harm, but their capacity to influence public perception and, potentially, the course of political events should be cause for concern.

Several well-discussed, politically significant non-state actors with histories rooted almost entirely in kinetic violence are developing, or otherwise acquiring, offensive cyber capabilities to further their interests. More scrutiny of these actors, their motivations, and how they strategically deploy offensive cyber capabilities in conjunction with evolving propaganda and kinetic efforts is warranted to better orient toward the threat.

Hamas, a Palestinian political party and militant terrorist organization that serves as the de facto governing body of the Gaza Strip, is one such actor. The group’s burgeoning cyber capabilities, alongside its propaganda tactics, pose a threat to Israel, the Palestinian Authority, and US interests in the region—especially in tandem with the group’s capacities to fund, organize, inspire, and execute kinetic attacks. This combination of capabilities has historically been the dominion of more powerful state actors. However, the integration of offensive cyber capabilities into the arsenals of traditionally kinetic non-state actors, including militant organizations, is on the rise due to partnerships with state guarantors and the general proliferation of these competencies worldwide.

This report seeks to highlight the offensive cyber and information capabilities and behavior of Hamas. First, a broad overview of Hamas’s overall strategy is provided, an understanding of which is key for evaluating its cyber activities. Second, this report analyzes the types of offensive cyber operations in which Hamas engages, showing that the adoption of cyber capabilities does not indicate a sudden shift in strategy but, rather, a realignment of strategy and an augmentation of operations. In other words, offensive cyber operations are a new way to do old things better. Third, this report aims to push the policy community to think differently about its approach to similar non-state groups that may leverage the cyber domain in the future.

2. Overview of Hamas’s strategy

Principles and philosophy

Founded in the late 1980s, Harakat al-Muqawamah al-Islamiyyah, translated as the Islamic Resistance Movement and better known as Hamas, is a Palestinian religious political party and militant organization. After Israel disengaged from the Gaza Strip in 2005, Hamas used its 2006 Palestinian legislative election victory to take over militarily from rival political party Fatah in 2007. The group has served as the de facto ruler of Gaza ever since, effectively dividing the Palestinian Territories into two entities, with the West Bank governed by the Hamas-rejected and Fatah-controlled Palestinian Authority.17

Hamas’s overarching objectives are largely premised on its founding principles—terminating what it views as the illegitimate State of Israel and establishing Islamic, Palestinian rule.18 The group’s grand strategy comprises two general areas of focus: resisting Israel and gaining political clout with the Palestinian people. These objectives are interconnected and mutually reinforcing, as Hamas’s public resistance to Israel feeds Palestinian perceptions of the group as the leader of the Palestinian cause.19

Map of Israel and the Palestinian Territories.
Source: Nations Online Project

Despite Hamas’s maximalist public position on Israel, the organization’s leaders are rational actors who logically understand the longevity and power of the State of Israel. Where the group can make meaningful inroads is in Palestinian politics, trying to win public support away from the more secular, ruling Fatah party and positioning itself to lead a future Palestinian state. Looming uncertainty about the future of an already weak Palestinian Authority, led by the aging President Mahmoud Abbas, coupled with popular demand for elections, presents a potential opportunity for Hamas to fill a leadership vacuum.20

To further these objectives, Hamas attracts attention by frequently generating and capitalizing on instability. The group inflames already tumultuous situations to foster an environment of extremism, working against those who are willing to cooperate in the earnest pursuit of a peaceful solution to the Israel–Palestine conflict. Hamas uses terror tactics to influence public perception and to steer political outcomes, but still must exercise strategic restraint to avoid retaliation that could be militarily and politically damaging. Given these self-imposed restraints, Hamas seeks alternative methods of influence that are less likely to result in blowback.

Terrorism strategy

Hamas’s terror tactics have included suicide bombings,21 indiscriminate rocket fire,22 sniper attacks,23 incendiary balloon launches,24 knifings,25 and civilian kidnappings,26 all in support of its larger information strategy to project a strong image and to steer political outcomes. Through these activities, Hamas aims to undermine Israel and the Palestinian Authority27 and challenge the Palestine Liberation Organization’s (PLO)28 standing as the “sole representative of the Palestinian people.”

Terrorism forms the foundation of Hamas’s approach, and the organization’s leadership openly promotes such activities.29 While the group’s terror tactics have evolved over time, they have consistently been employed against civilian targets to provoke fear, generate publicity, and achieve political objectives. Israeli communities targeted by terrorism, as well as Palestinians in Gaza living under Hamas rule, suffer from considerable physical and psychological stress,30 driving Israeli policymakers to carry out military operations, often continuing a vicious cycle that feeds into Hamas’s information campaign.

These terrorist tactics follow a coercive logic that aligns with Hamas’s greater messaging objectives. Robert Pape’s “The Strategic Logic of Suicide Terrorism” specifically names Hamas as an organization with a track record of perpetrating strategically timed suicide terrorist attacks for coercive political effect.31 In 1995, for example, Hamas conducted a flurry of suicide attacks, killing dozens of civilians in an attempt to pressure the Israeli government to withdraw from certain locations in the West Bank. Once negotiations were underway between Israel and the PLO, Hamas temporarily suspended the attacks, only to resume them against Israeli targets when diplomatic progress appeared to stall. Israel would eventually partially withdraw from several West Bank cities later that year.32

Similarly, just several months before Israel’s 1996 general election, incumbent Labor Party Prime Minister Shimon Peres led the polls by roughly 20 percent in his reelection bid against Benjamin Netanyahu and the Likud Party. However, a spate of Hamas suicide bombings cut Peres’s lead and Netanyahu emerged victorious.33 The attacks were designed to weaken the reelection bid of Peres, widely viewed as the candidate most likely to advance the peace process, and strengthen the candidacy of Netanyahu. Deliberate terror campaigns such as these demonstrate the power Hamas wields over Israeli politics.34

The Israeli security establishment has learned lessons from the phenomenon of suicide terrorism, implementing countermeasures to foil attacks. Since the mid-2000s, Hamas has shifted its focus to firing rockets of various ranges and precision from the Gaza Strip at civilian population centers in Israel.35 The rocket attacks became frequent after Israel’s disengagement from Gaza in 2005, ebbing and flowing in alignment with significant political events.36 For instance, the organization targeted towns in southern Israel with sustained rocket fire in the lead up to the country’s general election in 2009 to discourage Israelis from voting for pro-peace candidates.37

A rocket fired from the Gaza Strip into Israel, 2008.
Source: Flickr/paffairs_sanfrancisco

Strategic restraint

Each of these terror tactics has the powerful potential to generate publicity with Israelis, Palestinians, and audiences elsewhere. However, unrestrained terrorism comes at a cost, something Hamas understands. Hamas must weigh its desire to carry out attacks with the concomitant risks, including an unfavorable international perception, military retaliation, infrastructure damage, and internal economic and political pressures.

Hamas addresses this in a number of ways. First, it limits its operations, almost exclusively, to Israel and the Palestinian Territories. Hamas has learned from the failures of other Palestinian terrorist organizations, whose operations beyond Israel’s borders were often counterproductive, attracting legitimate international criticism of these groups.38 Such operations also run the risk of alienating critical Hamas benefactors like Qatar and Turkey.39 These states, which maintain important relationships with the United States—not to mention burgeoning ties with Israel—could pressure Hamas to course correct, if not outright withdraw their support for the organization.40 The continued flow of billions of dollars in funding from benefactors like Qatar is critical, not just to Hamas’s capacity to conduct terror attacks and wage war,41 but also to its efforts to reconstruct infrastructure and provide social services in the Gaza Strip, both key factors for building its political legitimacy among Palestinians.42

Second, with each terrorist attack, Hamas must weigh the potential for a forceful Israeli military response. The cycle of terrorism and retaliation periodically escalates into full-scale wars that feature Israeli air strikes and ground invasions of Gaza. These periodic operations are known in the Israeli security establishment as “mowing the grass,” a component of Israel’s strategy to keep Hamas’s arsenal of rockets, small arms, and infrastructure, including its elaborate underground tunnel network, from growing out of control like weeds in an unkempt lawn.43 Hamas’s restraint has been apparent since May 2021, when Israel conducted Operation Guardian of the Walls, a roughly two-week campaign of mostly airstrikes and artillery fire aimed at slashing the group’s rocket arsenal and production capabilities, crippling its tunnels, and eliminating many of its top commanders. Hamas is thought to be recovering and restocking since the ceasefire, carefully avoiding engaging in provocations that could ignite another confrontation before the group is ready.

Third, and critically, the period since mid-2021 has been one of the quietest stretches of the Israel–Hamas conflict in decades, due to the Israeli Bennett–Lapid government’s implementation of a sizable civil and economic program for Gaza.44 The program expands the number of permits for Palestinians from Gaza to work in Israel, where the daily wages of one worker are enough to support an additional ten Palestinians.45 Israel’s Defense Ministry signed off on a plan to gradually increase work permit quotas for Palestinians from Gaza to an unprecedented 20,000, with reports suggesting plans to eventually increase that number to 30,000.46 For an impoverished territory with an unemployment rate of around 50 percent, permits to work in Israel improve the lives of Palestinians and stabilize the economy. The program also introduced economic incentives for Hamas to keep the peace—conducting attacks could result in snap restrictions on permits and border crossing closures, leading to a public backlash, as well as internal political blowback within the group. The power of this economic tool was evident throughout Israel’s Operation Breaking Dawn in August 2022, during which Israel conducted a three-day operation to eliminate key military assets and personnel of the Palestinian Islamic Jihad (PIJ), another Gaza-based terrorist organization. Israel was careful to communicate its intention to target PIJ, not Hamas. Ordinarily a ready-and-willing belligerent in such flare-ups, Hamas did nothing to restrain the PIJ but remained conspicuously on the sidelines, refraining from fighting out of its interest in resuming border crossings as quickly as possible.47

Searching for alternatives

Given these limitations, blowbacks, and self-imposed restraints, Hamas is finding alternative methods of influence. Under the leadership of its Gaza chief Yahya Sinwar, Hamas is endeavoring to inspire Arab Israelis and West Bank Palestinians to continue the struggle by taking up arms and sparking an intifada while the group nurses itself back to strength.48 To further this effort, Hamas is turning to more insidious means of operating in the information space to garner support and ignite conflagrations without further jeopardizing its public reputation, weapons stockpiles, infrastructure, or the economic well-being of the Palestinians living under its control. Like many state actors working to advance strategic ambitions, Hamas has turned to offensive cyber operations as a means of competing below the threshold of armed conflict.

Deploying offensive cyber capabilities involves exceptionally low risks and costs for operators. For groups like Hamas that are worried about potential retaliation, these operations present an effective alternative to kinetic operations that would otherwise provoke an immediate response. Most national cyber operation countermeasures are geared toward state adversaries and, in general, finding an appropriate response to non-state actors in this area has been challenging. Many state attempts to retaliate and deter have been toothless, resulting in little alteration of the adversary’s calculations.49

3. Hamas’s cyber strategy

The nature of the cyber domain allows weak actors, like Hamas, to engage and inflict far more damage on powerful actors, like Israel, than would otherwise be possible in conventional conflict.50 This asymmetry means that cyberspace offers intrinsically covert opportunities to store, transfer, and deploy consequential capabilities with far less need for organizational resources and financial or human capacity than in industrial warfare. Well-suited to support information campaigns, cyber capabilities are useful for influencing an audience without drawing the attention and repercussions of more conspicuous operations, like terrorism. In these ways, cyber operations fit into Hamas’s overall strategy and emphasis on building public perception and influence. Making sense of this strategy allows a greater understanding of past Hamas cyber operations, and how the group will likely operate in the cyber domain going forward.

More than meets the eye

[Image: Aerial imagery of a Hamas cyber operations facility destroyed by the Israel Defense Forces in the Gaza Strip in May 2019. Source: Israel Defense Forces]

Hamas’s cyber capabilities, while relatively nascent and lacking the sophisticated tools of other hacking groups, should not be underestimated. It comes as a surprise to many security experts that Hamas—chronically plagued by electricity shortages in the Gaza Strip, which averages just ten to twelve hours of electricity per day—even possesses cyber capabilities.51 Israel’s control over the telecommunications frequencies and infrastructure of the Gaza Strip raises further doubts about how Hamas could operate a cyber program.52 In 2019, however, Israel deemed the offensive cyber threat critical enough that, after thwarting an operation, the IDF carried out a strike to destroy Hamas’s cyber headquarters,53 one of the first acknowledged kinetic operations by a military in response to a cyber operation. Yet despite an IDF spokesperson’s claim that “Hamas no longer has cyber capabilities after our strike,” public reporting has highlighted various Hamas cyber operations in the ensuing months and years.54

This dismissive attitude toward Hamas’s cyber threat also overlooks the group’s operations from outside the confines of the Gaza Strip. Turkish President Recep Tayyip Erdoğan and his AKP Party share ideological sympathies with Hamas and have extended citizenship to Hamas leadership.55 The group’s leaders have allegedly used Turkey as a base for planning attacks and even as a safe haven for an overseas cyber facility.56 Hamas maintains even more robust relationships with other state supporters, namely Iran and Qatar, which provide financing, safe havens, and weapons technology.57 With the assistance of state benefactors, Hamas will continue to develop offensive cyber and information capabilities that, if overlooked, could result in geopolitical consequences.

For at least a decade, Hamas has engaged in cyber operations against Israeli and Palestinian targets. These operations can be divided into two broad categories that align with Hamas’s overall strategy: espionage and information. The first category, cyber espionage operations, accounts for the majority of Hamas’s publicly reported cyber activity and underpins the group’s information operations.

Espionage operations

Like any state or non-state actor, Hamas relies on quality intelligence to provide its leadership and commanders with decision-making advantages in the political and military arenas. The theft of valuable secrets from Israel, rival Palestinian factions, and individuals within its own ranks provides Hamas with strategic and operational leverage, and is thus prioritized in its cyber operations.

The Internal Security Force (ISF) is Hamas’s primary intelligence organization, composed of members of the al-Majd security force from within the larger Izz al-Din al-Qassam Brigades, Hamas’s military wing. The ISF’s responsibilities range from espionage to quashing political opposition and dissent from within the party and its security apparatus.58 The range of the ISF’s missions manifests in Hamas’s cyber operations.

Tactical evolution

Naturally, Israel is a primary target of Hamas’s cyber espionage. These operations have become commonplace over the last several years, gradually evolving from broad, blunt tactics into more tailored, sophisticated approaches. The group’s initial tactics focused on a “spray and pray” approach, distributing impersonal emails with malicious attachments to a large number of targets in the hope that a subset would bite. For example, an operation that began in mid-2013 and was discovered in February 2015 entailed Hamas operators luring targets with the promise of pornographic videos that were in fact malware. The operators relied on their victims—which included targets across the government, military, academic, transportation, and infrastructure sectors—withholding information about the incidents from their workplace information technology departments out of shame for clicking on pornography at work, thereby maximizing the operators’ access and time on target.59

Later Hamas operations implemented various tactical updates to increase their chances of success. In September 2015, the group began including links rather than attachments, non-pornographic lures such as automobile accident videos, and additional encryption of the exfiltrated data.60 Another campaign, publicized in February 2017, involved a more personalized approach, using social engineering techniques to target IDF personnel with malware from fake Facebook accounts.61 In subsequent years, the group began rolling out a variety of smartphone applications and marketing websites to surreptitiously install mobile remote access trojans on target devices. In 2018, the group implanted spyware on smartphones by masquerading as Red Alert, a rocket siren application for Israelis.62 Similarly, in 2020, Hamas targeted Israelis through dating apps with names like Catch&See and GrixyApp.63 As previously mentioned, Hamas also cloaked its spyware in a seemingly benign World Cup application that allowed the group to collect information on a variety of IDF military installations and hardware, including armored vehicles. These are all areas Hamas commanders have demonstrated interest in learning more about in order to gain a potential advantage in a future kinetic conflict.64

According to the Israeli threat intelligence firm Cybereason, more recent discoveries indicate a “new level of sophistication” in Hamas’s operations.65 In April 2022, a cyber espionage campaign targeting individuals from the Israeli military, law enforcement, and emergency services used previously undocumented malware featuring enhanced stealth mechanisms, indicating that Hamas is taking more steps than ever to protect its operational security.66 The infection vector for this particular campaign was social engineering on platforms like Facebook, a hallmark of many Hamas espionage operations, to dupe targets into downloading trojanized applications. Once the malware is downloaded, Hamas operators can access a wide range of information from the device’s documents, camera, and microphone, acquiring immense data on the target’s whereabouts, interactions, and more. Information collected from military, law enforcement, and emergency services personnel can be useful on its own or for its potential extortion value.

As part of its power struggle with the Palestinian Authority and rival Fatah party, Hamas targets Palestinian political and security officials with similar operations. In another creative cyber espionage operation targeting the Palestinian Authority, Hamas operators used hidden malware that abused the widely used cloud platform Dropbox to exfiltrate information.67 The same operation targeted political and government officials in Egypt,68 an actor Hamas is keen to surveil given its shared border with the Gaza Strip and its role brokering ceasefires and other negotiations between Israel and Hamas.

Other common targets of Hamas’s cyber espionage campaigns are members of its own organization. One of the ISF’s roles is counterintelligence, a supremely important field for an organization rife with internecine political rivalries,69 as well as paranoia about the watchful eyes of Israeli and other intelligence services. According to Western intelligence sources, one of the main missions of Hamas’s cyber facility in Turkey is conducting counterintelligence against Hamas dissenters and spies.70 Hamas is sensitive to the possibility of Palestinians within its ranks and others acting as “collaborators” with Israel, and the group has occasionally summarily executed individuals on suspicion of serving as Israeli intelligence informants.71

Information operations

While the bulk of Hamas’s cyber operations place a premium on information gathering, a subset involves using this information to further the group’s efforts to influence the public. This broadly defined category of information operations comprises everything from hack-and-leaks to defacements to social media campaigns that advance narratives beneficial to Hamas.

Hack-and-leak operations, in which hackers acquire secret or otherwise sensitive information and subsequently make it public, are clear attempts to shift public opinion and “simulate scandal.”72 The strategic dissemination of stolen documents, images, and videos—potentially manipulated—at critical junctures can be a windfall for a group like Hamas. In December 2014, Hamas claimed credit for hacking the IDF’s classified network and posting multiple videos taken earlier in the year during Israel’s Operation Protective Edge in the Gaza Strip.73 The clips, which Hamas superimposed with Arabic captions,74 depicted sensitive details about the IDF’s operation, including two separate instances of Israeli forces engaging terrorists who had infiltrated Israel to carry out kidnappings: one group arriving by sea en route to Kibbutz Zikim and another via a tunnel under the border into Kibbutz Ein HaShlosha. One of the raids resulted in a firefight that lasted roughly six hours and the deaths of two Israelis.75 By leaking the footage, including images of the dead Israelis, Hamas sought to project itself as a strong leader to Palestinians and to instill fear among Israelis, boasting about its ability to infiltrate Israel, kill Israelis, and return to Gaza. These operations are intended to demonstrate Hamas’s strength on two levels: first, its ability to hack and steal valuable material from Israel, and second, its boldness in carrying out attacks to further the Palestinian national cause.

Defacement is another tool in Hamas’s cyber arsenal. This sort of operation, a form of online vandalism that usually involves breaching a website to post propaganda, is not so much devastating as it is a nuisance.76 The operations are intended to embarrass the targets, albeit temporarily, and generate a psychological effect on an audience. In 2012, during Israel’s Operation Pillar of Defense in the Gaza Strip, Hamas claimed responsibility for attacks on Israeli websites, including that of the IDF’s Home Front Command, asserting that the cyber operations were “an integral part of the war against Israel.”77 Since then, Hamas has demonstrated its ability to reach potentially wider audiences through defacement operations. Notably, in July 2014 during Operation Protective Edge, Hamas gained access to the satellite broadcast of Israel’s Channel 10 television station for a few minutes, broadcasting images purportedly depicting Palestinians injured by Israeli airstrikes in the Gaza Strip. The Hamas hackers also displayed a threat in Hebrew text: “If your government does not agree to our terms, then prepare yourself for an extended stay in shelters.”78

Hamas has both conducted defacement operations itself and relied on an army of “patriotic hackers.” Patriotic hacking, cyberattacks against a perceived adversary performed by individuals on behalf of a nation, is not unique to the Israeli–Palestinian conflict. States have turned to sympathetic citizens around the world for support, often directing individual hackers to deface adversaries’ websites, as Ukraine did after Russia’s 2022 invasion.79 Similarly, Hamas seeks to inspire hackers from around the Middle East to “resist” Israel, efforts that have resulted in Arab hackers defacing websites belonging to the Tel Aviv Stock Exchange and Israel’s national airline, El Al.80

In tandem with its embrace of patriotic hackers, Hamas seeks to multiply its propaganda efforts by enlisting the help of Palestinians on the street for less technical operations. To some extent, Hamas uses social media in similar ways to other terrorist organizations to inspire violence, urging Palestinians to attack Jews in Israel and the West Bank, for instance.81 However, the group goes a step further, encouraging Palestinians in Gaza to contribute to its efforts by providing guidelines for social media posting. The instructions, provided by Hamas’s Interior Ministry, detail how Palestinians should post about the conflict and discuss it with outsiders, including preferred terminology and practices such as, “Anyone killed or martyred is to be called a civilian from Gaza or Palestine, before we talk about his status in jihad or his military rank. Don’t forget to always add ‘innocent civilian’ or ‘innocent citizen’ in your description of those killed in Israeli attacks on Gaza.” Other instructions include, “Avoid publishing pictures of rockets fired into Israel from [Gaza] city centers. This [would] provide a pretext for attacking residential areas in the Gaza Strip.”82 Information campaigns like these extend beyond follower indoctrination, leaving a tangible mark on international public discourse and shaping the course of the conflict with Israel.

Hamas’s ability to leverage the cyber domain to shape the information landscape can have serious implications for geopolitics. Given the age and unpopularity of Palestinian President Mahmoud Abbas—polling shows that 80 percent of Palestinians want him to resign—as well as the fragile state of the Palestinian Authority,83 the Palestinian public’s desire for elections, and general uncertainty about the future, Hamas’s information operations can have a particularly potent effect on a discourse that is already contentious. The same can be said, to some extent, for the information environment in Israel, where political instability has resulted in five elections in just three and a half years.84 When executed strategically, information operations can play an influencing, if not deciding, role in electoral outcomes, as demonstrated by Russia’s interference in the 2016 US presidential election.85 A well-timed hack-and-leak operation, like Russia’s breach of the Democratic National Committee’s networks and dissemination of its emails, could significantly influence the momentum of political events in both Israel and Palestine.86 Continued failure to reach a two-state solution in the Israeli–Palestinian conflict will jeopardize Israel’s diplomatic relationships,87 as well as stability in the wider Middle East.88

4. Where do Hamas’s cyber operations go from here?

As outlined in its founding charter, as long as Hamas exists, it will place a premium on influencing audiences—friendly, adversarial, and undecided—and mobilizing them to bend political outcomes toward its ultimate objectives.89 Terrorism has been a central element of the group’s influence agenda, but cyber and information operations offer alternative and complementary options for engagement. It stands to reason that as Hamas’s cyber capabilities steadily evolve and improve, those of similar organizations will do the same.

Further Israeli efforts to curb terrorism through a cocktail of economic programs and advancements in defensive technologies, such as its integrated air defense system, raise questions about how evolving state countermeasures may change the incentive structures and calculi of Hamas and similar groups. There is no Iron Dome in cyberspace. Militant and terrorist organizations that integrate cyber and information operations into their repertoires are not changing their strategies; they are finding new means of achieving old goals. Important questions for future research include:

  • If states like Iran transfer increasingly advanced kinetic weaponry to terrorist organizations like Hamas, PIJ, Hezbollah, Kata’ib Hezbollah, and the Houthis, to what extent does this assistance extend to offensive cyber capabilities? What will this support look like in the future, and will these groups depend on state support to sustain their cyber operations?
  • What lessons is Hamas drawing from the past year of relative calm with Israel that may influence the cadence and variety of its cyber operations? How might these lessons influence similar organizations around the world?
  • What sorts of operations, such as financially motivated ransomware and cybercrime, has Hamas not engaged in? Will Hamas and comparable organizations learn from and adopt operations similar to those of other, differently motivated non-state actors?
  • What restrictions and incentives can the United States and its allies implement to curb the transfer of cyber capabilities to terrorist organizations?

Cyber capabilities are advancing rapidly worldwide, and more advanced technologies are increasingly accessible, enabling relatively weak actors to compete with strong actors like never before. Few controls exist to effectively counter this proliferation of offensive cyber capabilities, and the technical and financial barriers for organizations like Hamas to compete in this domain remain low.90 Whether by obtaining and deploying highly impactful tools or by developing relationships with hacking groups in third countries to carry out operations, Hamas will continue to expand the threat posed by its cyber and information capabilities.

Just like the group’s rocket terror program, which began with crude, short-range, and inaccurate Qassam rockets that the group cobbled together from scratch, Hamas’s cyber program began with rather unsophisticated tools. Over the years, as the group has obtained increasingly sophisticated, accurate, and long-range rockets from external benefactors like Iran, its cyber capabilities have likewise advanced in scale and sophistication.

Conclusion

Remarking on Hamas’s creative cyber campaigns, a lieutenant colonel in the IDF’s Cyber Directorate noted, “I’m not going to say they are not powerful or weak. They are interesting.”91 Observers should not view Hamas’s foray into cyber operations as an indication of a sudden organizational strategic shift. For its entire existence, the group has used terrorism as a means of garnering public attention and affecting the information environment, seizing strategic opportunities to influence the course of political events. As outside pressures change the group’s incentives to engage in provocative kinetic operations, cyber capabilities present alternative options for Hamas to advance its strategy. Hamas’s cyber capabilities will continue to advance, and the group will likely continue to leverage these tools in ways that will wield maximum influence over the information environment. Understanding how Hamas’s strategy and incentive structure guide its decisions to leverage offensive cyber operations can provide broader insights into how non-state actors develop and implement cyber tools, and into how the United States and its allies may be better able to counter these trends.

Acknowledgements

The author would like to thank several individuals, without whose support this report would not look the same. First and foremost, thank you to Trey Herr and Emma Schroeder, director and associate director of the Atlantic Council’s Cyber Statecraft Initiative, respectively, for helping from the start of this effort by participating in collaborative brainstorming sessions and providing extensive editorial feedback throughout. The author also owes a debt of gratitude to several individuals for generously offering their time to review various iterations of this document. Thanks to Ambassador Daniel Shapiro, Shanie Reichman, Yulia Shalomov, Stewart Scott, Madison Cullinan, and additional individuals who shall remain anonymous for valuable insights and feedback throughout the development of this report. Additionally, thank you to Valerie Bilgri for editing and Donald Partyka and Anais Gonzalez for designing the final document.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1     Michael Schmitt, “Normative Voids and Asymmetry in Cyberspace,” Just Security, December 29, 2014, https://www.justsecurity.org/18685/normative-voids-asymmetry-cyberspace/.
2     Emma Schroeder et al., Hackers, Hoodies, and Helmets: Technology and the Changing Face of Russian Private Military Contractors, Atlantic Council, July 25, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/technology-change-and-the-changing-face-of-russian-private-military-contractors; Cecile Schilis-Gallego and Nina Lakhani, “It’s a Free For All: How Hi-Tech Spyware Ends Up in the Hands of Mexico’s Cartels,” Guardian (UK), December 7, 2020, https://www.theguardian.com/world/2020/dec/07/mexico-cartels-drugs-spying-corruption.
3     The White House, National Security Strategy, October 2022, https://www.whitehouse.gov/wp-content/uploads/2022/10/Biden-Harris-Administrations-National-Security-Strategy-10.2022.pdf; Emma Schroeder, Stewart Scott, and Trey Herr, Victory Reimagined: Toward a More Cohesive US Cyber Strategy, Atlantic Council, June 14, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/victory-reimagined/.
4     Clare Stouffer, “15 Types of Hackers + Hacking Protection Tips for 2022,” Norton, May 2, 2022, https://us.norton.com/internetsecurity-emerging-threats-types-of-hackers.html#Greenhat.
5     Janne Hakala and Jazlyn Melnychuk, “Russia’s Strategy in Cyberspace,” NATO Strategic Communications Centre of Excellence, June 2021, https://stratcomcoe.org/cuploads/pfiles/Nato-Cyber-Report_15-06-2021.pdf.
6     Roy Iarchy and Eyal Rynkowski, “GoldenCup: New Cyber Threat Targeting World Cup Fans,” Broadcom Software, July 5, 2018, https://symantec-enterprise-blogs.security.com/blogs/expert-perspectives/goldencup-new-cyber-threat-targeting-world-cup-fans.
7     “Spyware,” MalwareBytes, https://www.malwarebytes.com/spyware.
8     Taylor Armerding, “Golden Cup App Was a World Cup of Trouble,” Synopsys, July 12, 2022, https://www.synopsys.com/blogs/software-security/golden-cup-app-world-cup-trouble/.
9     Yaniv Kubovich, “Hamas Cyber Ops Spied on Hundreds of Israeli Soldiers Using Fake World Cup, Dating Apps,” Haaretz, July 3, 2018, https://www.haaretz.com/israel-news/hamas-cyber-ops-spied-on-israeli-soldiers-using-fake-world-cup-app-1.6241773.
11     J.D. Work, Troubled Vision: Understanding Recent Israeli–Iranian Offensive Cyber Exchanges, Atlantic Council, July 22, 2020, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/troubled-vision-understanding-israeli-iranian-offensive-cyber-exchanges/.
12     Amos Harel, “How Deep Has Chinese Intelligence Penetrated Israel?” Haaretz, February 25, 2022, https://www.haaretz.com/israel-news/.premium-how-deep-has-chinese-intelligence-penetrated-israel-1.10633942.
13     “Propaganda, Extremism and Online Recruitment Tactics,” Anti-Defamation League, April 4, 2016, https://www.adl.org/education/resources/tools-and-strategies/table-talk/propaganda-extremism-online-recruitment.
14     Office of the Director of National Intelligence, Annual Threat Assessment of the US Intelligence Community, February 7, 2022, https://www.dni.gov/files/ODNI/documents/assessments/ATA-2022-Unclassified-Report.pdf.
15     National Security Archive, “USCYBERCOM After Action Assessments of Operation GLOWING SYMPHONY,” January 21, 2020, https://nsarchive.gwu.edu/briefing-book/cyber-vault/2020-01-21/uscybercom-after-action-assessments-operation-glowing-symphony.
16     The White House, National Strategy for Counterterrorism of the United States of America, October 2018, https://www.dni.gov/files/NCTC/documents/news_documents/NSCT.pdf.
17     “Hamas: The Palestinian Militant Group That Rules Gaza,” BBC, July 1, 2022, https://www.bbc.com/news/world-middle-east-13331522.
18    “The Covenant of the Islamic Resistance Movement,” August 18, 1988, https://avalon.law.yale.edu/20th_century/hamas.asp.
19    Gur Laish, “The Amorites Iniquity – A Comparative Analysis of Israeli and Hamas Strategies in Gaza,” Infinity Journal 2, no. 2 (Spring 2022), https://www.militarystrategymagazine.com/article/the-amorites-iniquity-a-comparative-analysis-of-israeli-and-hamas-strategies-in-gaza/.
20     Khaled Abu Toameh, “PA Popularity Among Palestinians at an All-Time Low,” Jerusalem Post, November 18, 2021, https://www.jpost.com/middle-east/pa-popularity-among-palestinians-at-an-all-time-low-685438.
21     “16 Killed in Suicide Bombings on Buses in Israel: Hamas Claims Responsibility,” CNN, September 1, 2004, http://edition.cnn.com/2004/WORLD/meast/08/31/mideast/.
22     “Hamas Rocket Fire a War Crime, Human Rights Watch Says,” BBC News, August 12, 2021, https://www.bbc.com/news/world-middle-east-58183968.
23     Isabel Kershner, “Hamas Militants Take Credit for Sniper Attack,” New York Times, March 20, 2007, https://www.nytimes.com/2007/03/20/world/middleeast/19cnd-mideast.html.
24     “Hamas Operatives Launch Incendiary Balloons into Israel,” AP News, September 4, 2021, https://apnews.com/article/technology-middle-east-africa-israel-hamas-6538690359c8de18ef78d34139d05535.
25     Mai Abu Hasaneen, “Israel Targets Hamas Leader after Call to Attack Israelis with ‘Cleaver, Ax or Knife,’” Al-Monitor, May 15, 2022, https://www.al-monitor.com/originals/2022/05/israel-targets-hamas-leader-after-call-attack-israelis-cleaver-ax-or-knife.
26     Ralph Ellis and Michael Schwartz, “Mom Speaks Out on 3 Abducted Teens as Israeli PM Blames Hamas,” CNN, June 15, 2014, https://www.cnn.com/2014/06/15/world/meast/west-bank-jewish-teens-missing.
27     The Palestinian National Authority (PA) is the official governmental body of the State of Palestine, exercising administrative and security control over Area A of the Palestinian Territories, and only administrative control over Area B of the Territories. The PA is controlled by Fatah, Hamas’s most significant political rival, and is the legitimate ruler of the Gaza Strip, although Hamas exercises de facto control of the territory.
28     The Palestine Liberation Organization (PLO) is the political organization that is broadly recognized by the international community as the sole legitimate representative of the Palestinian people. The PLO recognizes Israel, setting it apart from Hamas, which is not a member of the organization.
29    Hamas is designated as a foreign terrorist organization by the US State Department and has earned similar designations from dozens of other countries and international bodies, including Australia, Canada, the European Union, the Organization of American States, Israel, Japan, New Zealand, and the United Kingdom. Jotam Confino, “Calls to Assassinate Hamas Leadership as Terror Death Toll Reaches 19,” Jewish Chronicle, May 12, 2022, https://www.thejc.com/news/world/calls-to-assassinate-hamas-leadership-as-terror-death-tolls-reaches-19-19wCeFxlx3w40gFCKQ9xSx; Byron Kaye, “Australia Lists All of Hamas as a Terrorist Group,” Reuters, March 4, 2022, https://www.reuters.com/world/middle-east/australia-lists-all-hamas-terrorist-group-2022-03-04; Public Safety Canada, “Currently Listed Entities,” Government of Canada, https://www.publicsafety.gc.ca/cnt/ntnl-scrt/cntr-trrrsm/lstd-ntts/crrnt-lstd-ntts-en.aspx; “COUNCIL IMPLEMENTING REGULATION (EU) 2020/19 of 13 January 2020 implementing Article 2(3) of Regulation (EC) No 2580/2001 on Specific Restrictive Measures Directed Against Certain Persons and Entities with a View to Combating Terrorism, and Repealing Implementing Regulation (EU) 2019/1337,” Official Journal of the European Union, January 13, 2020, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L:2020:008I:FULL&from=EN; Organization of American States, “Qualification of Hamas as a Terrorist Organization by the OAS General Secretariat,” May 17, 2021, https://www.oas.org/en/media_center/press_release.asp?sCodigo=E-051/21; Ministry of Foreign Affairs, “Japan’s Foreign Policy in Major Diplomatic Fields,” Japan, 2005, https://www.mofa.go.jp/policy/other/bluebook/2005/ch3-a.pdf; “UK Parliament Approves Designation of Hamas as a Terrorist Group,” Haaretz, November 26, 2021, https://www.haaretz.com/israel-news/.premium-u-k-parliament-approves-designation-of-hamas-as-a-terrorist-group-1.10419344.
30     Nathan R. Stein et al., “The Differential Impact of Terrorism on Two Israeli Communities,” American Journal of Orthopsychiatry, American Psychological Association, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3814032/.
31     Robert A. Pape, “The Strategic Logic of Suicide Terrorism,” The American Political Science Review, August 2003, https://www.jstor.org/stable/3117613?seq=6#metadata_info_tab_contents.
32     “Arabs Celebrate Israeli Withdrawal,” South Florida Sun-Sentinel, October 26, 1995, https://www.sun-sentinel.com/news/fl-xpm-1995-10-26-9510260008-story.html.
33    Brent Sadler, “Suicide Bombings Scar Peres’ Political Ambitions,” CNN, May 28, 1996, http://www.cnn.com/WORLD/9605/28/israel.impact/index.html.
34    Akiva Eldar, “The Power Hamas Holds Over Israel’s Elections,” Al-Monitor, February 11, 2020, https://www.al-monitor.com/originals/2020/02/israel-us-palestinians-hamas-donald-trump-peace-plan.html.
35    Yoram Schweitzer, “The Rise and Fall of Suicide Bombings in the Second Intifada,” The Institute for National Security Studies, October 2010, https://www.inss.org.il/wp-content/uploads/sites/2/systemfiles/(FILE)1289896644.pdf; Beverley Milton-Edwards and Stephen Farrell, Hamas: The Islamic Resistance Movement (Polity Press, 2013), https://www.google.com/books/edition/Hamas/ozLNNbwqlAEC?hl=en&gbpv=1.
36    Ministry of Foreign Affairs, “Rocket Fire from Gaza and Ceasefire Violations after Operation Cast Lead (Jan 2009),” State of Israel, March 16, 2016, https://embassies.gov.il/MFA/FOREIGNPOLICY/Terrorism/Pages/Palestinian_ceasefire_violations_since_end_Operation_Cast_Lead.aspx.
37    “PA: Hamas Rockets Are Bid to Sway Israeli Election,” Associated Press, September 2, 2009, https://web.archive.org/web/20090308033654/http://haaretz.com/hasen/spages/1062761.html.
38     National Consortium for the Study of Terrorism and Responses to Terrorism, “Global Terrorism Database,” University of Maryland, https://www.start.umd.edu/gtd/search/Results.aspx?page=2&casualties_type=&casualties_max=&perpetrator=838&count=100&expanded=yes&charttype=line&chart=overtime&ob=GTDID&od=desc#results-table
39     US Congress, House of Representatives, Subcommittee on the Middle East and North Africa and Subcommittee on Terrorism, Nonproliferation, and Trade, Hamas Benefactors: A Network of Terror, Joint Hearing before the Subcommittee on the Middle East and North Africa and the Subcommittee on Terrorism, Nonproliferation, and Trade of the Committee on Foreign Affairs, 113th Congress, September 9, 2014, https://www.govinfo.gov/content/pkg/CHRG-113hhrg89738/html/CHRG-113hhrg89738.htm.
40     “Hamas Faces Risk, Opportunity from Warming Israel–Turkey Ties,” France 24, March 16, 2022, https://www.france24.com/en/live-news/20220316-hamas-faces-risk-opportunity-from-warming-israel-turkey-ties; Sean Mathews, “Israeli Military Officials Sent to Qatar as US Works to Bolster Security Cooperation,” Middle East Eye, July 8, 2022, https://www.middleeasteye.net/news/qatar-israel-military-officials-dispatched-amid-us-efforts-bolster-security.
41     Nitsana Darshan-Leitner, “Qatar is Financing Palestinian Terror and Trying to Hide It,” Jerusalem Post, February 18, 2022, https://www.jpost.com/opinion/article-696824.
42     Shahar Klaiman, “Qatar Pledges $500M to Rebuild Gaza, Hamas Vows Transparency,” Israel Hayom, May 27, 2021, https://www.israelhayom.com/2021/05/27/qatar-pledges-500m-to-gaza-rebuild-hamas-vows-transparency; Jodi Rudoren, “Qatar Emir Visits Gaza, Pledging $400 Million to Hamas,” New York Times, October 23, 2012, https://www.nytimes.com/2012/10/24/world/middleeast/pledging-400-million-qatari-emir-makes-historic-visit-to-gaza-strip.html.
43     Adam Taylor, “With Strikes Targeting Rockets and Tunnels, the Israeli Tactic of ‘Mowing the Grass’ Returns to Gaza,” May 14, 2021, https://www.washingtonpost.com/world/2021/05/14/israel-gaza-history/.
44     “What Just Happened in Gaza?” Israel Policy Forum, YouTube, https://www.youtube.com/watch?v=XqHjQo0ybvM&t=59s.
45     Michael Koplow, “Proof of Concept for a Better Gaza Policy,” Israel Policy Forum, August 11, 2022, https://israelpolicyforum.org/2022/08/11/proof-of-concept-for-a-better-gaza-policy; Tani Goldstein, “The Number of Workers from Gaza Increased, and the Peace Was Maintained,” Zman Yisrael, April 4, 2022, https://www.zman.co.il/302028/popup/.
46     Aaron Boxerman, “Israel to Allow 2,000 More Palestinian Workers to Enter from Gaza,” Times of Israel, June 16, 2022, https://www.timesofisrael.com/israel-to-allow-2000-more-palestinian-workers-to-enter-from-gaza/.
47     “Operation Breaking Dawn Overview,” Israel Policy Forum, August 8, 2022, https://israelpolicyforum.org/2022/08/08/operation-breaking-dawn-overview/.
48     Aaron Boxerman, “Hamas’s Sinwar Threatens a ‘Regional, Religious War’ if Al-Aqsa is Again ‘Violated,’” Times of Israel, April 30, 2022, https://www.timesofisrael.com/sinwar-warns-israel-hamas-wont-hesitate-to-take-any-steps-if-al-aqsa-is-violated/.
49     Safa Shahwan Edwards and Simon Handler, “The 5×5—How Retaliation Shapes Cyber Conflict,” Atlantic Council, https://www.atlanticcouncil.org/commentary/the-5×5-how-retaliation-shapes-cyber-conflict/.
50     Andrew Phillips, “The Asymmetric Nature of Cyber Warfare,” USNI News, October 14, 2012, https://news.usni.org/2012/10/14/asymmetric-nature-cyber-warfare.
51    “Gaza: ICRC Survey Shows Heavy Toll of Chronic Power Shortages on Exhausted Families,” International Committee of the Red Cross, July 29, 2021, https://www.icrcnewsroom.org/story/en/1961/gaza-icrc-survey-shows-heavy-toll-of-chronic-power-shortages-on-exhausted-families.
52    Daniel Avis and Fadwa Hodali, “World Bank to Israel: Let Palestinians Upgrade Mobile Network,” Bloomberg, February 8, 2022, https://www.bloomberg.com/news/articles/2022-02-08/world-bank-to-israel-let-palestinians-upgrade-mobile-network.
53    Israel Defense Forces (@IDF), “CLEARED FOR RELEASE: We thwarted an attempted Hamas cyber offensive against Israeli targets. Following our successful cyber defensive operation, we targeted a building where the Hamas cyber operatives work. HamasCyberHQ.exe has been removed,” Twitter, May 5, 2019, https://twitter.com/IDF/status/1125066395010699264.
54    Zak Doffman, “Israel Responds to Cyber Attack with Air Strike on Cyber Attackers in World First,” Forbes, May 6, 2019, https://www.forbes.com/sites/zakdoffman/2019/05/06/israeli-military-strikes-and-destroys-hamas-cyber-hq-in-world-first/?sh=654fbba9afb5.
55    “Turkey Said to Grant Citizenship to Hamas Brass Planning Attacks from Istanbul,” Times of Israel, August 16, 2020, https://www.timesofisrael.com/turkey-said-to-grant-citizenship-to-hamas-brass-planning-attacks-from-istanbul/.
56    Anshel Pfeffer, “Hamas Uses Secret Cyberwar Base in Turkey to Target Enemies,” Times (UK), October 22, 2020, https://www.thetimes.co.uk/article/hamas-running-secret-cyberwar-hq-in-turkey-29mz50sxs.
57    David Shamah, “Qatari Tech Helps Hamas in Tunnels, Rockets: Expert,” Times of Israel, July 31, 2014, https://www.timesofisrael.com/qatari-tech-helps-hamas-in-tunnels-rockets-expert; Dion Nissenbaum, Sune Engel Rasmussen, and Benoit Faucon, “With Iranian Help, Hamas Builds ‘Made in Gaza’ Rockets and Drones to Target Israel,” Wall Street Journal, May 20, 2021, https://www.wsj.com/articles/with-iranian-help-hamas-builds-made-in-gaza-rockets-and-drones-to-target-israel-11621535346.
58     “Internal Security Force (ISF) – Hamas,” Mapping Palestinian Politics, European Council on Foreign Relations, https://ecfr.eu/special/mapping_palestinian_politics/internal_security_force/.
59     “Operation Arid Viper: Bypassing the Iron Dome,” Trend Micro, February 16, 2015, https://www.trendmicro.com/vinfo/es/security/news/cyber-attacks/operation-arid-viper-bypassing-the-iron-dome; “Sexually Explicit Material Used as Lures in Recent Cyber Attacks,” Trend Micro, February 18, 2015, https://www.trendmicro.com/vinfo/us/security/news/cyber-attacks/sexually-explicit-material-used-as-lures-in-cyber-attacks?linkId=12425812.
60     “Operation Arid Viper Slithers Back into View,” Proofpoint, September 18, 2015, https://www.proofpoint.com/us/threat-insight/post/Operation-Arid-Viper-Slithers-Back-Into-View.
61     “Hamas Uses Fake Facebook Profiles to Target Israeli Soldiers,” Israel Defense Forces, February 2, 2017, https://www.idf.il/en/minisites/hamas/hamas-uses-fake-facebook-profiles-to-target-israeli-soldiers/.
62     Yossi Melman, “Hamas Attempted to Plant Spyware in ‘Red Alert’ Rocket Siren App,” Jerusalem Post, August 14, 2018, https://www.jpost.com/arab-israeli-conflict/hamas-attempted-to-plant-spyware-in-red-alert-rocket-siren-app-564789.
63     “Hamas Android Malware on IDF Soldiers—This is How it Happened,” Checkpoint, February 16, 2020, https://research.checkpoint.com/2020/hamas-android-malware-on-idf-soldiers-this-is-how-it-happened/.
64     Yaniv Kubovich, “Hamas Cyber Ops Spied on Hundreds of Israeli Soldiers Using Fake World Cup, Dating Apps,” Haaretz, July 3, 2018, https://www.haaretz.com/israel-news/hamas-cyber-ops-spied-on-israeli-soldiers-using-fake-world-cup-app-1.6241773; Ben Caspit, “Gilad Shalit’s Capture, in His Own Words,” Jerusalem Post, March 30, 2013, https://www.jpost.com/features/in-thespotlight/gilad-schalits-capture-in-his-own-words-part-ii-308198.
65     Omer Benjakob, “Exposed Hamas Espionage Campaign Against Israelis Shows ‘New Levels of Sophistication,’” Haaretz, April 7, 2022, https://www.haaretz.com/israel-news/tech-news/2022-04-07/ty-article/.premium/exposed-hamas-espionage-campaign-shows-new-levels-of-sophistication/00000180-5b9c-dc66-a392-7fdf14ff0000.
66     Cybereason Nocturnus, “Operation Bearded Barbie: APT-C-23 Campaign Targeting Israeli Officials,” Cybereason, April 6, 2022, https://www.cybereason.com/blog/operation-bearded-barbie-apt-c-23-campaign-targeting-israeli-officials?hs_amp=true.
67     Cybereason Nocturnus, “New Malware Arsenal Abusing Cloud Platforms in Middle East Espionage Campaign,” Cybereason, December 9, 2020, https://www.cybereason.com/blog/new-malware-arsenal-abusing-cloud-platforms-in-middle-east-espionage-campaign.
68     Sean Lyngaas, “Hackers Leverage Facebook, Dropbox to Spy on Egypt, Palestinians,” December 9, 2020, CyberScoop, https://www.cyberscoop.com/molerats-cybereason-gaza-espionage-palestine/.
69     Adnan Abu Amer, “Hamas Holds Internal Elections Ahead of Palestinian General Elections,” Al-Monitor, February 26, 2021, https://www.al-monitor.com/originals/2021/02/hamas-internal-elections-gaza-west-bank-palestinian.html.
71     “Hamas Kills 22 Suspected ‘Collaborators,’” Times of Israel, August 22, 2014, https://www.timesofisrael.com/hamas-said-to-kill-11-suspected-collaborators; “Hamas Executes Three ‘Israel Collaborators’ in Gaza,” BBC, April 6, 2017, https://www.bbc.com/news/world-middle-east-39513190.
72     James Shires, “Hack-and-Leak Operations and US Cyber Policy,” War on the Rocks, August 14, 2020, https://warontherocks.com/2020/08/the-simulation-of-scandal/.
73     Ben Tufft, “Hamas Claims it Hacked IDF Computers to Leak Sensitive Details of Previous Operations,” Independent, December 14, 2014, https://www.independent.co.uk/news/world/middle-east/hamas-claims-it-hacked-idf-computers-to-leak-sensitive-details-of-previous-operations-9923742.html.
74     Tova Dvorin, “Hamas: ‘We Hacked into IDF Computers,’” Israel National News, December 14, 2014, https://www.israelnationalnews.com/news/188618#.VI2CKiusV8E
75     Ari Yashar, “IDF Kills Hamas Terrorists Who Breached Border,” Israel National News, July 8, 2014, https://www.israelnationalnews.com/news/182666; Gil Ronen and Tova Dvorin, “Terrorists Tunnel into Israel: Two Soldiers Killed,” Israel National News, July 19, 2014, https://www.israelnationalnews.com/news/183076.
76     “Website Defacement Attack,” Imperva, https://www.imperva.com/learn/application-security/website-defacement-attack/.
77     Omer Dostri, “Hamas Cyber Activity Against Israel,” The Jerusalem Institute for Strategy and Security, October 15, 2018, https://jiss.org.il/en/dostri-hamas-cyber-activity-against-israel/.
78     WAQAS, “Israel’s Channel 10 TV Station Hacked by Hamas,” Hackread, July 16, 2014, https://www.hackread.com/hamas-hacks-israels-channel-10-tv-station/.
79     Joseph Marks, “Ukraine is Turning to Hacktivists for Help,” Washington Post, March 1, 2022, https://www.washingtonpost.com/politics/2022/03/01/ukraine-is-turning-hacktivists-help/.
80     “Israeli Websites Offline of ‘Maintenance’ as Hamas Praises Hackers,” The National, January 15, 2012, https://www.thenationalnews.com/world/mena/israeli-websites-offline-of-maintenance-as-hamas-praises-hackers-1.406178.
81     Dov Lieber and Adam Rasgon, “Hamas Media Campaign Urges Attacks on Jews by Palestinians in Israel and West Bank,” Wall Street Journal, May 2, 2022, https://www.wsj.com/articles/hamas-media-campaign-urges-attacks-on-jews-by-palestinians-in-israel-and-west-bank-11651511641.
82     “Hamas Interior Ministry to Social Media Activists: Always Call the Dead ‘Innocent Civilians’; Don’t Post Photos of Rockets Being Fired from Civilian Population Centers,” Middle East Media Research Institute, July 17, 2014, https://www.memri.org/reports/hamas-interior-ministry-social-media-activists-always-call-dead-innocent-civilians-dont-post#_edn1.
83     Joseph Krauss, “Poll Finds 80% of Palestinians Want Abbas to Resign,” AP News, September 21, 2021, https://apnews.com/article/middle-east-jerusalem-israel-mahmoud-abbas-hamas-5a716da863a603ab5f117548ea85379d.
84     Patrick Kingsley and Isabel Kershner, “Israel’s Government Collapses, Setting Up 5th Election in 3 Years,” New York Times, June 20, 2022, https://www.nytimes.com/2022/06/20/world/middleeast/israel-election-government-collapse.html.
85     Patrick Howell O’Neill, “Why Security Experts Are Braced for the Next Election Hack-and-Leak,” MIT Technology Review, September 29, 2020, https://www.technologyreview.com/2020/09/29/1009101/why-security-experts-are-braced-for-the-next-election-hack-and-leak/.
86     Eric Lipton, David E. Sanger, and Scott Shane, “The Perfect Weapon: How Russian Cyberpower Invaded the US,” New York Times, December 13, 2016, https://www.nytimes.com/2016/12/13/us/politics/russia-hack-election-dnc.html.
87     Ben Samuels, “No Normalization with Israel Until Two-State Solution Reached, Saudi FM Says,” Haaretz, July 16, 2022, https://www.haaretz.com/middle-east-news/2022-07-16/ty-article/.premium/no-normalization-with-israel-until-two-state-solution-reached-saudi-fm-says/00000182-0614-d213-adda-17bd7b2d0000.
88     Ibrahim Fraihat, “Palestine: Still Key to Stability in the Middle East,” Brookings Institution, January 28, 2016, https://www.brookings.edu/opinions/palestine-still-key-to-stability-in-the-middle-east/.
89     Israel Foreign Ministry, “The Charter of Allah: The Platform of the Islamic Resistance Movement (Hamas),” Information Division, https://irp.fas.org/world/para/docs/880818.htm.
90     “The Proliferation of Offensive Cyber Capabilities,” Cyber Statecraft Initiative, Digital Forensic Research Lab, Atlantic Council, https://www.atlanticcouncil.org/programs/digital-forensic-research-lab/cyber-statecraft-initiative/the-proliferation-of-offensive-cyber-capabilities/.
91     Neri Zilber, “Inside the Cyber Honey Traps of Hamas,” The Daily Beast, March 1, 2020, https://www.thedailybeast.com/inside-the-cyber-honey-traps-of-hamas.

The post The cyber strategy and operations of Hamas: Green flags and green hats appeared first on Atlantic Council.

The 5×5—Non-state armed groups in cyber conflict https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-non-state-armed-groups-in-cyber-conflict/ Wed, 26 Oct 2022 04:01:00 +0000 https://www.atlanticcouncil.org/?p=579094 Five experts from various backgrounds assess the emerging threats posed by non-state armed groups in cyber conflict.

The post The 5×5—Non-state armed groups in cyber conflict appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Non-state organizations native to cyberspace, like patriotic hacking collectives and ransomware groups, continue to impact geopolitics through cyber operations. But, increasingly, non-state armed groups with histories rooted entirely in kinetic violence are adopting offensive cyber capabilities to further their strategic objectives. Each of these groups has its own motivations for acquiring these capabilities and its own strategy for employing them, making it difficult for the United States and its allies to develop effective countermeasures. In Ukraine, the Russian government is increasingly outsourcing military activities to private military companies, such as the Wagner Group, and it may continue to do so for cyber and information operations. In Mexico, drug cartels are purchasing state-of-the-art malware to target journalists and other opponents. Elsewhere, militant and terrorist organizations such as Hezbollah and Boko Haram have employed cyber capabilities to bolster their existing operations and efficacy in violence against various states.

The proliferation of offensive cyber capabilities and low barriers to acquiring and deploying some of these powerful tools suggest that the cyber capacities of non-state armed groups will only continue to grow. We brought together five experts from various backgrounds to assess the emerging cyber threats posed by non-state armed groups and discuss how the United States and its allies can address them.

#1 How significant is the cyber threat posed by non-state armed groups to the United States and its allies? What kinds of entities should they be concerned about?

Sean McFate, nonresident senior fellow, Africa Center, Atlantic Council; professor, Georgetown University’s Walsh School of Foreign Service and the National Defense University:

“Currently, the most powerful non-state armed groups that use cyber do it on behalf of a state, offering a modicum of plausible deniability. For example, The Concord Group in Russia is owned by Yevgeny Prigozhin, an oligarch close to Putin. Under the Concord Group are the Wagner Group (mercenaries) and the Internet Research Agency, also known as “the troll farm.” Outsourcing these capabilities lowers the barrier of entry into modern conflicts and allows the Kremlin to pursue riskier stratagems.”

Steph Shample, non-resident scholar, Cyber Program, Middle East Institute; senior analyst, Team Cymru:

“The cyber threat posed by independent actors or criminal groups—not advanced persistent threats (APTs)—is high, and the first impact is primarily financial. Ransomware flourishes among non-state groups, and can make these actors, at times, millions of dollars. Consider the SamSam ransomware operations, carried out by Iranian nationals. According to the publicized indictments, the two actors were not found to have ties to the Iranian government, but they took in $6 million in profit—and that is just what was traceable. The second impact is reputational damage for businesses. Once they are impacted by a cyber incident, building the trust of users back is often more difficult than recouping financial loss. Entities to worry about include fields and industries that do not have robust cyber protection or excessive funds, as malicious actors often go after them. These industries include academia, healthcare, and smaller government entities like cities and municipalities.”

Aaron Brantly, associate professor of political science and director, Tech4Humanity lab, Virginia Tech:

“Non-state armed groups do not pose a significant cyber threat at present to the United States and its allies. There are very few examples of non-state actors not affiliated or acting as proxies for states that have the capacity to develop and utilize vulnerabilities to achieve substantial effect. The threat posed by these groups increases when they act as proxies and leverage state capacity and motivation. It is conceivable that non-state armed groups may use cyberattacks to engage in criminal attacks to achieve financial benefits to fund kinetic activities. Yet, developing the capacity to carry out armed attacks and cyberattacks often requires members with different skillsets.”

Maggie Smith, research scientist and assistant professor, Army Cyber Institute, United States Military Academy:

The views expressed are those of the author, and do not reflect the official position of the Army Cyber Institute, United States Military Academy, Department of the Army, or Department of Defense.

“I find the most confounding factor of non-state groups to be their motivations for attacking particular targets. Motivations can be financial, ideological, religious, grievance-based, or entities could be targeted for fun—the options are endless and they are not static. Therefore, our traditional intelligence and the indicators and warnings that typically tip and cue us to threats may not be there. This makes defending against non-state actors that much more unpredictable, confusing, and challenging than defending against states.”

Jon Lindsay, associate professor, School of Cybersecurity and Privacy, Georgia Institute of Technology (Georgia Tech):

“The greatest threat to the United States remains other nuclear-armed states, as well as collective existential threats like climate change and pandemics. Non-state actors are a serious but less severe threat, and cyber is the least severe tool in their kits. Cyber is a minor feature of a minor threat to the United States and its allies.”

#2 How do strategies vary among different types of non-state armed groups and compare with those of states when it comes to cyber capabilities?

Lindsay: “A really interesting feature of the cyber revolution is the democratization of deception. The classic strategies of intelligence—espionage, subversion, disinformation, counterintelligence, and secret diplomacy—that were once practiced mainly by states are now within reach of many actors. The more interesting variation may be in capabilities—states can do more for many reasons—than in strategy. Like it or not, we are all actors, intermediaries, and targets of intelligence.”

McFate: “Outsourcing cyber threats allows states to circumnavigate international and domestic laws. This creates moral hazard in foreign policymaking because it lessens the likelihood of punishment by the international community.”

Brantly: “Whether terrorist organizations or insurgencies, armed groups historically use violence to achieve effects. The strategy of armed groups is to shift the public view of an organization or issue in such a way as to compel a state actor to respond. Cyber threats do not achieve the same level of visibility that kinetic violence does, and are therefore strategically and tactically less useful to non-state groups. By contrast, state actors seek intelligence and signaling capabilities that control escalation. Because cyberattacks are frequently considered less impactful due to several factors including reversibility, levels of violence, etc., they are a robust tool to enable broader strategic objectives.”

Shample: “There is often overlap. If we again think about APT groups, or those directly sponsored by state governments—the “big four” US adversaries include Iran, China, North Korea, and Russia. All of these countries have mandatory conscription, so all men (and in selective cases, women) have to serve in these countries’ militaries. That mandatory military training can be fulfilled by going through one of their cyber academies and acting as what the United States and Five Eyes community considers a “malicious cyber actor.” Mandatory service is completed eventually, but then these actors can go and act on their own accord, using the training they received to cover their online tracks. State-trained individuals become part of the non-state actor community. They take their learned skills, they share them with other actors on forums and chat platforms, and voila. With training and sophistication, along with a way to evade tracking from their home countries, these individuals continue to improve their skills and networks online, which is a very serious problem. They are sophisticated and able to keep acting in a criminal capacity. The more sophisticated actors can also sell ready-to-use kits, such as Ransomware-as-a-Service, phishing kits, and so on that are premade and do not take high skill to use. The trained malicious actor can not only act independently, but they could have an additional stream of revenue selling kits and supplies to other malicious actors. It is an entire underground ecosystem that I see on closed forums all the time.”

Smith: “One difference is that strategies are more ad hoc or responsive and shift when a non-state group’s motivation for attacking changes. For example, Killnet, the now-infamous pro-Russian hacker group that has been conducting distributed denial-of-service attacks (DDoS) against European nations since March, started off as a DDoS tool that criminal and threat actors could purchase. Just after updating the version of the tool in March, the non-state, but pro-Russian criminals behind Killnet pulled the tool offline and declared that the name was now an umbrella term applied to hacktivism against Russia’s enemies.”

#3 What makes cyber capabilities attractive (or not) to these kinds of non-state groups?

Lindsay: “The obvious answer: cyber tools are low cost and low risk. Cyber becomes an attractive option to actors that lack the means or resolve to use more effective instruments of power. The more that an actor is concerned about adverse consequences like retaliation, punishment, and law enforcement, the more likely they are to use cyber capabilities.”

McFate: “Cyber is important, but not in ways people often think. It gives us new ways of doing old things: sabotage, theft, propaganda, deception, and espionage. Cyber war’s real power is malign information, not sabotage like Stuxnet. In an information age, disinformation is more important than firepower. Who cares about the sword if you can manipulate the mind that wields it?”

Brantly: “Cyber capabilities are less attractive to non-state armed groups because their cost-to-impact ratio is less than kinetic violence. At present, insurgents are unlikely to win by using cyberattacks, and terrorist organizations are unlikely to draw the desired levels of attention to their cause through cyber means that they would by comparable kinetic means. Where attacks disrupt, embarrass an adversary, or facilitate financial concerns of non-state armed groups, such attacks are more likely.”

Shample: “Pseudo-anonymity, of course. They can act from anywhere, target any entity, use obfuscation technology to cover their tracks, and target cryptocurrency to raise money. First, they can cover their tracks completely/partially. Second, they may have enough obscurity to provide plausible cover and not be officially tracked and charged, despite suspicion. Third, they can make a decent amount of money and/or cause damage without any personal harm that comes back to themselves. Fourth, they are able to be impactful and gain notoriety amongst the criminal contingent. The criminal underground is very ego driven, so if an actor can successfully impact a large business or organization, and in so doing make world-wide news, this only helps them gain traction and followers in their community. And they build, keep learning, and repeat, fueled by their financial success and notoriety.”

Smith: “Cyber capabilities are attractive for a lot of reasons—e.g., they can be executed remotely, purchased, and obfuscated, and they are difficult to positively attribute, among other attributes that make them easier to execute than a kinetic attack—but if I were a malicious cyber actor, I would be in the business because nation-states are still figuring out how to respond to cyberattacks. There is no internationally agreed-upon definition of what constitutes a cyberattack or of when a cyberattack becomes an act of war, nor any concrete estimation of what a proportional response to a cyberattack should be. Additionally, the legal mechanisms for prosecuting cyber activities are still being developed, so for a criminal, the fuzziness and the ability to attack an asset within a country without clear consequences are very attractive—especially when law enforcement cyber capabilities are stretched thin and the courts have yet to catch up to technology (or have judges who do not understand the technology used in a case).”

#4 Where does existing theory or policy fall short in addressing the risks posed by the offensive cyber operations of non-state armed groups?

Lindsay: “Generally, we need more theory and empirical research about intelligence contests of any kind. Secret statecraft, and not only by states, is an understudied area in security studies, and it is also a hot research frontier. I do think that the conventional wisdom tends to overstate the threat of cyber from any kind of group, but it is consistent with the paranoid style of American politics.” 

McFate: “How many conferences have you been to where ‘experts’ bicker about whether a cyberattack constitutes war or not? Who cares? US policymakers and academic theorists think about war like pregnancy: you either are or are not. But, in truth, there is no such thing as war or peace; it is really war and peace. Our adversaries do not suffer from this bizarre false dichotomy and exploit our schizoid view of international relations. They wage war but disguise it as peace to us. Cyberattacks are perfect weapons because we spend more time on definitions than on solutions. We need more supple minds at the strategic helm.” 

Brantly: “Many scholars have focused on proxy actors operating in and through cyberspace. The theories and policies developed on the motivations and actions of proxies are robust. This subfield has grown substantially within the last three to four years. Some theorizing has focused on the use of cyber means by terrorist organizations, but most of the research in this area has been speculative. Little theorizing has been done on the use of cyberattacks by non-state armed groups that are not operating as proxies or terrorist organizations. Although there are few examples of such organizations using cyberattacks, increased analysis of this area is potentially warranted.” 

Shample: “The United States and its allies are overly focused on state-sponsored actors. This is because they can issue sanctions against state-tied actors and hold press conferences full of pomp and circumstance. They ignore the criminal contingent because they usually cannot publicly sanction them. This is short-sighted. The United States needs to combine its intelligence and military efforts to focus on all malicious actors: state-sponsored groups, criminal groups, and independent individuals. Stop worrying about sanctions—malicious APTs often laugh at sanctions from countries that cannot extradite them, and the sanctions will quite literally never impact them. They joke about them on underground forums and then continue attacking.” 

Smith: “An area that I am working on is the threats posed by non-state actors during periods of conflict—even ones that we cheer on from afar. The Russian invasion of Ukraine and the subsequent rise of the Ukrainian IT Army and pro-Russian groups like Killnet really complicate the conflict and have shown how organized non-military, non-state-sponsored, and mixed-nationality groups can have a direct impact on the modern battlefield. For entities like US Cyber Command and our foreign counterparts, this is an area of concern, as it is really the modern instantiation of civilians on the battlefield. When do those civilians become enemy combatants, and how do we deal with them? Those questions are not answered yet, and they are further complicated by the various motivations among groups that I discussed above.”

#5 How can the United States and its allies address the cyber threats posed by the many disparate non-state armed groups around the world?

Lindsay: “We should start by accepting that cyber conflict is both inevitable and tolerable. Cyberattacks are part of the societal search algorithm for identifying vulnerabilities that need to be patched, which helps us to build a better society. The United States and its allies should continue to work on the low-hanging fruit of cybercrime, privacy, and intelligence coordination (which are not really hanging that low), rather than focusing on bigger but more mythical threats. The small stuff will help with the big stuff.” 

McFate: “Three ways. First, better defense. Beyond the ‘ones and zeros’ warriors, we need to find ways to make Americans smarter consumers of information. Second, we need to get far more aggressive in our response. I feel like the United States is a goalie at a penalty shootout. If you want to deter cyberattacks, then start punching back hard until the bullies stop. Destroy problematic servers. Go after the people connected to them. Perhaps the United States should explore getting back into the dark arts, as it did during the Cold War. Lastly, enlist the private sector. ‘Hack back’ companies can chase down hackers like privateers. It is crazy in 2022 that we do not allow this, especially since the National Security Agency does not protect multinational corporations or civil society’s cybersecurity.”

Brantly: “The United States and its allies have already addressed cyber threats posed by different groups through the establishment of civilian and military organizations designed to identify and counter all manner of cyber threats. The United States has pushed out security standards through the National Institute of Standards and Technology, and US Cyber Command and the military cyber commands have worked to provide continuous intelligence on the cyber activities of potential adversaries. Continuing to strengthen organizations and standards that identify and counter cybersecurity threats remains important. Building norms around what is and is not acceptable behavior in cyberspace, and around what constitute critical cybersecurity practices among public- and private-sector actors, will continue to constrain malicious behavior within this evolving domain of interaction. There is no single golden solution. Rather, addressing cybersecurity threats posed by all manner of actors requires multiple ongoing, concurrent policy, regulatory, normative, and organizational actions.”

Shample: “If all entities working cyber operations (law enforcement, intelligence, and military) worked together and with the private sector more, the world would benefit. The private sector can move more quickly when it comes to changing infrastructure and tracking malicious actors. Cyber criminals know they need to set up, act, and then usually tear down their infrastructure, change it, and rebuild from scratch to avoid tracking. Cyber truly takes all kinds of people working together to be effective. There is too much focus on state-sponsored versus criminal actors, and too much information goes unshared among practitioners. Counterterrorism-focused analysis needs to be combined with efforts against weapons trafficking, human trafficking, and narcotics, all of which come back to a financial focus. Terrorists like ISIS and others have been observed funding their operations by selling weapons, drugs, or humans, and then putting those funds into cryptocurrency. We have pillars of specialists who each focus on one area, but there need to be more combined efforts rather than siloed, singular-focused ones. Underground forums need to be monitored; Telegram, Discord, and dark web forums all need more attention. There needs to be a collective effort to combat serious cyber threats, rather than divided efforts and ‘separate’ tracking. Government, military, and law enforcement need to work with the private sector and share the appropriate amount of information to take down criminal networks. There are too many solo efforts, and it will take a collective one to truly eradicate malicious cyber criminals.”

Smith: “First, there is no silver bullet because there are so many variables to consider for each threat as it arises—context, composition, and so on are all confounding factors. But I think that international partnerships and domestic partnerships with the private sector and critical infrastructure owners are the key to addressing non-state cyber actors and the threats they pose. The more we communicate and share intelligence and information among partners, the better we will be at anticipating threats and mitigating risk, while also ensuring that we are steadily working to create an ecosystem of support, skills, knowledge, processes, and partnerships to combat the multi-modal threats coming from non-state cyber actors.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Non-state armed groups in cyber conflict appeared first on Atlantic Council.

Guevara in El Heraldo de México: on the effectiveness of state’s strategic capabilities (in Spanish) https://www.atlanticcouncil.org/insight-impact/in-the-news/guevara-in-el-heraldo-de-mexico-on-the-effectiveness-of-states-strategic-capabilities-in-spanish/ Tue, 25 Oct 2022 17:07:00 +0000 https://www.atlanticcouncil.org/?p=588159 On October 25, Transatlantic Security Initiative nonresident senior fellow Inigo Guevara authored an op-ed in El Heraldo de México discussing what makes a state’s strategic capabilities effective (text in Spanish).


The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

The post Guevara in El Heraldo de México: on the effectiveness of state’s strategic capabilities (in Spanish) appeared first on Atlantic Council.
