Digital Policy - Atlantic Council https://www.atlanticcouncil.org/issue/digital-policy/ Shaping the global future together Fri, 16 Aug 2024 19:34:25 +0000 en-US hourly 1 https://wordpress.org/?v=6.5.5 https://www.atlanticcouncil.org/wp-content/uploads/2019/09/favicon-150x150.png Digital Policy - Atlantic Council https://www.atlanticcouncil.org/issue/digital-policy/ 32 32 The UN finally advances a convention on cybercrime . . . and no one is happy about it https://www.atlanticcouncil.org/blogs/new-atlanticist/the-un-finally-adopts-a-convention-on-cybercrime-and-no-one-is-happy/ Wed, 14 Aug 2024 20:47:22 +0000 https://www.atlanticcouncil.org/?p=785503 The treaty risks empowering authoritarian governments, harming global cybersecurity, and endangering human rights.

The post The UN finally advances a convention on cybercrime . . . and no one is happy about it appeared first on Atlantic Council.

]]>
On August 8, a contentious saga on drastically divergent views of how to address cybercrime finally came to a close after three years of treaty negotiations at the United Nations (UN). The Ad Hoc Committee set up to draft the convention on cybercrime adopted it by consensus, and the relief in the room was palpable. The member states, the committee, and especially the chair, Algerian Ambassador Faouzia Boumaiza-Mebarki, worked for a long time to come to an agreement. If adopted by the UN General Assembly later this year, as is expected, it will be the first global, legally binding convention on cybercrime. However, this landmark achievement should not be celebrated, as it poses significant risks to human rights, cybersecurity, and national security.

How did this happen? Russia, long opposed to the Council of Europe’s 2001 Budapest Convention on cybercrime, began this process in 2017. Then, in 2019, Russia, along with China, North Korea, Myanmar, Nicaragua, Syria, Cambodia, Venezuela, and Belarus, presented a resolution to develop a global treaty. Despite strong opposition from the United States and European states, the UN General Assembly adopted a resolution in December 2019, by a vote of seventy-nine in favor and sixty against (with thirty abstentions), that officially began the process. Already, it was clear that the member states did not share one vision. Indeed, they could not even agree on a name for the convention until last week. What they ended up with is a mouthful: “Draft United Nations convention against cybercrime: Strengthening international cooperation for combating certain crimes committed by means of information and communications technology systems and for the sharing of evidence in electronic form of serious crimes.”

This exceedingly long name reveals one of the biggest problems with this convention: its scope. At its heart, this convention is intended to allow law enforcement from different countries to cooperate to prevent, investigate, and prosecute cybercrime, which costs trillions of dollars globally each year. However, the convention covers much more than the typical cybercrimes that come to mind, such as ransomware, and includes crimes committed using technology, which reflects the different views as to what constitutes cybercrime. As if that were not broad enough, Russia, China, and other states succeeded in pushing for negotiations on an additional protocol that would expand the list of crimes even further. Additionally, under the convention, states parties are to cooperate on “collecting, obtaining, preserving, and sharing of evidence in electronic form of any serious crime”—which in the text is defined as a crime that is punishable by a maximum of four years or more in prison or a “more serious penalty,” such as the death penalty.

Rights-respecting states should not allow themselves to be co-opted into assisting abusive practices under the guise of cooperation.

In Russia, for example, association with the “international LGBT movement” can lead to extremism charges, such as the crime of displaying “extremist group symbols,” like the rainbow flag. A first conviction carries a penalty of up to fifteen days in detention, but a repeat offense carries a penalty of up to four years. That means a repeat offense would qualify as a “serious crime” under the cybercrime convention and be eligible for assistance from law enforcement in other jurisdictions that may possess electronic evidence relevant to the investigation—including traffic, subscriber, and even content data. Considering how much of modern life is carried out digitally, there will be some kind of electronic evidence for almost every serious crime under any domestic legislation. Even the UN’s own human rights experts cautioned against this broad definition.

Further, under the convention, states parties are obligated to establish laws in their domestic system to “compel” service providers to “collect or record” real-time traffic or content data. Many of the states behind the original drive to establish this convention have long sought this power over private firms. At the same time, states parties are free to adopt laws that keep requests to compel traffic and content data confidential—cloaking these actions in secrecy. Meanwhile, grounds for a country to refuse a cooperation request are limited to instances such as where it would be against that country’s “sovereignty,” security, or other “essential” interest, or if it would be against that country’s own laws. The convention contains a vague caveat that nothing in it should be interpreted as an obligation to cooperate if a country “has substantial grounds” to believe the request is made to prosecute or punish someone for their “sex, race, language, religion, nationality, ethnic origin, or political opinions.”

Russia claimed that such basic safeguards, which do offer some protection in the example regarding LGBT activity as “extremist,” were merely an opportunity for some countries to “abuse” the opportunity to reject cooperation requests. Those safeguards, conversely, could also be abused by the very same states that opposed them. The Iranian delegation, for its part, proposed a vote to delete that provision, as well as all other human rights safeguards and references to gender, on the day the text was adopted. These provisions had already been weakened significantly throughout the negotiation process and only survived thanks to the firm stance taken by Australia, Canada, Colombia, Iceland, the European Union, Mexico, and others that drew a red line and refused to accept any more changes.

The possible negative consequences of this convention are not limited to human rights but can seriously threaten global cybersecurity and national security. The International Chamber of Commerce, a global business organization representing millions of companies, warned during negotiations that “people who have access to or otherwise possess the knowledge and skills necessary” could be forced “to break or circumvent security systems.” Worse, they could even be compelled to disclose “previously unknown vulnerabilities, private encryption keys, or proprietary information like source code.” Microsoft agreed. Its representative, Nemanja Malisevic, added that this treaty will allow “for unauthorized disclosure of sensitive data and classified information to third states” and for “malicious actors” to use a UN treaty to “force individuals with knowledge of how a system functions to reveal proprietary or sensitive information,” which could “expose the critical infrastructure of a state to cyberattacks or lead to the theft of state secrets. Malisevic concluded that this “should terrify us all.”

Similarly, independent media organizations called for states to reject the convention, which the International Press Institute has called a “surveillance treaty.” Civil society organizations including Electronic Frontier FoundationAccess NowHuman Rights Watch, and many others have also long been ringing the alarm bell. They continue to do so as the final version of the convention adopted by the committee has failed to adequately address their concerns.

Given the extent and cross-border nature of cybercrime, it is evident that a global treaty is both necessary and urgent—on that, the international community is in complete agreement. Unfortunately, this treaty, perhaps a product of sunk-cost fallacy thinking or agreed to under duress for fear of an even worse version, does not solve the problems the international community faces. If the UN General Assembly adopts the text and the required forty member states ratify it so that it comes into force, experts are right to warn that governments intent on engaging in surveillance will have the veneer of UN legitimacy stamped on their actions. Rights-respecting states should not allow themselves to be co-opted into assisting abusive practices under the guise of cooperation. Nor should they willingly open the door to weakening their own national security or global cybersecurity.


Lisandra Novo is a staff lawyer for the Strategic Litigation Project at the Atlantic Council specializing in law and technology.

The post The UN finally advances a convention on cybercrime . . . and no one is happy about it appeared first on Atlantic Council.

]]>
Tech regulation requires balancing security, privacy, and usability  https://www.atlanticcouncil.org/blogs/econographics/tech-regulation-requires-balancing-security-privacy-and-usability/ Mon, 12 Aug 2024 14:44:33 +0000 https://www.atlanticcouncil.org/?p=785037 Good policy intentions can lead to unintended consequences when usability, privacy, and security are not balanced—policymakers must think like product designers to avoid these challenges.

The post Tech regulation requires balancing security, privacy, and usability  appeared first on Atlantic Council.

]]>
In the United States and across the globe, governments continue to grapple with how to regulate new and increasingly complex technologies, including in the realm of financial services. While they might be tempted to clamp down or impose strict centralized security requirements, recent history suggests that policymakers should jointly consider and balance usability and privacy—and approach their goals as if they were a product designer.

Kenya is a prime example: In 2007, a local telecommunications provider launched a form of mobile money called M-PESA, which enabled peer-to-peer money transfers between mobile phones and became wildly successful. Within five years, it grew to fifteen million users, with a deposit value approaching almost one billion dollars. To address rising security concerns, in 2013, the Kenyan government implemented a law requiring every citizen to officially register the SIM card (for their cell phone) using a government identification (ID). The measure was enforced swiftly, leading to the freezing of millions of SIM cards. Over ten years later, SIM card ID registration laws have become common across Africa, with over fifty countries adopting such regulations. 

But that is not the end of the story. In parallel, a practice called third-party SIM registration has become rampant, in which cell phone users register their SIM cards using someone else’s ID, such as a friend’s or a family member’s. 

Our recent research at Carnegie Mellon University, based on in-depth user studies in Kenya and Tanzania, found that this phenomenon of third-party SIM registration has both unexpected origins and unintended consequences. Many individuals in those countries face systemic challenges in obtaining a government ID. Moreover, some participants in our study reported having privacy concerns. They felt uncomfortable sharing their ID information with mobile money agents, who could repurpose that information for scams, harassment, or other unintended uses. Other participants felt “frustrated” by a process that was “cumbersome.” As a result, many users prefer to register a SIM card with another person’s ID rather than use or obtain their own ID.

Third-party SIM registration plainly undermines the effectiveness of the public policy and has additional, downstream effects. Telecommunications companies end up collecting “know your customer” information that is not reliable, which can impede law enforcement investigations in the case of misconduct. For example, one of our study subjects shared the story of a friend lending their ID for third-party registration, and later being arrested for the alleged crimes of the actual user of the SIM card. 

A core implication of our research is that the Kenyan government’s goals did not fully take into account the realities of the target population—or the feasibility of the measures that Kenya and Tanzania proposed. In response, people invented their own workarounds, thus potentially introducing new vulnerabilities and avenues for fraud.

Good policy, bad consequences 

Several other case studies demonstrate how even well-intentioned regulations can have unintended consequences and practical problems if they do not appropriately consider security, privacy and usability together. 

  • Uganda: Much like our findings in Kenya and Tanzania, a biometric digital identity program in Uganda has considerable unintended consequences. Specifically, it risks excluding fifteen million Ugandans “from accessing essential public services and entitlements” because they do not have access to a national digital identity card there. While the digitization of IDs promises to offer certain security features, it also has potential downsides for data privacy and risks further marginalizing vulnerable groups who are most in need of government services.
  • Europe: Across the European Union (EU), a landmark privacy law called General Data Protection Regulation (GDPR) has been critical for advancing data protection and has become a benchmark for regulatory standards worldwide. But GDPR’s implementation has had unforeseen effects such as some websites blocking EU users. Recent studies have also highlighted various usability issues that may thwart the desired goals. For example, opting out of data collection through app permissions and setting cookie preferences is an option for users. But this option is often exclusionary and inconvenient, resulting in people categorically waiving their privacy for the sake of convenience.
  • United States (health law): Within the United States, the marquee federal health privacy law passed in 1996 (the Health Insurance Portability and Accountability Act, known as HIPAA) was designed to protect the privacy and security of individuals’ medical information. But it also serves as an example of laws that can present usability challenges for patients and healthcare providers alike. For example, to comply with HIPAA, many providers still require the use of ink signatures and fax machines. Not only are technologies somewhat antiquated and cumbersome (thereby slowing information sharing)—they also pose risks arising from unsecured fax machines and misdialed phone numbers, among other factors.
  • Jamaica: Both Jamaica and Kenya have had to halt national plans to launch a digital ID in light of privacy and security issues. Kenya already lost over $72 million from a prior project that was launched in 2019, which failed because of serious concerns related to privacy and security. In the meantime, fraud continues to be a considerable problem for everyday citizens: Jamaica has incurred losses of more than $620 million from fraud since 2018.
  • United States [tax system]: The situation in Kenya and Jamaica mirrors the difficulties encountered by other digital ID programs. In the United States, the Internal Revenue Service (IRS) has had to hold off plans for facial recognition based on concerns about the inadequate privacy measures, as well as usability concerns—like long verification wait times, low accuracy for certain groups, and the lack of offline options. The stalled program has resulted in missed opportunities for other technologies that could have allowed citizens greater convenience in accessing tax-related services and public benefits. Even after investing close to $187 million towards biometric identification, the IRS has not made much progress.

Collectively, a key takeaway from these international experiences is that when policymakers fail to simultaneously balance (or even consider) usability, privacy, and security, the progress of major government initiatives and the use of digitization to achieve important policy goals is hampered. In addition to regulatory and legislative challenges, delaying or canceling initiatives due to privacy and usability concerns can lead to erosion in public trust, increased costs and delays, and missed opportunities for other innovations.

Policy as product design

Going forward, one pivotal way for government decision makers to avoid pitfalls like the ones laid out above is to start thinking like product designers. Focusing on the most immediate policy goals is rarely enough to understand the practical and technological dimensions of how that policy will interact with the real world.

That does not mean, of course, that policymakers must all become experts in creating software products or designing user interfaces. But it does mean that some of the ways that product designers tend to think about big projects could inform effective public policy.

First, policymakers should embrace user studies to better understand the preferences and needs of citizens as they interact digitally with governmental programs and services. While there are multiple ways user studies can be executed, the first often includes upfront qualitative and quantitative research to understand the core behavioral drivers and systemic barriers to access. These could be complemented with focus groups, particularly with marginalized communities and populations who are likely to be disproportionately affected by any unintended outcomes of tech policy. 

Second, like early-stage technology products that are initially rolled out to an early group of users (known as “beta-testing”), policymakers could benefit from pilot testing to encourage early-stage feedback. 

Third, regulators—just like effective product designers—should consider an iterative process whereby they solicit feedback, implement changes to a policy or platform, and then repeat the process. This allows for validation of the regulation and makes room for adjustments and continuous improvements as part of an agency’s rulemaking process.

Lastly, legislators and regulators alike should conduct more regular tabletop exercises to see how new policies might play out in times of crisis. The executive branch regularly does such “tabletops” in the context of national security emergencies. But the same principles could apply to understanding cybersecurity vulnerabilities or user responses before implementing public policies or programs at scale.

In the end, a product design mindset will not completely eliminate the sorts of problems we have highlighted in Kenya, the United States, and beyond. However, it can help to identify the most pressing usability, security, and privacy problems before governments spend time and treasure to implement regulations or programs that may not fit the real world.


Karen Sowon is a user experience researcher and post doctoral research associate at Carnegie Mellon University.

JP Schnapper-Casteras is a nonresident senior fellow at the Atlantic Council’s GeoEconomics Center and the founder and managing partner at Schnapper-Casteras, PLLC.


Giulia Fanti is a nonresident senior fellow at the Atlantic Council’s GeoEconomics Center and an assistant professor of electrical and computer engineering at Carnegie Mellon University.

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.

The post Tech regulation requires balancing security, privacy, and usability  appeared first on Atlantic Council.

]]>
The future of digital transformation and workforce development in Latin America and the Caribbean https://www.atlanticcouncil.org/in-depth-research-reports/report/the-future-of-digital-transformation-and-workforce-development-in-latin-america-and-the-caribbean/ Thu, 08 Aug 2024 14:00:00 +0000 https://www.atlanticcouncil.org/?p=775109 During an off-the-record private roundtable, thought leaders and practitioners from across the Americas evaluated progress made in the implementation of the Regional Agenda for Digital Transformation.

The post The future of digital transformation and workforce development in Latin America and the Caribbean appeared first on Atlantic Council.

]]>
The sixth of a six-part series following up on the Ninth Summit of the Americas commitments.

An initiative led by the Atlantic Council’s Adrienne Arsht Latin America Center in partnership with the US Department of State continues to focus on facilitating greater constructive exchange among multisectoral thought leaders and government leaders as they work to implement commitments made at the ninth Summit of the Americas. This readout was informed by a private, information-gathering roundtable and several one-on-one conversations with leading experts in the digital space.

Executive summary

At the ninth Summit of the Americas, regional leaders agreed on the adoption of a Regional Agenda for Digital Transformation that reaffirmed the need for a dynamic and resilient digital ecosystem that promotes digital inclusion for all peoples. The COVID-19 pandemic exacerbated the digital divide globally, but these gaps were shown to be deeper in developing countries, disproportionately affecting women, children, persons with disabilities, and other vulnerable and/or marginalized individuals. Through this agenda, inclusive workforce development remains a key theme as an avenue to help bridge the digital divide and skills gap across the Americas.

As part of the Atlantic Council’s consultative process, thought leaders and practitioners evaluated progress made in the implementation of the Regional Agenda for Digital Transformation agreed on at the Summit of Americas, resulting in three concrete recommendations: (1) leverage regional alliances and intraregional cooperation mechanisms to accelerate implementation of the agenda; (2) strengthen public-private partnerships and multisectoral coordination to ensure adequate financing for tailored capacity-building programs, the expansion of digital infrastructure, and internet access; and (3) prioritize the involvement of local youth groups and civil society organizations, given their on-the-ground knowledge and role as critical indicators of implementation.

Recommendations for advancing digitalization and workforce development in the Americas:

  1. Leverage regional alliances and intraregional cooperation mechanisms to accelerate implementation of the agenda.
  • Establish formal partnerships between governments and local and international universities to broaden affordable student access to exchange programs, internships, and capacity-building sessions in emerging fields such as artificial intelligence and cybersecurity. Programs should be tailored to country-specific economic interests and sectors such as agriculture, manufacturing, and tourism. Tailoring these programs can also help enhance students’ access to the labor market upon graduation.
  • Ensure existing and new digital capacity-building programs leverage diaspora professionals. Implement virtual workshops, webinars, and collaborative projects that transfer knowledge and skills from technologically advanced regions to local communities. Leveraging these connections will help ensure programs are contextually relevant and effective.
  • Build on existing intraregional cooperation mechanisms and alliances to incorporate commitments of the Regional Agenda for Digital Transformation. Incorporating summit commitments to mechanisms such as the Alliance for Development in Democracy, the Americas Partnership for Economic Prosperity, the Caribbean Community and Common Market, and other subregional partnerships can result in greater sustainability of commitments as these alliances tend to transcend finite political agendas.
  • Propose regional policies to standardize the recognition of digital nomads and remote workers, including visa programs, tax incentives, and employment regulations. This harmonization will facilitate job creation for young professionals and enhance regional connectivity.
  1. Prioritize workforce development for traditionally marginalized groups by strengthening public-private partnerships and multisectoral collaboration.
  • Establish periodic and open dialogues between the public and private sectors to facilitate the implementation of targeted digital transformation for key sectors of a country’s economy that can enhance and modernize productivity. For instance, provide farmers with digital tools for precision agriculture, train health care workers in telemedicine technologies, and support tourism operators in developing online marketing strategies.
  • Foster direct lines of communication with multilateral organizations such as the Inter-American Development Bank and the World Bank. Engaging in periodic dialogues with these actors will minimize duplication of efforts and maximize the impact of existing strategies and lines of work devoted to creating digital societies that are more resilient and inclusive. Existing and new programs should be paired with employment opportunities and competitive salaries for marginalized groups based on the acquired skills, thereby creating strong incentives to pursue education in digital skills.
  • Collaborate with telecommunications companies to offer subsidized internet packages for low-income households and small businesses and simplify regulatory frameworks to attract investment in rural and underserved areas, expanding internet coverage and accessibility.
  • Enhance coordination with private sector and multilateral partners to create a joint road map for sustained financing of digital infrastructure and workforce development to improve investment conditions in marginalized and traditionally excluded regions and cities.
  1. Increase engagement with local youth groups and civil society organizations to help ensure digital transformation agendas are viable and in line with local contexts.
  • Facilitate periodic dialogues with civil society organizations, the private sector , and government officials and ensure that consultative meetings are taking place at remote locations to ensure participation from disadvantaged populations in the digital space. Include women, children, and persons with disabilities to ensure capacity programs are generating desired impact and being realigned to address challenges faced by key, targeted communities.
  • Work with local actors such as youth groups and civil society organizations to conduct widespread awareness campaigns to help communities visualize the benefits of digital skills and technology use. Utilize success stories and case studies to show how individuals and businesses can thrive in a digital economy, fostering a culture of innovation and adaptation.
  • Invest in local innovation ecosystems by providing grants and incentives for start-ups and small businesses working on digital solutions. Create business incubators and accelerators to support the growth of digital enterprises, particularly those addressing local challenges.
  • Offer partnership opportunities with governments to provide seed capital, contests, digital boot camps, and mentorship sessions specifically designed for girls and women in school or college to help bridge the gender digital divide.

Related content

Solar panels on a field of grass

Report

May 31, 2024

PACC 2030 objectives: The road to implementation

By Wazim Mowla, Charlene Aguilera

The Atlantic Council organized a PACC 2030 Working Group and worked closely with governments, the business community, and civil society organizations to support the implementation of PACC 2030’s objectives.

Caribbean Climate Change & Climate Action
Medical personnel handling COVID swab test.

Report

Apr 16, 2024

Advancing health and resilience policies in Latin America and the Caribbean

By Isabel Chiriboga, Martin Cassinelli, Diego Area

During an off-the-record private roundtable, thought leaders and practitioners from across the Americas discussed how to further enhance access to and finance for health services and products in the region.

Coronavirus Latin America

Summit of the Americas

Amid global uncertainties and new challenges, the ninth Summit of the Americas is a renewed opportunity to bring about hemispheric cooperation and consensus to reach regional prosperity and security.

Related experts



Subscribe to LAC Source Newsletter
Get monthly updates on Latin America and the Caribbean (LAC) to receive the latest developments of the region, upcoming public events and recaps, new reports, and more.

The Adrienne Arsht Latin America Center broadens understanding of regional transformations and delivers constructive, results-oriented solutions to inform how the public and private sectors can advance hemispheric prosperity.

The post The future of digital transformation and workforce development in Latin America and the Caribbean appeared first on Atlantic Council.

]]>
Effective US government strategies to address China’s information influence https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/effective-us-government-strategies-to-address-chinas-information-influence/ Tue, 30 Jul 2024 12:00:00 +0000 https://www.atlanticcouncil.org/?p=782361 To mount the most effective response to Chinese influence and the threat it poses to democratic interests at home and on the international stage, the United States should develop a global information strategy, one that reflects the interconnected nature of regulatory, industrial, and diplomatic policies with regard to the information domain.

The post Effective US government strategies to address China’s information influence appeared first on Atlantic Council.

]]>

China’s global influence operations have received increasing attention in the national security community. Numerous congressional hearings, media reports, and academic and industry findings have underscored China’s increased use and resourcing of foreign information manipulation and interference (FIMI) tactics in its covert operations both in the United States and abroad.

In response, US government offices the Foreign Malign Influence Center (FMIC), the Global Engagement Center (GEC), and the Cybersecurity and Infrastructure Security Agency (CISA), among others, have made strides in raising awareness of the issue and charting pathways to increase the resilience of the US information ecosystem to foreign influence. To date, however, the efforts to counter the influence of the People’s Republic of China (PRC) have been fragmented. That fragmentation is indicative of a lack of cohesion around the concept of influence operations itself.

Across the government and nongovernment sectors alike, there is considerable variation regarding the definition and scope of information manipulation. For example, the Department of State’s (DOS’s) GEC has an expansive definition, which includes “leveraging propaganda and censorship, promoting digital authoritarianism, exploiting international organizations and bilateral partnerships, pairing cooptation and pressure, and exercising control of Chinese-language media.” Others define it more narrowly as disinformation and propaganda spread by a foreign threat actor in a coordinated, inauthentic manner, and largely occurring on social media platforms.

This variation is a reflection of the holistic and multifaceted nature of Chinese influence. Coercive tactics and influence operations have long been a central part of China’s strategic tool kit and core to how it engages with the outside world. Because China conceives of the information domain as a space that must be controlled and dominated to ensure regime survival, information operations are part of a much bigger umbrella of influence that spans the economic, political, and social domains. It may be more useful to think of information manipulation as existing within the broader conceptual framework of China’s weaponization of the information domain in service of its goal to gain global influence.

As previous work by the Digital Forensic Lab (DFRLab) has shown, China’s approach to the information domain is coordinated and proactive, taking into account the mutually constitutive relationships between the economic, industrial, and geopolitical strategies of the Chinese Communist Party (CCP). The aim of its efforts is to gain influence—or “discourse power”—with the ultimate goal of decentering US power and leadership on the global stage. One of the main mechanisms through which the CCP seeks to achieve this objective is by focusing on the dominance of information ecosystems. This ecosystem encompasses not only narratives and content that appear in traditional and social media but also the digital infrastructure on which communication systems rely, the policies that govern those systems at the international level, and the diplomatic strategy deployed by Beijing’s operatives abroad to gain buy-in for the CCP’s vision of the global order.

The DFRLab’s previous two reports, which explored China’s strategy and the impacts of its operations abroad, found that the United States will not be successful in addressing the challenges of Chinese influence if it sees that influence as separate from the interconnected economic, political, and technical domains in which its strategy is embedded.

To this end, the DFRLab hosted a series of one-on-one expert interviews, conducted research and workshops, and held a virtual roundtable discussion with scholars and practitioners with expertise on or experience in addressing authoritarian influence and information operations, US government processes and policies around these issues, and Chinese foreign policy. This issue brief is part of a larger body of work that examines the Chinese government’s interests and capabilities and the impacts of party’s efforts to shape the global information ecosystem. The focus of this report is on how the US government can best respond to those challenges, including the architecture, tools, and strategies that exist for addressing PRC influence and information manipulation, as well as any potential gaps in the government tool kit.

This report finds that, to mount the most effective response to Chinese influence and the threat it poses to democratic interests at home and on the international stage, the United States should develop a global information strategy, one that reflects the interconnected nature of regulatory, industrial, and diplomatic policies with regard to the information domain. A core assumption undergirding this concept is that US policymaking space tends to over-index on the threat of information manipulation in particular while under-indexing on the core national interest of fostering a secure, interoperable information environment on a larger scale.

The limits of understanding Chinese influence as systemic and part of a broader strategy has sometimes led US response to be pigeonholed as an issue of strategic communications, rather than touching on the information and technology ecosystems, among others, where China focuses its information and influence efforts. Responding to Chinese influence with government messaging is not sufficient to address the complex nature of the challenge and places the United States in a position of reactivity.

In short, understanding that the CCP (1) integrates its tech industrial strategy, governance policy, and engagement strategy and (2) connects its approach at home to how it engages abroad, the United States needs to do the same, commensurate with its values. It should not respond tit-for-tat but rather have a collective strategy for a global competition for information that connects its tech strategy to its governance approach to its engagement around the world.

That is not to say that a US strategy on information resilience should mirror China’s, or that such a strategy should be developed in response to the PRC’s actions in the information domain. Nor is it to say that the United States should adopt a similar whole-of-government approach to the information domain. There are silos by design in the US system and important legal and normative foundations for the clear delineation of mission between them. What this issue brief argues for is a strategic breaking down of silos to facilitate proactive action versus a dangerous breaking down of legally required silos.

This report emphasizes that the United States should articulate how major initiatives like the CHIPS and Science Act, regulatory approaches like the recent executive orders on AI and data security, and the DOS’s recent cyberspace and digital policy strategy are part of a cohesive whole and should be understood and operationalized as such.

The strategy should outline what the United States stands for as much as what it is against. This requires that the United States frame its assessment of threat within a broader strategy of what its values are and how those values should be articulated in its regulatory, strategic, and diplomatic initiatives to promote open information environments and shore up information resilience. This includes working with allies and partners to ensure that a free, open, and interoperable internet is a global priority as well as a domestic one; developing common standards for understanding and thresholding foreign influence; and promoting connectivity at home and abroad. One finding of this report is that the United States is already leaning into its strengths and values, including championing policies that support openness and continuing support for civil society. This, along with the awareness of influence operations as the weaponization of the information domain, is a powerful response to authoritarian attacks on the integrity of both the domestic US and global information spaces.

The United States has a core national security interest in the existence of a rules-based, orderly, and open information environment. Such an environment facilitates the essential day-to-day tasks related to public diplomacy, the basic expression of rights, and investment in industries of strategic and economic value. Absent a coherent strategy on these core issues related to the integrity of the United States’ information environment that is grounded in an understanding of the interconnected nature of their constitutive parts, the challenges of foreign influence and interference will only continue to grow. This issue brief contains three sections. For sections one and two, experts in different aspects of the PRC’s information strategy addressed two to three main questions; during the course of research, further points were raised that are included in the findings. Each section represents a synthesis of the views expressed in response to these questions. The third section comprises recommendations for the US government based on the findings from the first two sections.

About the author

Related content

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) has operationalized the study of disinformation by exposing falsehoods and fake news, documenting human rights abuses, and building digital resilience worldwide.

The post Effective US government strategies to address China’s information influence appeared first on Atlantic Council.

]]>
A policymaker’s guide to ensuring that AI-powered health tech operates ethically https://www.atlanticcouncil.org/blogs/geotech-cues/a-policymakers-guide-to-ensuring-that-ai-powered-health-tech-operates-ethically/ Mon, 29 Jul 2024 20:00:57 +0000 https://www.atlanticcouncil.org/?p=782140 The private sector is moving quickly with the development of AI tools. The public sector will need to keep up with new strategies, standards, and regulations around the deployment and use of such tools in the healthcare sector.

The post A policymaker’s guide to ensuring that AI-powered health tech operates ethically appeared first on Atlantic Council.

]]>
The healthcare landscape is undergoing a profound transformation thanks to artificial intelligence (AI) and big data. However, with this transformation come complex challenges surrounding data collection, algorithmic decision-making, transparency, and workforce readiness.

That was a topic of a recent roundtable hosted by the GeoTech Center and Syntropy, a platform that works with healthcare, government, and other groups to collaborate on data in a single ecosystem geared toward informing healthcare research.

At the roundtable, experts from the public and private sectors discussed the complex challenges that arise with the transformation of the healthcare sector, arguing that these challenges lie not only in the development of the technology but also in the implementation and use of it.

As AI becomes more and more integrated with healthcare, policymakers must lay the groundwork for a future in which AI augments, rather than replaces, human expertise in the pursuit of better health outcomes for all. Below are the roundtable participants’ recommendations for policymakers, focusing on building strong data foundations, setting guidelines for algorithm testing and maintenance, fostering trust and transparency, and supporting a strong workforce.

1. Building strong data foundations

Data sets in the healthcare sector can be messy, small in scale, and lacking in diversity, leading to inherent biases that can skew the outcomes of AI-driven analyses—and decisions made following such analyses. Moreover, these biases are not always apparent and often require extensive work to identify. Thus, it is important at the outset to ensure the integrity, quality, and diversity of the data with which AI systems are trained.

The ability to do so will in part depend on the strength of the workforce and the infrastructure that collects and manages data. For example, hospitals—from large, well-funded facilities to smaller community-based hospitals with fewer resources—play an important role in collecting data.

A strong foundation for data is one that protects data. In an ideal world, all individuals (regardless of socioeconomic status or geographic location) can benefit from AI-driven healthcare technologies. With that come concerns about the protection of health data, particularly in countries with fragile democracies and low regulatory standards. The potential misuse of health data by governments around the world poses significant risks to individual privacy and autonomy, highlighting the need for robust legal and ethical frameworks to safeguard against such abuses.

To address such challenges with data collection and management, policymakers can begin by implementing the following:

  • Establishing a foundational data strategy for healthcare data that will improve patient equity by setting standards for inclusive data sets.
  • Allocating more resources and support for community hospitals to ensure that the data collected in such facilities is high quality and diverse.
  • Encouraging the development of robust data systems that allow for better data sharing, collaboration, and interoperability.
  • Optimizing patient benefits by providing transparency about not only the healthcare providers but also about anyone else participating in data sharing.

2. Establishing guidelines for algorithm testing and maintenance by healthcare-technology companies

While building an algorithm may be a complex process, understanding and testing its performance over time is even more challenging. The dynamic nature of the healthcare industry demands ongoing adaptation and refinement of algorithms to account for evolving patient needs, technological advancements, and regulatory requirements.

In addition to continuous testing, it’s important to recognize that the same algorithms may exhibit different risk profiles when deployed in different contexts. Factors such as patient demographics, disease prevalence, and healthcare infrastructure can all influence the performance and safety of AI algorithms. A one-size-fits-all approach to AI deployment in healthcare is neither practical nor advisable.

To ensure that algorithms are constantly tested and maintained, policymakers should consider the following:

  • Developing guidelines that inform developers, testers, data scientists, regulators, and clinicians about their shared responsibility of maintaining algorithms.
  • Instituting an oversight authority to continuously monitor the risks associated with decisions that have been made based on AI to ensure the algorithms remain accurate, reliable, and safe for clinical settings.

3. Fostering patient trust and transparency

As technology continues to impact the healthcare industry, and as patients often find themselves unaware of the integration of AI technologies into their care processes, it becomes more difficult for those patients to give informed consent. This lack of transparency undermines patient autonomy and raises profound ethical questions about patients’ right to be informed and participate in health-related decisions. A lack of awareness about the integration of AI technologies is just one layer to the problem; even if a patient knows that AI is playing a role in their care, they may not know about who sponsors such technologies. Sponsors pay for the testing and maintenance of these systems, and they may also have access to the patient’s data.

When AI technologies are involved in care processes, it is still important to achieve the right balance between human interaction and AI-driven solutions. While AI technologies hold great promise for improving efficiency and accuracy in clinical decision-making, they must be integrated seamlessly into existing workflows and complement (rather than replace) human expertise and judgment.

The willingness to accept AI in healthcare varies significantly among patients and healthcare professionals. To bridge this gap in acceptance and address other challenges with trust and transparency, policymakers should consider the following:

  • Providing transparent information about the capabilities, limitations, and ethical considerations of AI technologies.
  • Encouraging companies to use particular design methods that ensure that tools and practices align with privacy values and protect patient autonomy.
  • Producing guiding principles for hospitals to promote a deep understanding of the implications of AI and proactively addressing concerns related to workforce dynamics and patient care.
  • Developing strategies to strengthen institutional trust to encourage patients to share data, avoiding algorithms that develop in silos.
  • Awarding organizations with an integrity badge for transparency, responsible use, and testing.

4. Supporting a strong workforce

The integration of AI tools into healthcare workflows is challenging, particularly because of the changes in processes, job roles, patient-provider interactions, and organizational culture such implementation creates. It will be necessary to support the hospital workforce with strategies to manage this change and also with comprehensive education and training initiatives. While the focus here is on humans rather than technology, such support is just as integral to realizing the full potential of these innovations in improving patient outcomes and healthcare delivery.

Many hospitals lack the necessary capabilities to effectively leverage AI technologies to their fullest potential, but supporting technical assistance training and infrastructure could help in the successful deployment of AI technologies.

To navigate the changes that AI tools would bring to the workplace, policymakers should consider the following:

  • Releasing guidance to healthcare companies to anticipate change management, education, training, and governance.
  • Incentivizing private-sector technical assistance training and infrastructure to provide services to communities with fewer resources.
  • Creating training programs tailored to the specific needs of healthcare organizations so that stakeholders can ensure AI implementations are both effective and sustainable in the long run.

The private sector is moving quickly with the development of AI tools. The public sector will need to keep up with new strategies, standards, and regulations around the deployment and use of such tools in the healthcare sector.


Coley Felt is a program assistant at the GeoTech Center.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post A policymaker’s guide to ensuring that AI-powered health tech operates ethically appeared first on Atlantic Council.

]]>
The sovereignty trap https://www.atlanticcouncil.org/blogs/geotech-cues/the-sovereignty-trap/ Fri, 26 Jul 2024 19:11:47 +0000 https://www.atlanticcouncil.org/?p=781286 When sovereignty is invoked in digital contexts without an understanding of the broader political environment, several traps can be triggered.

The post The sovereignty trap appeared first on Atlantic Council.

]]>
This piece was originally published on DFRLab.org.

On February 28, 2024, a blog post entitled “What is Sovereign AI?” appeared on the website of NVIDIA, a chip designer and one of the world’s most valuable companies. The post defined the term as a country’s ability to produce artificial intelligence (AI) using its own “infrastructure, data, workforce and business networks.” Later, in its May 2024 earnings report, NVIDIA outlined how sovereign AI has become one of its “multibillion dollar” verticals, as it seeks to deliver AI chips and software to countries around the world.

On its face, “sovereign AI” as a concept is focused on enabling states to mitigate potential downsides of relying on foreign-made large AI models. Sovereign AI is NVIDIA’s attempt to turn this growing demand from governments into a new market, as the company seeks to offer governments computational resources that can aid them in ensuring that AI systems are tailored to local conditions. By invoking sovereignty, however, NVIDIA is weighing into a complex existing geopolitical context. The broader push from governments for AI sovereignty will have important consequences for the digital ecosystem on the whole and could undermine internet freedom. NVIDIA is seeking to respond to demand from countries that are eager for more indigenous options for developing compute capacity and AI systems. However, sovereign AI can create “sovereignty traps” that unintentionally grant momentum to authoritarian governments’ efforts to undermine multistakeholder governance of digital technologies. This piece outlines the broader geopolitical context behind digital sovereignty and identifies several potential sovereignty traps associated with sovereign AI.1

Background

Since its inception, the internet has been managed through a multistakeholder system that, while not without its flaws, sought to uphold a global, open, and interoperable internet. Maintaining this inherent interconnectedness is the foundation by which the multistakeholder community of technical experts, civil society organizations, and industry representatives have operated for years.

One of the early instantiations of digital sovereignty was introduced by China in its 2010 White Paper called “The State of China’s Internet.” In it, Beijing defined the internet as “key national infrastructure,” and as such it fell under the scope of the country’s sovereign jurisdiction. In the same breath, Chinese authorities also made explicit the centrality of internet security to digital sovereignty. In China’s case, the government aimed to address internet security risks related to the dissemination of information and data—including public opinion—that could pose a risk to the political security of the Chinese Communist Party (CCP). As a result, foreign social media platforms like X (formerly Twitter) and Facebook have been banned in China since around 2009. It is no coincidence that the remit of China’s main internet regulator, the Central Cyberspace Affairs Commission, has evolved from developing and enforcing censorship standards for online content to becoming a key policy body for regulating privacy, data security, and cybersecurity.

This emphasis on state control over the internet—now commonly referred to by China as “network sovereignty” or “cyber sovereignty” (网络主权), also characterizes China’s approach to the global digital ecosystem. Following the publication of its White Paper in 2010, in September of the following year, China, Russia, Tajikistan, and Uzbekistan jointly submitted an “International Code of Conduct for Information Security” to the United Nations General Assembly, which held that control over policies related to the governance of the internet is “the sovereign right of states”—and thus should reside squarely under the jurisdiction of the host country.

In line with this view, China has undertaken great efforts in recent years to move the center of gravity of internet governance from multistakeholder to multilateral fora. For example, Beijing has sought to leverage the platform of the Global Digital Compact under the United Nations to engage G-77 countries to support its vision. China has proposed language that would make the internet a more centralized, top-down network over which governments have sole authority, excluding the technical community and expert organizations that have helped shape community governance from the internet’s early days.

Adding to the confusion is the seeming interchangeability of the terms “cyber sovereignty,” used more frequently by China, and “digital sovereignty,” a term used most often by the European Union and its member states. While semantically similar, these terms have vastly different implications for digital policy due to the disparate social contexts in which they are embedded. For example, while the origin of the “cyber sovereignty” concept in China speaks to the CCP’s desire for internet security, some countries view cyber sovereignty as a potential pathway by which to gain more power over the development of their digital economies, thus enabling them to more efficiently deliver public goods to their citizens. There is real demand for this kind of autonomy, especially among Global Majority countries.

Democracies are now trying to find alternative concepts to capture the spirit of self-sufficiency in tech governance without lending credence to the more problematic implications of digital sovereignty. For example, in Denmark’s strategy for tech diplomacy, the government avoids reference to digital sovereignty, instead highlighting the importance of technology in promoting and preserving democratic values and human rights, while assisting in addressing global challenges. The United States’ analogous strategy invokes the concept of “digital solidarity” as a counterpoint, alluding to the importance of respecting fundamental rights in the digital world.

Thus, ideas of sovereignty, as applied to the digital, can have both a positive, rights-affirming connotation, as well as a negative one that leaves the definition of digital rights and duties to the state alone. This can lead to confusion and often obscures the legitimate concerns that Global Majority countries have about technological capacity-building and autonomy in digital governance.

NVIDIA’s addition of the concept of “sovereign AI” further complicates this terrain and may amplify the problems presented by authoritarian pushes for sovereignty in the digital domain. For example, national-level AI governance initiatives that emphasize sovereignty may undermine efforts for collective and collaborative governance of AI, reducing the efficacy of risk mitigations. Over-indexing on sovereignty in the context of technology often cedes important ground in ensuring that transformative technologies like AI are governed in an open, transparent, and rights-respecting manner. Without global governance, the full, uncritical embrace of sovereign AI may make the world less safe, prosperous, and democratic. Below we outline some of the “traps” that can be triggered when sovereignty is invoked in digital contexts without an understanding of the broader political contexts within which such terms are embedded.

Sovereignty trap 1: Sovereign systems are not collaborative

If there is one thing we have learned from the governance of the internet in the past twenty years, it is that collaboration sits at the core of how we should address the complexity and fast-paced nature of technology. AI is no different. It is an ecosystem that is both diverse and complex, which means that no single entity or person should be responsible for allocating its benefits and risks. Just like the internet, AI is full of “wicked problems,” whether regarding the ethics of autonomy or the effects that large language models could have on the climate, given the energy required to build large models. Wicked problems can only be solved through successful collaboration, not with each actor sticking its head in the sand.

Collaboration leads to more transparent governance, and transparency in how AI is governed is essential given the potential for AI systems to be weaponized and cause real-world harm. For example, many of the drones that are being used in the war in Ukraine have AI-enabled guidance or targeting systems, which has had a major impact on the war. Just as closed systems on the internet can be harmful for innovation and competition, as with operating systems or app stores built as “walled gardens,” AI systems that are created in silos and are not subject to a collaborative international governance framework will produce fewer benefits for society.

Legitimate concerns about the misappropriation of AI systems will only worsen if sovereign AI is achieved by imposing harsh restrictions on cross-border data flows. Just like in the case of the internet, data flows are crucial because they ensure access to information that is important for AI development. True collaboration can help level the playing field between stakeholders and address existing gaps, especially in regard to the need for human rights to underlie the creation, deployment, and use of AI systems.

Sovereignty trap 2: Sovereign systems make governments the sole guarantors of rights

Sovereign AI, like its antecedent “digital sovereignty,” means different things to different audiences. On one hand, it denotes reclaiming control of the future from dominant tech companies, usually based in the United States. It is important to note that rallying cries for digital sovereignty stem from real concerns about critical digital infrastructure, including AI infrastructure, being disrupted or shut down unilaterally by the United States. AI researchers have long said that actors in the Global Majority must avoid being relegated to the status of data suppliers and consumers of models, as AI systems that are built and tested in the contexts where they will actually be deployed will generate better outcomes for Global Majority users.

The other connotation of sovereign AI, however, is that the state has the sole authority to define, guarantee, or deny rights. This is particularly worrying in the context of generative AI, which is an inherently centralizing technology due to its lack of interpretability and the immense resources required to build large AI models. If governments choose to pursue sovereign AI by nationalizing data resources, such as by blocking cross-border transfer of datasets that could be used to train large AI models, this could have significant implications for human rights. For instance, governments might increase surveillance to better collect such data or to monitor cross-border transfers. At a more basic level, governments have a more essentialist understanding of national identity than civil society organizations, sociotechnical researchers, or other stakeholders who might curate national datasets, meaning government-backed data initiatives for sovereign AI are still likely to hurt marginalized populations.

Sovereignty trap 3: Sovereign systems can be weaponized

Assessing the risks of sovereign AI systems is critical, but governments lack the capacity and the incentives to do so. The bedrock of any AI system lies in the quality and quantity of the data used to build it. If the data is biased or incomplete, or if the values encoded in the data are nondemocratic or toxic, an AI system’s output will reflect these characteristics. This is akin to the old adage in computer science, “garbage in, garbage out,” emphasizing that the quality of output is determined by the quality of the input.

As countries increasingly rely on AI for digital sovereignty and national security, new challenges and potential risks emerge. Sovereign AI systems, designed to operate within a nation’s own infrastructure and data networks, might inadvertently or intentionally weaponize or exaggerate certain information based on their training data.

For instance, if a national AI system is trained on data that overwhelmingly endorses nondemocratic values or autocratic perspectives, the system may identify certain actions or entities as threats that would not be considered as such in a democratic context. These could include political opposition, civil society activism, or free press. This scenario echoes the concerns about China’s approach to “cyber sovereignty,” where the state exerts control over digital space in several ways to suppress information sources that may present views or information contradicting the official narrative of the Chinese government. This includes blocking access to foreign websites and social media platforms, filtering online content, and monitoring digital communications to prevent the dissemination of dissenting views or information deemed sensitive by the government. Such measures could potentially be reinforced through the use of sovereign AI systems.

Moreover, the legitimacy that comes with sovereign AI projects could be exploited by governments to ensure that state-backed language models endorse a specific ideology or narrative. This is already taking place in China, where the government has succeeded in censoring the outputs of homegrown large language models. This also aligns with China’s push to leverage the Global Digital Compact to reshape internet governance in favor of a more centralized approach. If sovereign AI is used to bolster the position of authoritarian governments, it could further undermine the multistakeholder model of internet and digital governance.

Conclusion

The history of digital sovereignty shows that sovereign AI comes with a number of pitfalls, even as its benefits remain largely untested. The push to wall off the development of AI and other emerging technologies with diminished external involvement and oversight is risky: lack of collaboration, governments as the sole guarantors of rights, and potential weaponization of AI systems are all major potential drawbacks of sovereign AI. The global community should focus on ensuring AI governance is open, collaborative, transparent, and aligned with core values of human rights and democracy. While sovereign AI will undoubtedly boost NVIDIA’s earnings, its impact on democracy is more ambiguous.

Addressing these potential threats is crucial for global stability and security. As AI’s impact on national security grows, it is essential to establish international norms and standards for the development and deployment of state-backed AI systems. This includes ensuring transparency in how these systems are built, maintained, released, and applied, as well as implementing measures to prevent misuse of AI applications. AI governance should seek to ensure that AI enhances security, fosters innovation, and promotes economic growth, rather than exacerbating national security threats or strengthening authoritarian governments. Our goal should be to advance the well-being of ordinary people, not sovereignty for sovereignty’s sake.


Konstantinos Komaitis is a nonresident fellow with the Democracy + Tech Initiative of the Atlantic Council’s Digital Forensic Research Lab.

Esteban Ponce de León is a research associate at the Atlantic Council’s Digital Forensic Research Lab based in Colombia.

Kenton Thibaut is a resident China fellow at the Atlantic Council’s Digital Forensic Research Lab.

Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.

Kevin Klyman is a visiting fellow at the Atlantic Council’s Digital Forensic Research Lab.

Further Reading

1    A note that countries could pursue sovereign AI in different ways, including by acquiring more AI chips and building more data centers to increase domestic capacity to train and run large AI models, training of fine-tuning national AI models with government support, building datasets of national languages (or images of people from the country) to enable the creation of more representative training datasets, or by blocking foreign firms and countries from accessing domestic resources that might otherwise be used to train their AI models (e.g., critical minerals, data laborers, datasets, or chips). This piece focuses on data, as it has been critical in discussions of digital sovereignty.

The post The sovereignty trap appeared first on Atlantic Council.

]]>
Ukraine’s drone success offers a blueprint for cybersecurity strategy https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-drone-success-offers-a-blueprint-for-cybersecurity-strategy/ Thu, 18 Jul 2024 20:28:12 +0000 https://www.atlanticcouncil.org/?p=780918 Ukraine's rapidly expanding domestic drone industry offers a potentially appealing blueprint for the development of the country's cybersecurity capabilities, writes Anatoly Motkin.

The post Ukraine’s drone success offers a blueprint for cybersecurity strategy appeared first on Atlantic Council.

]]>
In December 2023, Ukraine’s largest telecom operator, Kyivstar, experienced a massive outage. Mobile and internet services went down for approximately twenty four million subscribers across the country. Company president Alexander Komarov called it “the largest hacker attack on telecom infrastructure in the world.” The Russian hacker group Solntsepyok claimed responsibility for the attack.

This and similar incidents have highlighted the importance of the cyber front in the Russian invasion of Ukraine. Ukraine has invested significant funds in cybersecurity and can call upon an impressive array of international partners. However, the country currently lacks sufficient domestic cybersecurity system manufacturers.

Ukraine’s rapidly expanding drone manufacturing sector may offer the solution. The growth of Ukrainian domestic drone production over the past two and a half years is arguably the country’s most significant defense tech success story since the start of Russia’s full-scale invasion. If correctly implemented, it could serve as a model for the creation of a more robust domestic cybersecurity industry.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

Speaking in summer 2023, Ukraine’s Minister of Digital Transformation Mykhailo Fedorov outlined the country’s drone strategy of bringing together drone manufacturers and military officials to address problems, approve designs, secure funding, and streamline collaboration. Thanks to this approach, he predicted a one hundred fold increase in output by the end of the year.

The Ukrainian drone production industry began as a volunteer project in the early days of the Russian invasion, and quickly became a nationwide movement. The initial goal was to provide the Ukrainian military with 10,000 FPV (first person view) drones along with ammunition. This was soon replaced by far more ambitious objectives. Since the start of Russia’s full-scale invasion, more the one billion US dollars has been collected by Ukrainians via fundraising efforts for the purchase of drones. According to online polls, Ukrainians are more inclined to donate money for drones than any other cause.

Today, Ukrainian drone production has evolved from volunteer effort to national strategic priority. According to Ukrainian President Volodymyr Zelenskyy, the country will produce more than one million drones in 2024. This includes various types of drone models, not just small FPV drones for targeting personnel and armored vehicles on the battlefield. By early 2024, Ukraine had reportedly caught up with Russia in the production of kamikaze drones similar in characteristics to the large Iranian Shahed drones used by Russia to attack Ukrainian energy infrastructure. This progress owes much to cooperation between state bodies and private manufacturers.

Marine drones are a separate Ukrainian success story. Since February 2022, Ukraine has used domestically developed marine drones to damage or sink around one third of the entire Russian Black Sea Fleet, forcing Putin to withdraw most of his remaining warships from occupied Crimea to the port of Novorossiysk in Russia. New Russian defensive measures are consistently met with upgraded Ukrainian marine drones.

In May 2024, Ukraine became the first country in the world to create an entire branch of the armed forces dedicated to drone warfare. The commander of this new drone branch, Vadym Sukharevsky, has since identified the diversity of country’s drone production as a major asset. As end users, the Ukrainian military is interested in as wide a selection of manufacturers and products as possible. To date, contracts have been signed with more than 125 manufacturers.

The lessons learned from the successful development of Ukraine’s drone manufacturing ecosystem should now be applied to the country’s cybersecurity strategy. “Ukraine has the talent to develop cutting-edge cyber products, but lacks investment. Government support is crucial, as can be seen in the drone industry. Allocating budgets to buy local cybersecurity products will create a thriving market and attract investors. Importing technologies strengthens capabilities but this approach doesn’t build a robust national industry,” commented Oleh Derevianko, co-founder and chairman of Information Systems Security Partners.

The development of Ukraine’s domestic drone capabilities has been so striking because local manufacturers are able to test and refine their products in authentic combat conditions. This allows them to respond on a daily basis to new defensive measures employed by the Russians. The same principle is necessary in cybersecurity. Ukraine regularly faces fresh challenges from Russian cyber forces and hacker groups; the most effective approach would involve developing solutions on-site. Among other things, this would make it possible to conduct immediate tests in genuine wartime conditions, as is done with drones.

At present, Ukraine’s primary cybersecurity funding comes from the Ukrainian defense budget and international donors. These investments would be more effective if one of the conditions was the procurement of some solutions from local Ukrainian companies. Today, only a handful of Ukrainian IT companies supply the Ukrainian authorities with cybersecurity solutions. Increasing this number to at least dozens of companies would create a local industry capable of producing world-class products. As we have seen with the rapid growth of the Ukrainian drone industry, this strategy would likely strengthen Ukraine’s own cyber defenses while also boosting the cybersecurity of the wider Western world.

Anatoly Motkin is president of StrategEast, a non-profit organization with offices in the United States, Ukraine, Georgia, Kazakhstan, and Kyrgyzstan dedicated to developing knowledge-driven economies in the Eurasian region.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Follow us on social media
and support our work

The post Ukraine’s drone success offers a blueprint for cybersecurity strategy appeared first on Atlantic Council.

]]>
Cryptocurrency Regulation Tracker and Kumar cited by Axios on crypto regulation https://www.atlanticcouncil.org/insight-impact/in-the-news/cryptocurrency-regulation-tracker-and-kumar-cited-by-axios-on-crypto-regulation/ Thu, 18 Jul 2024 16:06:45 +0000 https://www.atlanticcouncil.org/?p=781060 Read the full newsletter here.

The post Cryptocurrency Regulation Tracker and Kumar cited by Axios on crypto regulation appeared first on Atlantic Council.

]]>
Read the full newsletter here.

The post Cryptocurrency Regulation Tracker and Kumar cited by Axios on crypto regulation appeared first on Atlantic Council.

]]>
Cryptocurrency Regulation Tracker cited by Axios on global crypto regulation https://www.atlanticcouncil.org/insight-impact/in-the-news/cryptocurrency-regulation-tracker-cited-by-axios-on-global-crypto-regulation/ Mon, 15 Jul 2024 13:45:54 +0000 https://www.atlanticcouncil.org/?p=781000 Read the full newsletter here.

The post Cryptocurrency Regulation Tracker cited by Axios on global crypto regulation appeared first on Atlantic Council.

]]>
Read the full newsletter here.

The post Cryptocurrency Regulation Tracker cited by Axios on global crypto regulation appeared first on Atlantic Council.

]]>
Cryptocurrency Regulation Tracker cited by Politico on crypto relevance in US election https://www.atlanticcouncil.org/insight-impact/in-the-news/cryptocurrency-regulation-tracker-cited-by-politico-on-crypto-relevance-in-us-election-cycle/ Mon, 15 Jul 2024 13:38:22 +0000 https://www.atlanticcouncil.org/?p=780996 Read the full newsletter here.

The post Cryptocurrency Regulation Tracker cited by Politico on crypto relevance in US election appeared first on Atlantic Council.

]]>
Read the full newsletter here.

The post Cryptocurrency Regulation Tracker cited by Politico on crypto relevance in US election appeared first on Atlantic Council.

]]>
Transatlantic Economic Statecraft Report cited in the International Cybersecurity Law Review on semiconductor supply chains https://www.atlanticcouncil.org/insight-impact/in-the-news/transatlantic-economic-statecraft-report-cited-in-the-international-cybersecurity-law-review-on-semiconductor-supply-chains/ Tue, 25 Jun 2024 13:57:00 +0000 https://www.atlanticcouncil.org/?p=779317 Read the journal article here.

The post Transatlantic Economic Statecraft Report cited in the International Cybersecurity Law Review on semiconductor supply chains appeared first on Atlantic Council.

]]>
Read the journal article here.

The post Transatlantic Economic Statecraft Report cited in the International Cybersecurity Law Review on semiconductor supply chains appeared first on Atlantic Council.

]]>
Kumar cited by Axios on wholesale central bank digital currency development https://www.atlanticcouncil.org/insight-impact/in-the-news/kumar-cited-by-axios-on-wholesale-central-bank-digital-currency-development/ Mon, 24 Jun 2024 16:37:39 +0000 https://www.atlanticcouncil.org/?p=776865 Read the full newsletter here.

The post Kumar cited by Axios on wholesale central bank digital currency development appeared first on Atlantic Council.

]]>
Read the full newsletter here.

The post Kumar cited by Axios on wholesale central bank digital currency development appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Coingeek on wholesale central bank digital currency development https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-coingeek-on-wholesale-central-bank-digital-currency-development/ Sat, 22 Jun 2024 16:33:53 +0000 https://www.atlanticcouncil.org/?p=776861 Read the full article here.

The post CBDC Tracker cited by Coingeek on wholesale central bank digital currency development appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by Coingeek on wholesale central bank digital currency development appeared first on Atlantic Council.

]]>
Zaaimi in Leadership Connect: Tribal Spotlight Interview https://www.atlanticcouncil.org/insight-impact/in-the-news/zaaimi-in-leadership-connect-tribal-spotlight-interview/ Tue, 18 Jun 2024 18:57:35 +0000 https://www.atlanticcouncil.org/?p=774275 The post Zaaimi in Leadership Connect: Tribal Spotlight Interview appeared first on Atlantic Council.

]]>

The post Zaaimi in Leadership Connect: Tribal Spotlight Interview appeared first on Atlantic Council.

]]>
Tran, Matthews, and CBDC Tracker cited by YouTube video on Saudi Arabia mBridge membership https://www.atlanticcouncil.org/insight-impact/in-the-news/tran-matthews-and-cbdc-tracker-cited-by-youtube-video-on-saudi-arabia-mbridge-membership/ Mon, 17 Jun 2024 20:48:40 +0000 https://www.atlanticcouncil.org/?p=774963 Watch the full video here.

The post Tran, Matthews, and CBDC Tracker cited by YouTube video on Saudi Arabia mBridge membership appeared first on Atlantic Council.

]]>
Watch the full video here.

The post Tran, Matthews, and CBDC Tracker cited by YouTube video on Saudi Arabia mBridge membership appeared first on Atlantic Council.

]]>
Kumar and CBDC Tracker cited by Axios on global central bank digital currency development https://www.atlanticcouncil.org/insight-impact/in-the-news/kumar-and-cbdc-tracker-cited-by-axios-on-global-central-bank-digital-currency-development/ Mon, 17 Jun 2024 20:32:28 +0000 https://www.atlanticcouncil.org/?p=774947 Read the full newsletter here.

The post Kumar and CBDC Tracker cited by Axios on global central bank digital currency development appeared first on Atlantic Council.

]]>
Read the full newsletter here.

The post Kumar and CBDC Tracker cited by Axios on global central bank digital currency development appeared first on Atlantic Council.

]]>
Designing a blueprint for open, free and trustworthy digital economies https://www.atlanticcouncil.org/blogs/econographics/designing-a-blueprint-for-open-free-and-trustworthy-digital-economies/ Fri, 14 Jun 2024 21:21:25 +0000 https://www.atlanticcouncil.org/?p=773476 US digital policy must be aimed at improving national security, defending human freedom, dignity, and economic growth while ensuring necessary accountability for the integrity of the technological bedrock.

The post Designing a blueprint for open, free and trustworthy digital economies appeared first on Atlantic Council.

]]>
More than half a century into the information age, it is clear how policy has shaped the digital world. The internet has enabled world-changing innovation, commercial developments, and economic growth through a global and interoperable infrastructure. However, the internet is also home to rampant fraud, misinformation, and criminal exploitation. To shape policy and technology to address these challenges in the next generation of digital infrastructure, policymakers must confront two complex issues: the difficulty of massively scaling technologies and the growing fragmentation across technological and economic systems.

How today’s policymakers decide to balance freedom and security in the digital landscape will have massive consequences for the future. US digital policy must be aimed at improving national security, defending human freedom, dignity, and economic growth while ensuring necessary accountability for the integrity of the technological bedrock.

Digital economy building blocks and the need for strategic alignment

Digital policymakers face a host of complex issues, such as regulating and securing artificial intelligence, banning or transitioning ownership of TikTok, combating pervasive fraud, addressing malign influence and interference in democratic processes, considering updates to Section 230 and impacts on tech platforms, and implementing zero-trust security architectures. When addressing these issues, policymakers must keep these core building blocks of the digital economy front and center:

  • Infrastructure: How to provide the structure, rails, processes, standards, and technologies for critical societal functions;
  • Data: How to protect, manage, own, use, share, and destroy open and sensitive data; and
  • Identity: How to represent and facilitate trust and interactions across people, entities, data, and devices.

How to approach accountability—who is responsible for what—in each of these pillars sets the stage for how future digital systems will or will not be secure, competitive, and equitable.

Achieving the right balance between openness and security is not easy, and the stakes for both personal liberty and national security amid geostrategic competition are high. The open accessibility of information, infrastructure, and markets enabled by the internet all bring knowledge diffusion, data flows, and higher order economic developments, which are critical for international trade and investment.

However, vulnerabilities in existing digital ecosystems contribute significantly to economic losses, such as the estimated $600 billion per year lost to intellectual property theft and the $8 trillion in global costs last year from cybercrime. Apart from direct economic costs, growing digital authoritarianism threatens undesirable censorship, surveillance, and manipulation of foreign and domestic societies that could not only undermine democracy but also reverse the economic benefits wrought from democratization.

As the United States pursues its commitment with partner nations toward an open, free, secure internet, Washington must operationalize that commitment into specific policy and technological implementations coordinated across the digital economy building blocks. It is critical to shape them to strengthen their integrity while preventing undesired fragmentation, which could hinder objectives for openness and innovation.

Infrastructure

The underlying infrastructure and technologies that define how consumers and businesses get access to and can use information are featured in ongoing debates and policymaking, which has led to heightened bipartisan calls for accountability across platform operators. Further complicating the landscape of accountability in infrastructure are the growing decentralization and aggregation of historically siloed functions and systems. As demonstrated by calls for decentralizing the banking system or blockchain-based decentralized networks underlying cryptocurrencies, there is an increasing interest from policymakers and industry leaders to drive away from concentration risks and inequity that can be at risk in overly centralized systems.

However, increasing decentralization can lead to a lack of clear lines of responsibility and accountability in the system. Accountability and neutrality policy are also impacted by increasing digital interconnectedness and the commingling of functions. The Bank of the International Settlement recently coined a term, “finternet,” to describe the vision of an exciting but complexly interconnected digital financial system that must navigate international authorities, sovereignty, and regulatory applicability in systems that operate around the world.

With this tech and policy landscape in mind, infrastructure policy should focus on two aspects:

  • Ensuring infrastructure security, integrity, and openness. Policymakers and civil society need to articulate and test a clear vision for stakeholders to coordinate on what openness and security across digital infrastructure for cross-economic purposes should look like based on impacts to national security, economic security, and democratic objectives. This would outline elements such as infrastructure ecosystem participants, the degree of openness, and where points for responsibility of controls should be, whether through voluntary or enforceable means. This vision would build on ongoing Biden administration efforts and provide a north star for strategic coordination with legislators, regulators, industry, civil society, and international partners to move in a common direction.
  • Addressing decentralization and the commingling of infrastructure. Technologists must come together with policymakers to ensure that features for governance and security are fit for purpose and integrated early in decentralized systems, as well as able to oversee and ensure compliance for any regulated, high-risk activity.

Data

Data has been called the new oil, the new gold, and the new oxygen. Perhaps overstated, each description nonetheless captures what is already the case: Data is incredibly valuable in digital economies. US policymakers should focus on how to surround how to address the privacy, control, and integrity of data, the fundamental assets of value in information economies.

Privacy is a critical area to get right in the collection and management of information. The US privacy framework is fragmented and generally use-specific, framed for high risk sectors like finance and healthcare. In the absence of a federal-government-wide consumer data privacy law, some states are implementing their own approaches. In light of existing international data privacy laws, US policy also has to account for issues surrounding harmonization and potential economic hindrances brought by data localization.

Beyond just control of privacy and disclosure, many tech entrepreneurs, legislators, and federal agencies are aimed at placing greater ownership of data and subsequent use in the hands of consumers. Other efforts supporting privacy and other national and economic security concerns are geared toward protecting against the control and ownership of sensitive data by adversarial nations or anti-competitive actors, including regulations on data brokers and the recent divest-or-ban legislation targeted at TikTok.

There is also significant policy interest surrounding the integrity of information and the systems reliant on it, such as in combating the manipulation of data underlying AI systems and protecting electoral processes that could be vulnerable to disinformation. Standards and research are rising, focused on data provenance and integrity techniques. But there remain barriers to getting the issue of data integrity right in the digital age.

While there is some momentum for combating data integrity compromise, doing so is rife with challenges of implementation and preserving freedom of expression that have to be addressed to achieve the needed balance of security and freedom:

  • Balancing data security, discoverability, and privacy. Stakeholders across various key functions of law enforcement, regulation, civil society, and industry must together define what type of information should be discoverable by whom and under what conditions, guided by democratic principles, privacy frameworks, the rule of law, and consumer and national security interests. This would shape the technical standards and requirements for privacy tech and governance models that government and industry can put into effect.
  • Preserving consumer and democratic control and ownership of data. Placing greater control and localization protections around consumer data could bring great benefits to user privacy but must also be done in consideration of the economic impacts and higher order innovations enabled from the free flow and aggregation of data. Policy efforts could pursue research and experimentation for assessing the value of data
  • Combating manipulation and protecting information integrity. Governments must work hand in hand with civil society and, where appropriate, media organizations to pursue policies and technical developments that could contribute to promoting trust in democratic public institutions and help identify misinformation across platforms, especially in high-risk areas to societies and democracies such as election messaging, financial services and markets, and healthcare.

Identity

Talk about “identity” can trigger concerns of social credit scores and Black Mirror episodes. It may, for example, evoke a sense of state surveillance, criminal anonymity, fraud, voter and political dissident suppression, disenfranchisement of marginalized populations, or even the mundane experience of waiting in line at a department of motor vehicles. As a force for good, identity enables critical access to goods and services for consumers, helps provide recourse for victims of fraud and those seeking public benefits, and protects sensitive information while providing necessary insights to authorities and regulated institutions to hold bad actors accountable. With increasing reliance on digital infrastructure, government and industry will have to partner to create the technical and policy fabric for secure, trustworthy, and interoperable digital identity.

Digital identity is a critical element of digital public infrastructure (DPI). The United States joined the Group of Twenty (G20) leaders in committing to pursue work on secure, interoperable digital identity tools and emphasized its importance in international fora to combat illicit finance. However, while many international efforts have taken root to establish digital identity systems abroad, progress by the United States on holistic domestic or cross-border digital identity frameworks has been limited. Identity security is crucial to establish trust in US systems, including the US financial sector and US public institutions. While the Biden administration has been driving some efforts to strengthen identity, the democratized access to sophisticatedAI tools increased the threat environment significantly by making it easy to create fraudulent credentials and deepfakes that circumvent many current counter-fraud measures.

The government is well-positioned to be the key driver of investments in identity that would create the underlying fabric for trust in digital communications and commerce:

  • Investing in identity as digital public infrastructure. Digital identity development and expansion can unlock massive societal and economic benefits, including driving value up to 13 percent of a nation’s gross domestic product and providing access to critical goods and services, as well as the ability to vote, engage in the financial sector, and own land. Identity itself can serve as infrastructure for higher-order e-commerce applications that rely on trust. The United States should invest in secure, interoperable digital identity infrastructure domestically and overseas, to include the provision of secure verifiable credentials and privacy-preserving attribute validation services.
  • Managing security, privacy, and equity in Identity. Policymakers must work with industry to ensure that identity systems, processes, and regulatory requirements implement appropriate controls in full view of all desired outcomes across security, privacy, and equity, consistent with National Institute of Science and Technology standards. Policies should ensure that saving resources by implementing digital identity systems also help to improve services for those not able to use them.

Technology by itself is not inherently good or evil—its benefits and risks are specific to the technological, operational, and governance implementations driven by people and businesses. This outline of emerging policy efforts affecting digital economy building blocks may help policymakers and industry leaders consider efforts needed to drive alignment to preserve the benefits of a global, interoperable, secure and free internet while addressing the key shortfalls present in the current digital landscape.


Carole House is a nonresident senior fellow at the Atlantic Council GeoEconomics Center and the Executive in Residence at Terranet Ventures, Inc. She formerly served as the director for cybersecurity and secure digital innovation for the White House National Security Council, where Carole will soon be returning as the Special Advisor for Cybersecurity and Critical Infrastructure Policy. This article reflects views expressed by the author in her personal capacity.

The post Designing a blueprint for open, free and trustworthy digital economies appeared first on Atlantic Council.

]]>
Cryptocurrency Regulation Tracker cited in Bank of International Settlements Paper on CBDC and crypto development https://www.atlanticcouncil.org/insight-impact/in-the-news/cryptocurrency-regulation-tracker-cited-in-bank-of-international-settlements-paper-on-cbdc-and-crypto-development/ Fri, 14 Jun 2024 16:04:00 +0000 https://www.atlanticcouncil.org/?p=781057 Read the full report here.

The post Cryptocurrency Regulation Tracker cited in Bank of International Settlements Paper on CBDC and crypto development appeared first on Atlantic Council.

]]>
Read the full report here.

The post Cryptocurrency Regulation Tracker cited in Bank of International Settlements Paper on CBDC and crypto development appeared first on Atlantic Council.

]]>
House published in Bloomberg Law on US public-private investment in critical technology https://www.atlanticcouncil.org/insight-impact/in-the-news/house-published-in-bloomberg-law-on-us-public-private-investment-in-critical-technology/ Fri, 14 Jun 2024 14:51:07 +0000 https://www.atlanticcouncil.org/?p=773352 Read the full article here.

The post House published in Bloomberg Law on US public-private investment in critical technology appeared first on Atlantic Council.

]]>
Read the full article here.

The post House published in Bloomberg Law on US public-private investment in critical technology appeared first on Atlantic Council.

]]>
CBDC Tracker cited by MSN on central bank digital currency development outside US https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-msn-on-central-bank-digital-currency-development-outside-us/ Thu, 13 Jun 2024 14:44:23 +0000 https://www.atlanticcouncil.org/?p=773349 Read the full article here.

The post CBDC Tracker cited by MSN on central bank digital currency development outside US appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by MSN on central bank digital currency development outside US appeared first on Atlantic Council.

]]>
Intentionally vague: How Saudi Arabia and Egypt abuse legal systems to suppress online speech https://www.atlanticcouncil.org/in-depth-research-reports/report/intentionally-vague-how-saudi-arabia-and-egypt-abuse-legal-systems-to-suppress-online-speech/ Wed, 12 Jun 2024 11:00:00 +0000 https://www.atlanticcouncil.org/?p=771211 Egypt and Saudi Arabia are weaponizing vaguely written domestic media, cybercrime, and counterterrorism laws to target and suppress dissent, opposition, and vulnerable groups.

The post Intentionally vague: How Saudi Arabia and Egypt abuse legal systems to suppress online speech appeared first on Atlantic Council.

]]>

Egypt and Saudi Arabia are weaponizing vaguely written domestic media, cybercrime, and counterterrorism laws to target and suppress dissent, opposition, and vulnerable groups. Political leaders in Egypt and Saudi Arabia often claim that their countries’ judicial systems enjoy independence and a lack of interference, a narrative intended to distance the states from the real and overzealous targeting and prosecution of critics. Such claims can be debunked and dismissed, as the Egyptian and Saudi governments have had direct involvement in establishing and implementing laws that are utilized to target journalists and human rights defenders.

Egypt and Saudi Arabia were selected as case studies for this report because of their status as among the most frequently documented offenders in the region when it comes to exploiting ambiguously written laws to target and prosecute journalists, critics, activists, human rights defenders, and even apolitical citizens. The two countries have consolidated power domestically, permitting them to utilize and bend their domestic legal systems to exert control over the online information space. Punishments for those targeted can involve draconian prison sentences, travel bans, and fines, which result in a chilling effect that consequently stifles online speech and activities, preventing citizens from discussing political, social, and economic issues.

Both Egypt and Saudi Arabia enacted media, cybercrime, and counterterrorism laws with ambiguous language and unclear definitions of legal terms, allowing for flexible interpretations of phrases such as “false information,” “morality,” or “family values and principles.” The laws in both countries also loosely define critical terms like “terrorism,” thereby facilitating expansive interpretations of what constitutes a terrorist crime. Further, anti-terror laws now include articles that connect the “dissemination of false information” with terrorist acts. This vague and elastic legal language has enabled the Egyptian and Saudi regimes to prosecute peaceful citizens on arbitrary grounds, sometimes handing out long prison sentences or even death sentences, undermining respect for the rule of law in the two countries.

This report explores the development of media, cybercrime, and counterterrorism laws in both countries, and demonstrates through case studies how Saudi Arabia and Egypt weaponize the laws to prosecute opposition figures and control narratives online. This report examines the relationship between criminal charges tied to one’s professional activities or online speech and how those charges can trigger online smear campaigns and harassment. In cases that involve women, gender-based violence is often used to harm a woman’s reputation. Though a direct correlation between judicial charges and online harassment cannot be ascertained, these case studies suggest that dissidents are likely to face online harm following legal persecution, even after they are released.

Related content

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) has operationalized the study of disinformation by exposing falsehoods and fake news, documenting human rights abuses, and building digital resilience worldwide.

The post Intentionally vague: How Saudi Arabia and Egypt abuse legal systems to suppress online speech appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Cryptonews on digital euro development https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-cryptonews-on-development-of-digital-euro/ Wed, 05 Jun 2024 14:44:41 +0000 https://www.atlanticcouncil.org/?p=771270 Read the full article here.

The post CBDC Tracker cited by Cryptonews on digital euro development appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by Cryptonews on digital euro development appeared first on Atlantic Council.

]]>
Lipsky quoted and CBDC Tracker cited by US News on Saudi Arabia decision to join China-led CBDC project https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-quoted-and-cbdc-tracker-cited-by-us-news-on-saudi-arabia-decision-to-join-china-led-cbdc-project/ Wed, 05 Jun 2024 14:42:06 +0000 https://www.atlanticcouncil.org/?p=771268 Read the full article here.

The post Lipsky quoted and CBDC Tracker cited by US News on Saudi Arabia decision to join China-led CBDC project appeared first on Atlantic Council.

]]>
Read the full article here.

The post Lipsky quoted and CBDC Tracker cited by US News on Saudi Arabia decision to join China-led CBDC project appeared first on Atlantic Council.

]]>
Lipsky quoted and CBDC Tracker cited by Reuters on Saudi Arabia decision to join China-led CBDC project https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-quoted-and-cbdc-tracker-cited-by-reuters-on-saudi-arabia-decision-to-join-china-led-central-bank-digital-currency-project/ Wed, 05 Jun 2024 14:36:00 +0000 https://www.atlanticcouncil.org/?p=771263 Read the full article here.

The post Lipsky quoted and CBDC Tracker cited by Reuters on Saudi Arabia decision to join China-led CBDC project appeared first on Atlantic Council.

]]>
Read the full article here.

The post Lipsky quoted and CBDC Tracker cited by Reuters on Saudi Arabia decision to join China-led CBDC project appeared first on Atlantic Council.

]]>
Who’s a national security risk? The changing transatlantic geopolitics of data transfers https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/whos-a-national-security-risk-geopolitics-of-data-transfers/ Wed, 29 May 2024 19:34:02 +0000 https://www.atlanticcouncil.org/?p=767982 The geopolitics of data transfers is changing. How will Washington's new focus on data transfers affect Europe and the transatlantic relationship?

The post Who’s a national security risk? The changing transatlantic geopolitics of data transfers appeared first on Atlantic Council.

]]>

Table of contents

Introduction
Data transfer politics come to America
Data transfer politics in Europe
Conclusions

Introduction

The geopolitics of transatlantic data transfers have been unvarying for the past decade. European governments criticize the US National Security Agency (NSA) for exploiting personal data moving from Europe to the United States for commercial reasons. The US government responds, through a series of arrangements with the European Union, by providing assurances that NSA collection is not disproportionate, and that Europeans have legal avenues if they believe their data has been illegally used. Although the arrangements have not proven legally stable, on the whole they have sufficed to keep data flowing via subsea cables under the Atlantic Ocean.

Now the locus of national security concerns about international data transfers has shifted from Brussels to Washington. The Biden administration and the US Congress, in a series of bold measures, are moving aggressively to interrupt certain cross-border data flows, notably to China and Russia.

The geopolitics of international data flows remain largely unchanged in Europe, however. European data protection authorities have been mostly noncommittal about the prospect of Russian state surveillance collecting Europeans’ personal data. Decisions on whether to transfer European data to Russia and China remain in the hands of individual companies.

Will Washington’s new focus on data transfers to authoritarian states have an impact in Europe? Will Europe continue to pay more attention to the surveillance activities of its liberal democratic allies, especially the United States? Is there a prospect of Europe and the United States aligning on the national security risks of transfers to authoritarian countries?

Data transfer politics come to America

The US government long considered the movement of personal data across borders as primarily a matter of facilitating international trade.1 US national security authorities’ surveillance of foreigners’ personal data in the course of commercial transfers was regarded as an entirely separate matter.

For example, the 2001 EU-US Safe Harbor Framework,2 the first transatlantic data transfer agreement, simply allowed the United States to assert the primacy of national security over data protection requirements, without further discussion. Similarly, the 2020 US-Mexico-Canada Free Trade Agreement3 and the US-Japan Digital Trade Agreement4 contain both free flow of data guarantees and traditional national security carve-outs from those obligations.

Edward Snowden’s 2013 revelations of expansive US NSA surveillance in Europe put the Safe Harbor Framework’s national security derogation into the political spotlight. Privacy activist Max Schrems then challenged its legality under EU fundamental rights law, and the Court of Justice of the European Union (CJEU) ruled it unacceptable.5

The 2023 EU-US Data Privacy Framework6 (DPF) is the latest response to this jurisprudence. In it, the United States commits to hold national security electronic surveillance of EU-origin personal data to a more constrained standard, as the European Commission has noted.7 The United States’ defensive goal has been to reassure Europe that it conducts foreign surveillance in a fashion that can be reconciled with EU fundamental rights law.

Now, however, the US government has begun expressly integrating its own national security considerations into decisions on the foreign destinations to which US-origin personal data may flow. It is a major philosophical shift from the prior free data flows philosophy, in which national security limits played a theoretical and marginal role.

One notable development is a February 28, 2024, executive order, Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.8 The EO empowers the Department of Justice (DOJ), in consultation with other relevant departments, to identify countries “of concern” and to prohibit or otherwise regulate bulk data transfers to them, based on a belief that these countries could be collecting such data for purposes of spying on or extorting Americans. A week later DOJ issued a proposed rule describing the envisaged regulatory regime, and proposing China, Cuba, Iran, North Korea, Russia, and Venezuela as the countries “of concern.”9

The White House, in issuing the bulk data EO, was at pains to insist that it was limited in scope and not inconsistent with the historic US commitment to the free flow of data, because it applies only to certain categories of data and certain countries.10 Nonetheless, as has been observed by scholars Peter Swire and Samm Sacks, the EO and proposed rule are, for the United States, part of “a new chapter in how it regulates data flows” in that they would create an elaborate new national security regulatory regime applying to legal commercial data activity.11

Hard on the heels of the bulk data EO came congressional passage in April of the Protecting Americans’ Data from Foreign Adversaries Act, which the president signed into law.12 It prohibits data brokers from selling or otherwise making available Americans’ sensitive information to four specified countries: China, Iran, North Korea, and Russia. The new law has a significantly broader scope than the EO. It cuts off certain data transfers to any entity controlled by one of these adversary countries, apparently including corporate affiliates and subsidiaries. It extends to any sensitive data, not just data in bulk. It remains to be seen how the administration will address the overlaps between the new law and the EO.

Another part of the same omnibus legislation ordered the ban or forced sale of TikTok, the Chinese social media platform widely used in this country.13 Advocates of the law point to the government of China’s ability under its own national security law to demand that companies operating there turn over personal data, including, potentially, TikTok users’ data transferred from the United States. Critics have cast the measure as a targeted punishment of a particular company, done without public evidence being offered of national security damage. TikTok has challenged the law as a violation of the First Amendment.14

Finally, the data transfer restrictions in these measures are thematically similar to a January 29 proposed rule from the Commerce Department obliging cloud service providers to verify the identity of their customers, on whose behalf they transfer data.15 The rule would impose know your customer (KYC) requirements—similar to those that apply in the international banking context—for cloud sales to non-US customers, wherever located.

This extraordinary burst of legislative and executive action focused on the national security risks of certain types of data transfers from the United States to certain authoritarian states is indicative of how far and fast political attitudes have shifted in this country. But what of Europe, which faces similar national security data challenges from authoritarian states? Is it moving in a similar direction as the United States?

Data transfer politics in Europe

The EU, unlike the United States, has long had a systematic set of controls on personal data flows from EU territory abroad, articulated in the General Data Protection Regulation (GDPR).16 The GDPR conditions transfers to a foreign jurisdiction on the “adequacy” of its data protection safeguards—or, as the CJEU has refined the concept, their “essential equivalence” to the GDPR regime.

The task of assessing foreign legal systems falls to the European Commission, the EU’s quasi-executive arm. Article 45 of the GDPR instructs it to consider, among other things, “the rule of law, respect for human rights and fundamental freedoms, relevant legislation . . . including concerning . . . the access of public authorities to personal data.”

For much of the past decade, the central drama in the European Commission’s adequacy process has been whether the United States meets this standard. As previously noted, the CJEU invalidated first the Safe Harbor Framework,17 in 2015, and then the Privacy Shield Framework,18 in 2020. The DPF is the third try by the US government and the European Commission to address the CJEU’s fundamental rights concerns. Last year, the European Commission issued yet another adequacy decision that found the DPF adequate.19 The EU understandably has focused its energies on the United States, since vast amounts of Europeans’ personal data travels to cloud service providers’ data centers in the United States and, as Snowden revealed, offered an inviting target for the NSA.

Separately, the European Commission has gradually expanded the range of other countries benefiting from adequacy findings, conferring this status on Japan,20 Korea,21 and the United Kingdom.22 However, the 2019 adequacy decision for the UK continues to be criticized in Brussels. On April 22, the Committee on Civil Liberties, Justice, and Home Affairs (LIBE) of the European Parliament wrote to the UK House of Lords complaining about UK national security bulk data collection practices and the prospect of onward transfer of data from UK territory to jurisdictions not deemed adequate by the EU.23 Next year, the European Commission will formally review the UK’s adequacy status.

List of countries with European Commission Adequacy Decisions

This past January, the European Commission renewed the adequacy decisions for eleven jurisdictions which had long enjoyed them, including, notably, Israel.24 On April 22, a coalition of civil society groups published an open letter to the European Commission questioning the renewal of Israel’s adequacy decision.25 The letter expressed doubts about the rule of law in Israel itself, the specific activities of Israeli intelligence agencies in Gaza during the current hostilities there, and the surveillance powers exercised by those agencies more generally.

Also delicate is the continuing flow of personal data from the European Union to Russia and China. Although neither country has been—or is likely to be—accorded adequacy status, data nonetheless can continue to flow to their territories, as to other third countries, if accompanied by contractual data protection safeguards. The CJEU established in its Schrems jurisprudence that such standard contractual clauses (SCCs) must uphold the same fundamental rights standards as an adequacy decision. The European Data Protection Board (EDPB) subsequently issued detailed guidance on the essential guarantees against national security surveillance that must be in place in order for personal data to be sent to a nonadequate jurisdiction.26

In 2021, the EDPB received an outside expert report27 on several foreign governments’ data access regimes. Its findings were clear. “Chinese law legitimises broad and unrestricted access to personal data by the government,” it concluded. Similarly, with respect to Russia, “The right to privacy is strongly limited when interests of national security are at stake.” The board did not take any further steps to follow up on the report, however.

Shortly after Russia invaded Ukraine, Russia was excluded from the Council of Europe and ceased to be a party to that body’s European Convention on Human Rights.28 The European Data Protection Board issued a statement confirming that data transfers to Russia pursuant to standard contract clauses remained possible, but stressed that safeguards to guard against Russian law enforcement or national security access to data were vital.29

Over two thousand multinational companies continue to do business in Russia, despite the Ukraine war, although a smaller number have shut down, according to a Kyiv academic research institute.30 Data flows between Europe and Russia thus remain substantial, if less than previously. Companies engaged in commerce in Russia also are subject to requirements that data on Russian persons be localized in that country.31 Nonetheless, data flows from Europe to Russia are not subject to categorical exclusions, unlike the new US approach.

The sole reported case of a European data protection authority questioning data flows to Russia involves Yango, a taxi-booking mobile app developed by Yandex, a Russian internet search and information technology company. Yango’s European services are based in the Netherlands and are available in other countries including Finland and Norway. In August 2023, Finland’s data protection authority (DPA) issued an interim decision to suspend use of Yango in its territory because Russia had just adopted a decree giving its state security service (FSB) unrestricted access to commercial taxi databases.32

The interim suspension decision was short-lived. A month later, the Finnish authority, acting in concert with Norwegian and Dutch counterparts, lifted it, on the basis of a clarification that the Russian decree in fact did not apply to use of the Yango app in Finland.33 The Finnish authority further announced that the Dutch authority, in coordination with it and Norway, would issue a final decision in the matter. The Dutch investigation reportedly remains open, but it does not appear to be a high priority matter.

The day after lifting the Yango suspension, the Finnish data protection authority rushed out yet another press release advising that its decision “does not address the legality of data transfers to Russia,” or “mean that Yango data transfers to Russia would be in compliance with the GDPR or that Russia has an adequate level of data protection.”34

One can interpret this final Finnish statement as at least indirectly acknowledging that continued commercial data transfers from an EU jurisdiction to Russia may raise rule of law questions bigger than a single decree allowing its primary security agency, known as the FSB, to access certain taxi databases. Otherwise, the Finnish decision could be criticized for ignoring the forest for the birch trees.

Equally striking is the limited extent of DPA attention to data transfers between EU countries and China. China maintains an extensive national security surveillance regime, and lately has implemented a series of legal measures that can limit outbound data transfers for national security reasons.35 In 2023, the Irish Data Protection Commissioner36 imposed a substantial fine on TikTok for violating the GDPR with respect to children’s privacy, following a decision by the EDPB.37 This inquiry did not examine the question of whether Chinese government surveillance authorities had access to European users’ data, however.

Personal data actively flows between Europe and China in the commercial context, pursuant to SCCs. China reportedly may issue additional guidance to companies on how to respond to requests for data from foreign law enforcement authorities. To date there is no public evidence of European DPAs questioning companies about their safeguard measures for transfers to China.

Indeed, signs recently have emerged from China of greater openness to transfers abroad of data generated in the automotive sector, including from connected cars. Data from connected cars is a mix of nonpersonal and personal data. China recently approved Tesla’s data security safeguards, enabling the company’s previously localized data to leave the country.38 In addition, the government of Germany is trying to ease the passage of data to and from China on behalf of German carmakers. On April 16, several German government ministers, part of a delegation visiting China led by Chancellor Olaf Scholz, issued a joint political statement with Chinese counterparts promising “concrete progress on the topic of reciprocal data transfer—and this in respect of national and EU data law,” with data from connected cars and automated driving in mind.39

Conclusions

The United States and the European Union are, in some respects, converging in their international data transfer laws and policies. In Washington, free data transfers are no longer sacrosanct. In Europe, they never have been. Viewed from Brussels, it appears that the United States is, finally, joining the EU by creating a formal international data transfers regime—albeit constructed in a piecemeal manner and focused on particular countries, rather than through a comprehensive and general data privacy law.

Yet the rationales for limiting data transfers vary considerably from one side of the Atlantic to the other. Washington now focuses on the national security dangers to US citizens and to the US government from certain categories of personal data moving to the territories of “foreign adversaries.” Brussels instead applies more abstract criteria relating to foreign governments’ commitment to the rule of law, human rights, and especially their access to personal data.

A second important difference is that the United States has effectively created a blacklist of countries to which certain categories of data should not flow, whereas the EU’s adequacy process serves as a means of “white listing” countries with comparable data protection frameworks to its own. Concretely, this structural difference means that the United States concentrates on prohibiting certain data transfers to China and Russia, while the EU institutionally has withheld judgment about transfers to those authoritarian jurisdictions. Critics of the EU’s adequacy practice instead have tended to concentrate on the perceived risks of data transfers to liberal democracies with active foreign surveillance establishments: Israel, the United Kingdom, and the United States.

The transatlantic—as well as global—geopolitics of data transfers are in flux. The sudden US shift to viewing certain transfers through a national security lens is unlikely to be strictly mirrored in Europe. In light of the emerging differences in approach, the United States and European governments should consider incorporating the topic of international data transfers into existing political-level conversations. Although data transfer topics have thus far not figured into the formal work of the EU-US Trade and Technology Council (TTC),40 which has met six times since 2022 including most recently in April,41 there is no evident reason why that could not change. If the TTC resumes activity after the US elections, it could become a useful bilateral forum for candid discussion of perceived national security risks in data flows.

Utilizing a broader grouping, such as the data protection and privacy authorities of the Group of Seven (G7), which as a group has been increasingly active in the last few years,42 also could be considered. The deliberations of this G7 group already have touched generally on the matter of government access, and they could readily expand to how its democratic members assess risks from authoritarians in particular. Eventually, such discussions could be expanded beyond the G7 frame into broader multilateral fora. The Organisation of Economic Co-operation and Development (OECD) Declaration on Government Access43 is a good building block.

The days when international data transfers were a topic safely left to privacy lawyers are long gone. It’s time for Washington and Brussels to acknowledge that the geopolitics of data flows has moved from the esoteric to the mainstream, and to grapple with the consequences.

About the author

Related content

The Europe Center promotes leadership, strategies, and analysis to ensure a strong, ambitious, and forward-looking transatlantic relationship.

1    Kenneth Propp, “Transatlantic Digital Trade Protections: From TTIP to ‘Policy Suicide?,’” Lawfare, February 16, 2024, https://www.lawfaremedia.org/article/transatlantic-digital-trade-protections-from-ttip-to-policy-suicide.
2    U.S.-EU Safe Harbor Framework: Guide to Self-Certification, US Department of Commerce, March 2009, https://legacy.trade.gov/publications/pdfs/safeharbor-selfcert2009.pdf.
3    “Chapter 19: Digital Trade,” US-Mexico-Canada Free Trade Agreement, Office of the United States Trade Representative, https://ustr.gov/sites/default/files/files/agreements/FTA/USMCA/Text/19-Digital-Trade.pdf.
4    “Agreement between the United States of America and Japan Concerning Digital Trade,” Office of the United States Trade Representative, https://ustr.gov/sites/default/files/files/agreements/japan/Agreement_between_the_United_States_and_Japan_concerning_Digital_Trade.pdf.
5    Schrems v. Data Protection Commissioner, CASE C-362/14 (Court of Justice of the EU 2015), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62014CJ0362.
6    “President Biden Signs Executive Order to Implement the European Union-U.S. Data Privacy Framework,” Fact Sheet, White House Briefing Room, October 7, 2022, https://www.whitehouse.gov/briefing-room/statements-releases/2022/10/07/fact-sheet-president-biden-signs-executive-order-to-implement-the-european-union-u-s-data-privacy-framework/.
7    European Commission, “Commission Implementing Decision of 10.7.2023 Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Level of Protection of Personal Data under the EU-US Data Privacy Framework,” July 10, 2023, https://commission.europa.eu/system/files/2023-07/Adequacy%20decision%20EU-US%20Data%20Privacy%20Framework_en.pdf.
9    Department of Justice, “National Security Division; Provisions Regarding Access to Americans’ Bulk Sensitive Personal Data and Government-Related Data by Countries of Concern,” Proposed Rule, 28 C.F.R. 202 (2024), https://www.federalregister.gov/d/2024-04594.
10    “President Biden Issues Executive Order to Protect Americans’ Sensitive Personal Data,” Fact Sheet, White House Briefing Room, February 28, 2024, https://www.whitehouse.gov/briefing-room/statements-releases/2024/02/28/fact-sheet-president-biden-issues-sweeping-executive-order-to-protect-americans-sensitive-personal-data/.
11    Peter Swire and Samm Sacks, “Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging,” Lawfare, February 28, 2024, https://www.lawfaremedia.org/article/limiting-data-broker-sales-in-the-name-of-u.s.-national-security-questions-on-substance-and-messaging.
12    “Protecting Americans from Foreign Adversary Controlled Applications Act,” in emergency supplemental appropriations, Pub. L. No. 118–50, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/house-bill/7520/text.
13    Cristiano Lima-Strong, “Biden Signs Bill That Could Ban TikTok, a Strike Years in the Making,” Washington Post, April 24, 2024, https://www.washingtonpost.com/technology/2024/04/23/tiktok-ban-senate-vote-sale-biden/.
14    “Petition for Review of Constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act,” TikTok Inc. and ByteDance Ltd. v. Merrick B. Garland (US Court of Appeals for the District of Columbia Cir. 2024), https://sf16-va.tiktokcdn.com/obj/eden-va2/hkluhazhjeh7jr/AS%20FILED%20TikTok%20Inc.%20and%20ByteDance%20Ltd.%20Petition%20for%20Review%20of%20H.R.%20815%20(2024.05.07)%20(Petition).pdf?x-resource-account=public.
15    Department of Commerce, “Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities,” Proposed Rule, 15 C.F.R. Part 7 (2024), https://www.govinfo.gov/content/pkg/FR-2024-01-29/pdf/2024-01580.pdf.
16    “Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation),” 2016/679, Official Journal of the European Union (2016), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679.
17    Schrems v. Data Protection Commissioner.
18    Data Protection Commissioner v. Facebook Ireland & Schrems, CASE C-311/18 (Court of Justice of the EU 2020), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62018CJ0311.
19    The Commission’s decision has since been challenged before the CJEU. See Latombe v. Commission, No. Case T-553/23 (Court of Justice of the EU 2023), https://curia.europa.eu/juris/document/document.jsf?text=&docid=279601&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1498741.
20    European Commission, “European Commission Adopts Adequacy Decision on Japan, Creating the World’s Largest Area of Safe Data Flows,” Press Release, January 23, 2019, https://commission.europa.eu/document/download/c2689793-a827-4735-bc8d-15b9fd88e444_en?filename=adequacy-japan-factsheet_en_2019.pdf.
21    “Commission Implementing Decision (EU) 2022/254 of 17 December 2021 Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Protection of Personal Data by the Republic of Korea under the Personal Information Protection Act,” Official Journal of the European Union, December 17, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022D0254.
22    “Commission Implementing Decision (EU) 2021/1772 of 28 June 2021 Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Protection of Personal Data by the United Kingdom,” Official Journal of the European Union, June 28, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32021D1772.
23    European Parliament Justice Committee, Correspondence to Rt. Hon. Lord Peter Ricketts regarding Inquiry into Data Adequacy, April 22, 2024, https://content.mlex.com/Attachments/2024-04-25_L75PCWU60ZLVILJ5%2FLIBE%20letter%20-%20published%20EAC.pdf.
24    “Report from the Commission to the European Parliament and the Council on the First Review of the Functioning of the Adequacy Decisions Adopted Pursuant to Article 25(6) of Directive 95/46/EC,” European Commission, January 15, 2024, https://commission.europa.eu/document/download/f62d70a4-39e3-4372-9d49-e59dc0fda3df_en?filename=JUST_template_comingsoon_Report%20on%20the%20first%20review%20of%20the%20functioning.pdf.
25    European Digital Rights et al., Letter to Vice-President of the European Commission Věra Jourová Regarding Concerns following  Reconfirmation of Israel’s Adequacy Status, April 22, 2024, https://edri.org/wp-content/uploads/2024/04/Concerns-Regarding-European-Commissions-Reconfirmation-of-Israels-Adequacy-Status-in-the-Recent-Review-of-Adequacy-Decisions-updated-open-letter-April-2024.pdf.
26    Milieu Consulting and Centre for IT and IP Law of KU Leuven, “Recommendations 02/2020 on the European Essential Guarantees for Surveillance Measures,” Prepared for European Data Protection Board (EDPB), November 10, 2020, https://www.edpb.europa.eu/sites/default/files/files/file1/edpb_recommendations_202002_europeanessentialguaranteessurveillance_en.pdf.
27    Milieu Consulting and Centre for IT and IP Law of KU Leuven, “Government Access to Data in Third Countries,” EDPB, EDPS/2019/02-13, November 2021, https://www.edpb.europa.eu/system/files/2022-01/legalstudy_on_government_access_0.pdf.
28    European Convention on Human Rights, November 4, 1950, https://www.echr.coe.int/documents/d/echr/Convention_ENG.
29    Statement 02/2022 on Data Transfers to the Russian Federation, European Data Protection Board, July 12, 2022,
https://www.edpb.europa.eu/system/files/2022-07/edpb_statement_20220712_transferstorussia_en.pdf.
30    “Stop Doing Business with Russia,” KSE Institute, May 20, 2024, #LeaveRussia: The List of Companies that Stopped or Still Working in Russia (leave-russia.org).
31    “Russian Data Localization Law: Now with Monetary Penalties,” Norton Rose Fulbright Data Protection Report, December 20, 2019, https://www.dataprotectionreport.com/2019/12/russian-data-localization-law-now-with-monetary-penalties/.
32    “Finnish DPA Bans Yango Taxi Service Transfers of Personal Data from Finland to Russia Temporarily,” Office of the Data Protection Ombudsman, August 8, 2023, https://tietosuoja.fi/en/-/finnish-dpa-bans-yango-taxi-service-transfers-of-personal-data-from-finland-to-russia-temporarily.
33    “European Data Protection Authorities Continue to Cooperate on the Supervision of Yango Taxi Service’s Data Transfers–Yango Is Allowed to Continue Operating in Finland until Further Notice,” Office of the Data Protection Ombudsman, September 26, 2023, https://tietosuoja.fi/en/-/european-data-protection-authorities-continue-to-cooperate-on-the-supervision-of-yango-taxi-service-s-data-transfers-yango-is-allowed-to-continue-operating-in-finland-until-further-notice.
34    “The Data Protection Ombudsman’s Decision Does Not Address the Legality of Data Transfers to Russia–the Matter Remains under Investigation,” Office of the Data Protection Ombudsman, September 27, 2023, https://tietosuoja.fi/en/-/the-data-protection-ombudsman-s-decision-does-not-address-the-legality-of-data-transfers-to-russia-the-matter-remains-under-investigation#:~:text=The%20Office%20of%20the%20Data%20Protection%20Ombudsman%27s%20decision,Protection%20Ombudsman%20in%20October%2C%20was%20an%20interim%20decision.
35    Samm Sacks, Yan Lou, and Graham Webster, “Mapping U.S.-China Data De-Risking,” Freeman Spogli Institute for International Studies, Stanford University, February 29, 2024), https://digichina.stanford.edu/wp-content/uploads/2024/03/20240228-dataderisklayout.pdf.
36    “Irish Data Protection Commission Announces €345 Million Fine of TikTok,” Office of the Irish Data Protection Commissioner, September 15, 2023, https://www.dataprotection.ie/en/news-media/press-releases/DPC-announces-345-million-euro-fine-of-TikTok.
37    “Following EDPB Decision, TikTok Ordered to Eliminate Unfair Design Practices Concerning Children,” European Data Protection Board, September 15, 2023, https://www.edpb.europa.eu/news/news/2023/following-edpb-decision-tiktok-ordered-eliminate-unfair-design-practices-concerning_en.
38    “Tesla Reaches Deals in China on Self-Driving Cars,” New York Times, April 29, 2024, https://www.nytimes.com/2024/04/29/business/elon-musk-tesla-china-full-self-driving.html.
39    “Memorandum of Understanding with China,” German Federal Ministry of Digital and Transport, April 16, 2024,
https://bmdv.bund.de/SharedDocs/DE/Pressemitteilungen/2024/021-wissing-deutschland-china-absichtserklaerung-automatisiertes-und-vernetztes-fahren.html.
40    Frances Burwell and Andrea Rodríguez, “The US-EU Trade and Technology Council: Assessing the Record on Data and Technology Issues,” Atlantic Council, April 20, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/us-eu-ttc-record-on-data-technology-issues/.
41    “U.S.-EU Trade and Technology Council (TTC),” US State Department, https://www.state.gov/u-s-eu-trade-and-technology-council-ttc/.
42    “G7 DPAs’ Action Plan,” German Office of the Federal Commissioner for Data Protection and Freedom of Information (BfDI), June 22, 2023, https://www.bfdi.bund.de/SharedDocs/Downloads/EN/G7/2023-Action-Plan.pdf?__blob=publicationFile&v=1.
43    OECD, Declaration on Government Access to Personal Data Held by Private Sector Entities, December 14, 2022, OECD/LEGAL/0487, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0487.

The post Who’s a national security risk? The changing transatlantic geopolitics of data transfers appeared first on Atlantic Council.

]]>
Generational AI: Digital inclusion for aging populations https://www.atlanticcouncil.org/in-depth-research-reports/report/generational-ai-digital-inclusion-for-aging-populations/ Wed, 29 May 2024 14:00:00 +0000 https://www.atlanticcouncil.org/?p=768355 Recommendations on improved inclusion and empowerment of older adults in the age of artificial intelligence.

The post Generational AI: Digital inclusion for aging populations appeared first on Atlantic Council.

]]>

As artificial intelligence (AI) applications become ubiquitous in products and services, it is more important than ever to ensure that they are appropriately aligned for positive use and avoid exacerbating social exclusions for an aging population. Based on discussions with leaders in equity, AI, and aging, and additional research, Generational AI: Digital inclusion for aging populations outlines the unique considerations for older adults within the AI lifecycle, barriers to digital inclusion that older adults experience regarding AI and suggested near- and long-term solutions to advance digital inclusion and mitigate biases against older adults, while supporting practical AI innovation, AI policy, and healthy aging.

Age and its intersection with other dimensions of access—including income, race, language, and gender—dramatically influence an individual’s ability to fully access, benefit from, and contribute to the digital world. With current trends, the population of adults aged sixty and older is expected to surpass 1.4 billion by 2030. Guidance and policies that include and engage older adults in AI development and deployment can foster broader inclusion, as the demographic cuts across various protected statuses and minority identities. Empowering the inclusion of older adults supports them in acting as agents of enhancing more comprehensive inclusion across AI. This change is necessary to ensure responsible and equitable AI for all, especially as the global population rapidly ages.

Digital inclusion for aging populations is possible, with various solutions across the AI lifecycle. Generational AI: Digital inclusion for aging populations identifies the varied use cases of artificial intelligence and older adults, breaking down the main considerations within the design, development, and deployment of AI to support healthy aging and advance equitable AI. These considerations reveal four significant barriers to the digital inclusion of older adults in AI, including:

  • incomplete or biased data on older adults;
  • lack of inclusion of older adults in AI design, development, and post-deployment feedback;
  • limited digital literacy and algorithmic awareness of older adults; and
  • adaptive monitoring and evaluation.

To address each gap, priorities suggested for the multistakeholder field of AI development, deployment, and governance are:

  • forging data-inclusion and transparency standards;
  • empowering user education and literacy for older adults, while ensuring proportional and appropriate modes of consent; and
  • establishing a standard of care through monitoring, evaluation, and impact assessments.

Interoperability, connectivity, literacy, transparency, and inclusion emerge as key themes to help identify the existing gaps within the intersection of AI and aging. These themes are visible across recent policy efforts, and can be made even more impactful by recognizing their intersection with specific communities, like older adults. The recent developments in guidelines, frameworks, and agreements signify a positive shift toward enabling digital inclusion for older populations. These developments are crucial to safeguard against biases inherent in AI-enabled technologies, biases that can significantly impact older adults throughout the various stages of the AI lifecycle. The path forward demands not just the inclusion of older adults in AI, but also their empowerment. As AI products and services become intertwined with daily life, advocating for the rights and needs of the aging population becomes more critical. This approach will pave the way for an equitable landscape where older citizens are not merely passive recipients, but active contributors and beneficiaries of the AI revolution.

About the author

Related content

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

This report and event have been made possible through the generous support of AARP. All views expressed in the report and event may not necessarily reflect the views of AARP. Throughout this process, the author engaged in confidential consultations with many well-known private and public organizations. These discussions were instrumental in shaping the contents of this report. Consequently, to maintain confidentiality, specific affiliations are not disclosed in the report or event.

The post Generational AI: Digital inclusion for aging populations appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Forkast on CBDC Anti-Surveillance Act https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-forkast-on-cbdc-anti-surveillance-act/ Fri, 24 May 2024 14:59:19 +0000 https://www.atlanticcouncil.org/?p=769543 Read the full article here.

The post CBDC Tracker cited by Forkast on CBDC Anti-Surveillance Act appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by Forkast on CBDC Anti-Surveillance Act appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Ledger Insights on CBDC Anti-Surveillance Act https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-ledger-insights-on-cbdc-anti-surveillance-act/ Thu, 23 May 2024 14:53:28 +0000 https://www.atlanticcouncil.org/?p=769539 Read the full article here.

The post CBDC Tracker cited by Ledger Insights on CBDC Anti-Surveillance Act appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by Ledger Insights on CBDC Anti-Surveillance Act appeared first on Atlantic Council.

]]>
Lipsky quoted by Politico on pending US CBDC legislation https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-quoted-by-politico-on-pending-us-cbdc-legislation/ Mon, 20 May 2024 15:23:14 +0000 https://www.atlanticcouncil.org/?p=767967 Read the full newsletter here.

The post Lipsky quoted by Politico on pending US CBDC legislation appeared first on Atlantic Council.

]]>
Read the full newsletter here.

The post Lipsky quoted by Politico on pending US CBDC legislation appeared first on Atlantic Council.

]]>
What to do about ransomware payments https://www.atlanticcouncil.org/blogs/econographics/what-to-do-about-ransomware-payments/ Tue, 14 May 2024 16:57:36 +0000 https://www.atlanticcouncil.org/?p=764759 And why payment bans alone aren’t sufficient.

The post What to do about ransomware payments appeared first on Atlantic Council.

]]>
Ransomware is a destabilizing form of cybercrime with over a million attacks targeting businesses and critical infrastructure every day.  Its status as a national security threat, even above that of other pervasive cybercrime, is driven by a variety of factors like its scale, disruptive nature, and potential destabilizing impact on critical infrastructure and services—as well as the sophistication and innovation in ransomware ecosystems and cybercriminals, who are often Russian actors or proxies.   

The ransomware problem is multi-dimensional. Ransomware is both a cyber and a financial crime, exploiting vulnerabilities not only in the security of digital infrastructure but also in the financial system that have enabled the rise of sophisticated Ransomware-as-a-Service (RaaS) economies.  It is also inherently international, involving transnational crime groups operating in highly distributed networks that are targeting victims, leveraging infrastructure, and laundering proceeds without regard for borders.  As with other asymmetric threats, non-state actors can achieve state-level consequences in disruption of critical infrastructure.

With at least $1 billion reported in ransomware payments in 2021 and with incidents targeting critical infrastructure like hospitals, it is not surprising that the debate on ransomware payments is rising again. Ransomware payments themselves are problematic—they are the primary motive for these criminal acts, serving to fuel and incentivize this ecosystem.  Many are also inherently already banned in that payments to sanctioned actors are prohibited. However, taking a hardline position on ransomware payments is also challenging because of its potential impact on victims, visibility and cooperation, and limited resources.

Cryptocurrency’s role in enabling ransomware’s rise

While ransomware has existed in some form since 1989, the emergence of cryptocurrencies as an easy means for nearly-instantaneous, peer-to-peer, cross-border value transfer contributed to the rise of sophisticated RaaS economies. Cryptocurrencies use largely public, traceable ledgers which can certainly benefit investigations and disruption efforts. However, in practice those disruption efforts are hindered by weaknesses in cryptocurrency ecosystems like lagging international and industry compliance with anti-money laundering and countering financing of terrorism (AML/CFT) standards; growth of increasingly sophisticated methods of obfuscation leveraging mixers, anonymity-enhanced cryptocurrencies, chain-hopping, and intermixing with off-chain and traditional finance methods; and insufficient steps taken to enable real-time, scaled detection and timely interdictionof illicit cryptocurrency proceeds.

Despite remarks by some industry and policymaker advocates, RaaS economies would not work at the same level of scale and success without cryptocurrency, at least in its current state of compliance and exploitable features. Massively scaled ransomware campaigns targeting thousands of devices could not work by asking victims to pay using wire transfers and gift cards pointing to common accounts at regulated banks or widely publishing a physical address. Reliance on traditional finance methods would require major, and likely significantly less profitable, evolution in ransomware models.

The attraction of banning ransomware payments

Any strategy to deal with ransomware needs to have multiple elements, and one key aspect is the approach to ransomware payments. The Biden Administration’s multi-pronged counter-ransomware efforts have driven unprecedented coordination of actions combating ransomware, seen in actions like disrupting the ransomware variant infrastructure and actors, OFAC and FinCEN designations of actors and financial institutions facilitating ransomware, pre-ransomware notifications to affected companies by CISA, and a fifty-member International Counter-Ransomware Initiative.

However, ransomware remains a significant threat and is still affecting critical infrastructure. As policymakers in the administration and in Congress consider every tool available, they will have to consider the effectiveness of the existing policy approach to ransomware payments. Some view payment bans as a necessary action to address the risks ransomware presents to Americans and to critical infrastructure. Set against the backdrop of the moral, national security, and economic imperatives to end this destabilizing activity, bans could be the quickest way to diminish incentives for targeting Americans and the significant amounts of money making it into the hands of criminals.

Additionally, banning ransomware payments promotes other Administration policy objectives like driving a greater focus on cybersecurity and resilience. Poor cyber hygiene, and especially often poor identity and access management, are frequently exploited in ransomware. Removing payments as a potential “escape hatch” is seen by some as a way to leverage market forces to incentivize better cyber hygiene, especially in a space where the government has limited and fragmented regulatory authority.

Those who promote bans typically do not come to that position lightly but instead see them as a last resort to try to deter ransomware.  The reality is that we have not yet been able to sufficiently scale disruption to the extent needed to diminish this threat below a national security concern—driven by insufficient resourcing, limits on information sharing and collaboration, timeliness issues for use of certain authorities, and insufficient international capacity and coordination on combating cyber and crypto crime. When policymakers are in search of high-impact initiatives to reduce the high-impact threat of ransomware, many understandably view bans as attractive.

Challenges with banning ransomware payments

However, taking a hardline position on ransomware payments can also present practical and political challenges:

  • Messaging and optics of punishing victims:A ban inherently places the focus of the policy burden and messaging on the victims, potentially not stopping them from using this tool but instead raising the costs for them to do so. Blaming victims that decide to pay in order to keep their company intact presents moral and political challenges.
  • Limited resources that need to be prioritized against the Bad Guys:  For a ban to be meaningful, it would have to be enforced. Spending enforcement resources against victims to enforce a ban—resources which could have been spent on scaling disruption of the actual perpetrators—could divert critically limited resources from efforts against the ransomware actors.
  • Likelihood that payments will still happen as companies weigh the costs against the benefits:  Many feel that companies, if faced between certain demise and the costs of likely discovery and legal or regulatory action by the government, will still end up making ransomware payments.
  • Disincentivizing reporting and visibility:  A ban would also make companies less likely to report that they have been hit with ransomware, as they will aim to keep all options open as they decide how to proceed. This disincentivizes transparency and cooperation from companies needed to drive effective implementation of the cyber incident and ransomware payment reporting requirements under the Cybersecurity Incident Reporting for Critical Infrastructure Act (CIRCIA) regulations to the Cybersecurity and Infrastructure Security Agency (CISA). Diminished cooperation and transparency could have a devastating effect on investigations and disruption efforts that rely on timely visibility.
  • Asking for permission means the government deciding which companies survive:  Some advocates for bans propose exceptions, such as supplementing a presumptive ban with a licensing or waiver authority, where the government is the arbiter of deciding which companies get to pay or not.  This could enable certain entities like hospitals to use the payment “escape hatch.” However, placing the government in a position to decide which companies live and die is extremely complicated and presents uncomfortable questions.  It is unclear what government body could be capable, or should be endowed with the authority of making that call at all, especially in as timely a fashion as would be required.  Granting approval could also place the government in the uncomfortable position of essentially approving payments to criminals.

Additional policy options that can strike a balance for practical implementation

In light of the large-scale, disruptive threat to critical infrastructure from ransomware, policymakers will have to consider other initiatives along with its ransomware payment approach to strike a balance on enhancing disruption and incentivizing security measures:

  • Resource agencies and prioritize counter-ransomware efforts: Government leadership must properly resource through appropriations and prioritize disruption efforts domestically and internationally as part of a sustained pressure campaign against prioritized ransomware networks.
  • International cyber and cryptocurrency capacity building and pressure campaign: Agencies should prioritize targeted international engagement, such as capacity building where capability lags and diplomatic pressure where political will lags, toward defined priority jurisdictions.  Capacity building and pressure should drive both cybersecurity and cryptocurrency capacity, such as critical infrastructure controls, regulatory, and law enforcement capabilities. Jurisdictional prioritization could account for elements like top nations where RaaS actors and infrastructure operate and where funds are primarily laundered and cashed out.
  • Enhance targeting authorities for use against ransomware actors: Congress should address limitations in existing authorities to enable greater disruptive action against the cyber and financial elements of ransomware networks. For example, Congress could consider fixes to AML/CFT authorities (e.g., 311 and 9714 Bank Secrecy Act designations) for better use against ransomware financial enablers, as well as potential fixes that the defense, national security, and law enforcement communities may need.
  • Ensure government and industry visibility for timely interdiction and disruption of ransomware flows: Congressional, law enforcement, and regulatory agencies should work with industry to ensure critical visibility across key ecosystem participants to enable disruption efforts, such as through: Enforcing reporting requirements of ransomware payments under CIRCIA and US Treasury suspicious activity reporting (SAR) requirements; Mandating through law that entities (such as digital forensic and incident response [DFIR] firms) that negotiate or make payments to ransomware criminals on behalf of victims, including in providing decryption services for victims, must be regulated as financial institutions with SAR reporting requirements; Driving the evolution of standards, like those for cyber indicators, to enable real-time information sharing and ingestion of cryptocurrency illicit finance indicators for responsible ecosystem participants to disrupt illicit finance flows.
  • Prioritize and scale outcome-driven public-private partnerships (PPPs): Policymakers should prioritize, fund, and scale timely efforts for PPPs across key infrastructure and threat analysis actors (e.g., internet service providers [ISPs], managed service providers [MSPs], cyber threat firms, digital forensic and incident response [DFIR] and negotiation firms, cryptocurrency threat firms, cryptocurrency exchanges, and major crypto administrators and network-layer players [e.g., mining pools and validators]) focused on disruption of key ransomware activities and networks.
  • Incentivize and promote better security while making it less attractive to pay ransoms: Policymakers could leverage market and regulatory incentives to drive better security measures adoption to deter ransomware and make it less attractive to pay.  For example, legislation could prohibit cyber insurance reimbursement of ransomware payments. Regulatory action and legislative authority expansion could also drive implementation of high-impact defensive measures against ransomware across critical infrastructure and coordination of international standards on cyber defense.

While attractive for many reasons, banning ransomware payments presents challenges for limiting attacks that demand a broader strategy to address. Only this kind of multi-pronged, whole-of-nation approach will be sufficient to reduce the systemic threats presented by disruptive cybercrime that often targets our most vulnerable.


Carole House is a nonresident senior fellow at the Atlantic Council GeoEconomics Center and the Executive in Residence at Terranet Ventures, Inc. She formerly served as the director for cybersecurity and secure digital innovation for the White House National Security Council.

The post What to do about ransomware payments appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Banking Risk & Regulation on central bank digital currency development https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-banking-risk-regulation-on-central-bank-digital-currency-development/ Mon, 13 May 2024 13:48:24 +0000 https://www.atlanticcouncil.org/?p=765124 Read the full article here.

The post CBDC Tracker cited by Banking Risk & Regulation on central bank digital currency development appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by Banking Risk & Regulation on central bank digital currency development appeared first on Atlantic Council.

]]>
Kumar interviewed on Penta podcast on geopolitics of digital currencies https://www.atlanticcouncil.org/insight-impact/in-the-news/kumar-interviewed-on-penta-podcast-on-geopolitics-of-digital-currencies/ Wed, 08 May 2024 20:33:26 +0000 https://www.atlanticcouncil.org/?p=763808 Listen to the full interview here.

The post Kumar interviewed on Penta podcast on geopolitics of digital currencies appeared first on Atlantic Council.

]]>
Listen to the full interview here.

The post Kumar interviewed on Penta podcast on geopolitics of digital currencies appeared first on Atlantic Council.

]]>
EU AI Act sets the stage for global AI governance: Implications for US companies and policymakers https://www.atlanticcouncil.org/blogs/geotech-cues/eu-ai-act-sets-the-stage-for-global-ai-governance-implications-for-us-companies-and-policymakers/ Mon, 22 Apr 2024 15:51:29 +0000 https://www.atlanticcouncil.org/?p=757285 The European Union (EU) has made a significant step forward in shaping the future of Artificial Intelligence (AI) with the recent approval of the EU Artificial Intelligence Act (EU AI Act) by the European Parliament. This historic legislation, passed by an overwhelming margin of 523-46 on March 13, 2024, creates the world’s first comprehensive framework […]

The post EU AI Act sets the stage for global AI governance: Implications for US companies and policymakers appeared first on Atlantic Council.

]]>
The European Union (EU) has made a significant step forward in shaping the future of Artificial Intelligence (AI) with the recent approval of the EU Artificial Intelligence Act (EU AI Act) by the European Parliament. This historic legislation, passed by an overwhelming margin of 523-46 on March 13, 2024, creates the world’s first comprehensive framework for AI regulation. The EU will now roll out the new regulation in a phased approach through 2027. The bloc took a risk-based approach to AI governance, strictly prohibiting AI practices that are considered unacceptable, with some AI systems classified as high-risk, while encouraging responsible innovation.

The law is expected to enter into force between May and June after approval from the European Council; its impact is expected to extend far beyond the EU’s borders, reshaping the global AI landscape and establishing a new standard for AI governance around the world.

While reviewing the EU AI Act’s requirements for tech companies, it is critical to distinguish between core obligations that will have the greatest impact on AI development and deployment and those that are more peripheral.

Tech companies should prioritize transparency obligations such as disclosing AI system use, clearly indicating AI-generated content, maintaining detailed technical documentation, and reporting serious incidents or malfunctions. These transparency measures are critical for ensuring AI systems’ trustworthiness, accountability, and explainability, which are the Act’s primary goals.

More peripheral requirements exist, such as registering the classified high-risk AI systems in a public EU database or establishing specific compliance assessment procedures. Prioritizing these key obligations allows tech companies to demonstrate their commitment to responsible AI development while also ensuring compliance with the most important aspects of the EU AI Act.

The Act strictly prohibits certain high-risk AI practices that have been deemed unacceptable. These prohibited practices include using subliminal techniques or exploiting vulnerabilities to materially distort human behavior, which has the potential to cause physical or psychological harm, particularly to vulnerable groups such as children or the elderly. The Act prohibits social scoring systems, which rate individuals or groups based on social behavior and interactions. These systems can be harmful, discriminatory, and racially biased.

Certain AI systems are classified as high-risk under the EU AI Act due to their potential to have a significant or severe impact on people and society. These high-risk AI systems include those used in critical infrastructure like transportation, energy, and water supply, where failures endanger citizens’ lives and health. AI systems used in educational or vocational training that affect access to learning and professional development, such as those used to score exams or evaluate candidates, are also considered high-risk. The Act also classifies AI systems used as safety components in products, such as robot-assisted surgery or autonomous vehicles, as high-risk, as well as those used in employment, worker management, and access to self-employment, such as resume-sorting software for recruitment or employee performance monitoring and evaluation systems.

Furthermore, AI systems used in critical private and public services, such as credit scoring or determining access to public benefits, as well as those used in law enforcement, migration, asylum, border control management, and the administration of justice and democratic processes, are classified as high-risk under the EU AI Act.

The Act set stringent requirements for these systems include thorough risk assessments, high-quality datasets, traceability measures, detailed documentation, human oversight, and robustness standards. Companies running afoul of the new rules could face fines of up to 7 percent of global revenue or $38 million, whichever is higher.

The Act classifies all remote biometric identification systems as high-risk and generally prohibits their use in publicly accessible areas for law enforcement purposes, with only a few exceptions. The national security exemption in the Act has raised concerns among civil society and human rights groups because it creates a double standard between private tech companies and government agencies when it comes to AI systems used for national security, potentially allowing government agencies to use these same technologies without the same oversight and accountability.

The EU AI Act has far-reaching implications for US AI companies and policymakers. Companies developing or deploying AI systems in or for the EU market will have to navigate the Act’s strict requirements, which requires significant changes to their AI development and governance practices. This likely would involve investments to improve risk assessment and mitigation processes, ensure the quality and representativeness of training data, implement comprehensive policies and documentation procedures, and establish strong human oversight mechanisms. Besides significant penalties, noncompliance with the Act’s provisions may result in reputational damage which can be significant and long-lasting, resulting in a severe loss of trust and credibility, as well as widespread public backlash, negative media coverage, customer loss, partnerships, investment opportunities, and boycott calls.

The AI Act’s extraterritorial reach means that US companies will be impacted if their AI systems are used by EU customers. This emphasizes the importance for US AI companies to closely monitor and adapt to the changing regulatory landscape in the EU, regardless of their primary market focus.

As Thierry Breton, the European Commissioner for Internal Market, said on X (formerly Twitter), “Europe is NOW a global standard-setter in AI”. The EU AI Act will likely shape AI legislation in other countries by setting a high-risk-based regulation standard for AI governance. Many countries are already considering the EU AI Act as they formulate their AI policies. François-Philippe Champagne, Canada’s Minister of Innovation, Science, and Industry, has stated that the country is closely following the development of the EU AI Act as it works on its own AI legislation. A partnership that is already strong with the boost of their joint strategic digital partnership to address AI challenges by implementing the EU-Canada Digital Partnership.

Similarly, the Japanese government has expressed an interest in aligning its AI governance framework with the EU’s approach as Japan’s ruling party is expected to push for AI legislation within 2024. As more countries find inspiration in the EU AI Act, similar AI penal provisions are likely to become the de facto global standard for AI regulation.

The impact of the EU AI Act on the technology industry is expected to be significant, as companies developing and deploying AI systems will need to devote resources to compliance measures, which raise costs and slow innovation in the short term, especially for startups. However, the Act’s emphasis on responsible AI development and protecting fundamental rights is the region’s first attempt to set up guardrails and increase public trust in AI technologies, with the overall goal of promoting long-term growth and adoption.

Tech giants, like Bill Gates, Elon Musk, Mark Zuckerberg, and Sam Altman have repeatedly asked governments to regulate AI. Sundar Pichai, CEO of Google and Alphabet, stated last year that “AI is too important not to regulate”, and the EU AI Act is an important step toward ensuring that AI is developed and used in a way that benefits society at large.

As other countries look to the EU AI Act as a model for their own legislation, US policymakers should continue engaging in international dialogues to ensure consistent approaches to AI governance globally, helping to ease regulatory fragmentation.

The EU AI Act is a watershed moment in the global AI governance and regulatory landscape, with far-reaching implications for US AI companies and policymakers. As the Act approaches implementation, it is critical for US stakeholders to proactively engage with the changing regulatory environment, adapt their practices to ensure compliance and contribute to the development of responsible AI governance frameworks that balance innovation, competitiveness, and fundamental rights.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post EU AI Act sets the stage for global AI governance: Implications for US companies and policymakers appeared first on Atlantic Council.

]]>
Kumar interviewed by P.I.T. Exchange on reimagining payment systems to rebuild the Palestinian economy https://www.atlanticcouncil.org/insight-impact/in-the-news/kumar-interviewed-by-p-i-t-exchange-on-reimagining-payment-systems-to-rebuild-the-palestinian-economy/ Mon, 22 Apr 2024 14:09:17 +0000 https://www.atlanticcouncil.org/?p=759644 Listen to the full episode here.

The post Kumar interviewed by P.I.T. Exchange on reimagining payment systems to rebuild the Palestinian economy appeared first on Atlantic Council.

]]>
Listen to the full episode here.

The post Kumar interviewed by P.I.T. Exchange on reimagining payment systems to rebuild the Palestinian economy appeared first on Atlantic Council.

]]>
What should digital public infrastructure look like? The G7 and G20 offer contrasting visions. https://www.atlanticcouncil.org/blogs/new-atlanticist/what-should-digital-public-infrastructure-look-like-g7-g20/ Thu, 18 Apr 2024 16:59:38 +0000 https://www.atlanticcouncil.org/?p=757969 The two organizations hold different views of how digital public infrastructure should shape the way markets function.

The post What should digital public infrastructure look like? The G7 and G20 offer contrasting visions. appeared first on Atlantic Council.

]]>
The Group of Seven’s (G7) recent entry into the digital public infrastructure (DPI) debate marks an important shift in the winds of global digital governance. It’s as if the G7, which released its latest Industry, Technology, and Digital Ministerial Declaration in March, wants to send a not-so-subtle message: “We’ve arrived at the DPI party, and we’ve got some thoughts.” And indeed, they do.

For well over a year, DPI discussions have simmered in capitals around the world, drawing in policymakers, diplomats, and development experts alike. As a quick primer on DPI, think of it as the digital equivalent of laying down highways and bridges, but for the virtual world. Just as physical infrastructure drives economic growth, investing in DPI can propel inclusive development at a societal scale. Identity, payments, and data exchange platforms are often cited as the core building blocks of DPI, mirroring the multilayered structure of India’s famous homegrown technology stack.

India has been a trailblazer in deploying DPI at home and globalizing the DPI model. With its Group of Twenty (G20) presidency in 2023, New Delhi championed DPI on the world stage, securing political buy-in for the concept at the highest levels. G20 digital ministers endorsed a framework to govern the design, development, and deployment of DPI last August. And with the unanimous endorsement of G20 leaders, the New Delhi Leaders’ Declaration from last November set the stage for accelerated DPI development in 2024.

However, as the G7’s foray into the DPI arena reveals, the conversation is far from over. There are still different views of what DPI is and ought to be, as well as how it should shape the way markets function. Contrasting the G7 and G20 ministerial texts on DPI reveal three important areas of contention.

Differing visions

First, there’s the question of scope and purpose: Should DPI focus on enhancing public service delivery by governments or seek to restructure markets and delivery of private services? The G7 ministerial text opts for a narrower focus, solely emphasizing DPI’s role in enhancing citizen access to public services delivered by governments, while the G20 imagines a more expansive canvas, where DPI serves as a conduit for “equitable access” to both public and private services. This distinction is not merely academic; it gets to the core of what makes DPI novel and contested.

What does it mean to use DPI to enable equitable access to private services at a societal scale? It’s an evolving concept, but the basic thrust is to leverage the design, deployment, and governance of DPI to “dynamically create and shape new markets” and advance policy goals. For example, with a market-shaping DPI in place, a system operator, often the state itself, can define technical standards for private service providers to ensure interoperability. It can cap market share to give force to its vision of competition policy. It can influence pricing and business strategies through system rules and design features, with the DPI operator playing the role of “market orchestrator.” This is a different paradigm for the digital economy than a traditional market-led model—and to DPI champions, that’s precisely the point.

Second, consider the motivations for deploying DPI: Should these include advancing competition policy objectives? When describing the objectives for deploying DPI, G7 ministers borrow from the G20 framework but make a notable omission: There is no reference to “competition” as a core rationale for DPI. This omission is fully consistent with the G7’s vision of DPI, narrowly focused on public service delivery by governments. For the G7, the task of building competitive markets for the private sector is left to national regulators and antitrust authorities, not DPI builders and operators.

By contrast, the G20’s framework invokes the role of DPI in promoting competition twice, and that’s no accident. All governments want competitive digital ecosystems, but some see overexposure to Western tech giants as compounding the risks posed by pure market concentration alone. In this context, deployment of DPI serves two related purposes: disrupting entrenched incumbent positions while increasing state capacity to offer core digital services that reduce reliance on Western tech firms.

Third, what about design principles? Should DPI require open-source tech and open standards? The G7 ministerial statement omits all specific references to open source or open standards; instead, it vigorously defends the role of the private sector in building interoperable elements of DPI, presumably using open or proprietary technologies. In comparison, the G20’s DPI framework pointedly and repeatedly emphasizes the need for open software, open standards, and open application programming interfaces (APIs). Ultimately, however, the G20 statement hedges on this question, stating that DPI can be built on “open source and/or proprietary solutions, as well as a combination of both.”

Nevertheless, speak to DPI theorists shaping G20 and Global South thinking on DPI, and it’s clear they see “openness” as a defining principle of well-built DPI, citing the role open architectures, open-source tech, and open APIs play in enabling transparency, scale, interoperability, and reduced risk of vendor lock-in. Still, fuzziness around the term “openness” and its application in some of the largest DPI systems deployed to date suggests there is much left to unpack.

How will the G7 engage with DPI going forward?

It’s clear that the G7’s vision for DPI differs from the G20’s in at least three important areas. The question that remains is what comes next: How will the G7 (or its member states) assert their point of view?

The G7’s ministerial text offers some early clues. It acknowledges that G7 members will have “different approaches to the development of digital solutions, including DPI” and notes that the upcoming G7 Compendium on Digital Government Services will collect “relevant examples of digital public services from G7 members.” The compendium would also summarize factors that have led to “successful deployment and use of digital government services, such as national strategies, investment, public procurement practices, governance frameworks, and partnerships.”

Developing the compendium is a good start. But looking ahead, G7 members will need to weigh in this year at fast-moving multilateral discussions during Brazil’s G20 presidency, for instance, or within the United Nations’ multiple DPI workstreams. In each case, G7 perspectives on corporate governance, privacy, market disciplines, and regulatory best practices will strengthen discussions and outcomes, just as the G20’s and the Global South’s focus on inclusion, competition, and openness help ground the conversation in public interest concerns. The push and pull of the different visions for DPI could yield a better outcome for all—that’s the optimistic case.

A pessimist may insist that the gaps between the G7 and G20 views on DPI are tough to bridge. And it’s true, there is a real difference between a DPI scoped for public service delivery and one intended to shape the structure of digital markets and digital services offered by the private sector. If the latter view of DPI holds, G7 member states may need to find new ways to constructively participate in global DPI discussions. This could involve promoting individual layers of the DPI stack, as the G7 is already doing with digital ID governance, and emphasizing the need for sustainable public-private partnerships for DPI build-out. 

Ultimately, time will tell how the G7 chooses to lean into the global DPI debate. The only certainty is that the G7’s active engagement isn’t optional anymore—it’s essential.


Anand Raghuraman is a nonresident senior fellow at the Atlantic Council’s South Asia Center, where he leads research initiatives on US-India digital cooperation and publishes expert commentary on Indian data governance and digital policy initiatives. He is also director of global public policy at Mastercard.

Mastercard, through its Policy Center for the Digital Economy, is a financial supporter of an Atlantic Council project on digital public infrastructure.

The views expressed in this article are the author’s and do not necessarily reflect those of Mastercard.

The post What should digital public infrastructure look like? The G7 and G20 offer contrasting visions. appeared first on Atlantic Council.

]]>
“Retaliation and Resilience: China’s Economic Statecraft in a Taiwan Crisis” report cited by Brookings on eurodollars and stablecoins https://www.atlanticcouncil.org/insight-impact/in-the-news/retaliation-and-resilience-chinas-economic-statecraft-in-a-taiwan-crisis-report-cited-by-brookings-on-eurodollars-and-stablecoins/ Wed, 17 Apr 2024 18:14:30 +0000 https://www.atlanticcouncil.org/?p=761420 Read the full paper here.

The post “Retaliation and Resilience: China’s Economic Statecraft in a Taiwan Crisis” report cited by Brookings on eurodollars and stablecoins appeared first on Atlantic Council.

]]>
Read the full paper here.

The post “Retaliation and Resilience: China’s Economic Statecraft in a Taiwan Crisis” report cited by Brookings on eurodollars and stablecoins appeared first on Atlantic Council.

]]>
Iranians sacrificed their lives to share videos of regime violence. Now there’s an online archive for the world to see.  https://www.atlanticcouncil.org/blogs/iransource/mahsa-amini-access-now-iranian-archive-human-rights/ Fri, 12 Apr 2024 14:16:32 +0000 https://www.atlanticcouncil.org/?p=756453 The Iranian Archive holds more than one million videos to ensure that the Women, Life, Freedom uprising led by women would not be erased.

The post Iranians sacrificed their lives to share videos of regime violence. Now there’s an online archive for the world to see.  appeared first on Atlantic Council.

]]>
My radio alarm clock woke me up on June 12, 2009 to the news that millions of Iranians had taken to the streets to protest the fraudulent outcome of the presidential election. I listened momentarily, rolled over, and hit the snooze button.

When I got to work at my information technology (IT) job that day, I read news about the protests that became known as the Green Movement, prompted by the sham reelection of hardliner President Mahmoud Ahmadinejad. For weeks, Iranians poured into the streets of major cities wearing green—the color of reformist candidate Mir Hossein Mousavi—and chanting and holding signs that read, “Where is my vote?”

At the time, the Atlantic called it “the first major world event broadcast almost entirely via social media.” Caught by surprise, the clerical establishment scrambled to censor the internet by blocking websites or deliberately slowing connection speed. It was a historical moment and showed that the internet could be a medium of hope for dramatic social and political change.

As a young Iranian-American man struggling with his half-Iranian identity after growing up with the anti-Iranian hate crimes and discrimination of the 1980s, the protests were a lightning bolt to my heart. People who looked like me were not chanting, “Death to America,” but instead calling for democratic values that I held dear—ones that were in my Iranian immigrant father’s heart as he fled to the United States after the 1979 revolution with my pregnant American mother. The Green Movement changed how I saw myself, and I felt a deep call to get involved in the Iranian people’s quest for freedom. That is why in 2009 I helped co-found Access Now, one of the largest human rights organizations dedicated to defending digital rights. 

SIGN UP FOR THIS WEEK IN THE MIDEAST NEWSLETTER

During the Green Movement, I quickly learned how to develop and distribute proxy servers to allow Iranians uncensored internet access to tell their stories. This work gathered a group of young activists to support the protestors with whatever tools and expertise they needed, and required shifts of eighteen hours per day. The servers were used by tens of thousands of Iranians daily, and websites that were defended from being taken down by the regime—reaching five million Iranians per day, or more than 25 percent of the entire country’s internet users—were the main news sources for Western media outlets covering the ongoing protests. Tools were developed to defend hundreds of key journalists and activists inside Iran sharing news and video. These elements became the foundations of Access Now.

But one project stayed close to my heart: video archiving. The clerical establishment was trying to erase protest videos, while activists were removing videos for fear of persecution. History was being erased as quickly as it was being made. In response, I downloaded thousands of videos filled with violence, hope, tears, and joy, which were converted to mobile formats and redistributed across the country, where they were downloaded by more than three million Iranians.

In 2022, when Mahsa Amini was murdered by the so-called morality police for “violating” mandatory hijab rules, I gathered a small group of colleagues and friends and together we downloaded thousands of videos to ensure that the Women, Life, Freedom uprising led by women would not be erased. This became the Azadi (freedom) Archive, created in September 2022 and later joined by an international archival coalition led by the Atlantic Council’s Strategic Litigation Project and Mnemonic, with the Promise Institute for Human Rights at the University of California, Los Angeles (UCLA) Law, the University of California, Berkeley’s Human Rights CenterAmnesty International’s Digital Verification Corps, and the Iran Human Rights Documentation Center.

The newly renamed Iranian Archive now holds more than one million videos and contributed to the investigation carried out by the United Nations Independent International Fact-Finding Mission on the Islamic Republic of Iran (FFMI), which unveiled a report in March detailing how the Islamic Republic committed crimes against humanity and other serious human rights violations against Woman, Life, Freedom protesters. On March 21, the United Nations Human Rights Council voted to renew the mandate of the FFMI, giving it more time to strengthen its significant findings and ensure the effective preservation of evidence for use in legal proceedings, including the significant photo and video shared throughout the protests.

The world’s tragedies deserve justice and to be remembered. Iranians who risked their lives to share images of protest violence did so with the hope that information would get out and the world would respond to the Islamic Republic’s atrocious human rights violations. Even now, thousands of videos are uploaded by brave activists from around the globe every day—but without a systematic and funded approach to preservation, the opportunities for accountability, remembering, research, and memorialization are lost.

The global coalition of universities, nonprofits, and companies committed to archiving and preserving videos is making that vision a reality by working together through the nonprofit Iranian Archive to preemptively capture, store, catalog, and tag digital content in a way that can be used by researchers, lawyers, and human rights defenders in the future.   

The birth of this global archival coalition signals to Iranian activists and citizen journalists that their sacrifice to share information with the world will not be erased online. It also upholds the best that the internet has to offer, despite increasing “enshittification,” and makes a meaningful contribution to social justice and human rights online and offline. 

It’s been fifteen years since the international community realized the importance of digital activism due to the 2009 Green Movement. It must not wait another fifteen years before it develops a robust and comprehensive approach to archiving and preserving video to support justice and human rights movements across the world.

Cameran Ashraf is co-founder of international human rights and technology organization AccessNow, human rights scholar, and NGO human rights leader.

The post Iranians sacrificed their lives to share videos of regime violence. Now there’s an online archive for the world to see.  appeared first on Atlantic Council.

]]>
Kumar and Chhangani cited by Ledger Insights on global interoperability standards for central bank digital currency https://www.atlanticcouncil.org/insight-impact/in-the-news/kumar-and-chhangani-cited-by-ledger-insights-on-global-interoperability-standards-for-central-bank-digital-currency/ Thu, 11 Apr 2024 14:16:09 +0000 https://www.atlanticcouncil.org/?p=756455 Read the full article here.

The post Kumar and Chhangani cited by Ledger Insights on global interoperability standards for central bank digital currency appeared first on Atlantic Council.

]]>
Read the full article here.

The post Kumar and Chhangani cited by Ledger Insights on global interoperability standards for central bank digital currency appeared first on Atlantic Council.

]]>
Kumar cited by Vanguard Think Tank on China development of central bank digital currency https://www.atlanticcouncil.org/insight-impact/in-the-news/kumar-cited-by-vanguard-think-tank-on-china-development-of-central-bank-digital-currency/ Wed, 10 Apr 2024 15:08:07 +0000 https://www.atlanticcouncil.org/?p=756114 Read the full article here.

The post Kumar cited by Vanguard Think Tank on China development of central bank digital currency appeared first on Atlantic Council.

]]>
Read the full article here.

The post Kumar cited by Vanguard Think Tank on China development of central bank digital currency appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Congressman Stephen Lynch (D-MA) in Politico https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-congressman-stephen-lynch-d-ma-in-politico/ Fri, 29 Mar 2024 19:46:24 +0000 https://www.atlanticcouncil.org/?p=752978 Read the full article here.

The post CBDC Tracker cited by Congressman Stephen Lynch (D-MA) in Politico appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by Congressman Stephen Lynch (D-MA) in Politico appeared first on Atlantic Council.

]]>
Groen writes in The Cipher Brief about cryptocurrency and national security https://www.atlanticcouncil.org/insight-impact/in-the-news/groen-digital-battlefield-criminal-terrorism-cipher-brief/ Tue, 26 Mar 2024 16:50:00 +0000 https://www.atlanticcouncil.org/?p=752397 Michael Groen writes about combatting illicit actors and nation states with blockchain intelligence on the digital battlefield.

The post Groen writes in The Cipher Brief about cryptocurrency and national security appeared first on Atlantic Council.

]]>

On March 26, Forward Defense Nonresident Senior Fellow Michael Groen coauthored an article for The Cipher Brief titled “Preparing for a Digital Battlefield: National Security and Cryptocurrency” about combatting illicit actors and nation states with blockchain intelligence. He emphasized that sanctions enforcement and counterterrorism success must include digital tools and techniques to investigate, seize, and disrupt transactions in evolving domains to protect national security.

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The post Groen writes in The Cipher Brief about cryptocurrency and national security appeared first on Atlantic Council.

]]>
CBDC Tracker cited by SWIFT on central bank digital currency collaborative experiments https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-swift-on-central-bank-digital-currency-collaborative-experiments/ Mon, 25 Mar 2024 14:56:38 +0000 https://www.atlanticcouncil.org/?p=752313 Read the full article here.

The post CBDC Tracker cited by SWIFT on central bank digital currency collaborative experiments appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by SWIFT on central bank digital currency collaborative experiments appeared first on Atlantic Council.

]]>
Bauerle Danzman quoted in The Kansas City Star on US TikTok bill https://www.atlanticcouncil.org/insight-impact/in-the-news/bauerle-danzman-quoted-on-the-kansas-city-star-on-us-tiktok-bill/ Fri, 15 Mar 2024 17:52:09 +0000 https://www.atlanticcouncil.org/?p=749498 Read the full article here.

The post Bauerle Danzman quoted in The Kansas City Star on US TikTok bill appeared first on Atlantic Council.

]]>
Read the full article here.

The post Bauerle Danzman quoted in The Kansas City Star on US TikTok bill appeared first on Atlantic Council.

]]>
CBDC Tracker update announced in Semafor Flagship newsletter https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-update-announced-in-semafor-flagship-newsletter/ Thu, 14 Mar 2024 15:57:41 +0000 https://www.atlanticcouncil.org/?p=748698 Read the newsletter here.

The post CBDC Tracker update announced in Semafor Flagship newsletter appeared first on Atlantic Council.

]]>
Read the newsletter here.

The post CBDC Tracker update announced in Semafor Flagship newsletter appeared first on Atlantic Council.

]]>
CBDC Tracker cited and Lipsky quoted by China Daily on global development of central bank digital currency https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-and-lipsky-quoted-by-china-daily-on-global-development-of-central-bank-digital-currency/ Thu, 14 Mar 2024 15:50:46 +0000 https://www.atlanticcouncil.org/?p=748690 Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by China Daily on global development of central bank digital currency appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by China Daily on global development of central bank digital currency appeared first on Atlantic Council.

]]>
Kumar interviewed by The Hill on the exploration of central bank digital currency https://www.atlanticcouncil.org/insight-impact/in-the-news/kumar-interviewed-by-the-hill-on-the-exploration-of-central-bank-digital-currency/ Thu, 14 Mar 2024 15:47:30 +0000 https://www.atlanticcouncil.org/?p=748672 Watch the full interview here.

The post Kumar interviewed by The Hill on the exploration of central bank digital currency appeared first on Atlantic Council.

]]>
Watch the full interview here.

The post Kumar interviewed by The Hill on the exploration of central bank digital currency appeared first on Atlantic Council.

]]>
CBDC Tracker cited and Lipsky quoted by Coin Edition on divergence over central bank digital currency development https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-and-lipsky-quoted-by-coin-edition-on-divergence-over-central-bank-digital-currency-development/ Thu, 14 Mar 2024 15:32:41 +0000 https://www.atlanticcouncil.org/?p=748660 Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by Coin Edition on divergence over central bank digital currency development appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by Coin Edition on divergence over central bank digital currency development appeared first on Atlantic Council.

]]>
CBDC Tracker cited and Lipsky quoted by The Economic Times on global progress on central bank digital currency https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-and-lipsky-quoted-by-the-economic-times-on-global-progress-on-central-bank-digital-currency/ Thu, 14 Mar 2024 15:27:24 +0000 https://www.atlanticcouncil.org/?p=748653 Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by The Economic Times on global progress on central bank digital currency appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by The Economic Times on global progress on central bank digital currency appeared first on Atlantic Council.

]]>
CBDC Tracker cited and Lipsky quoted by Politico on US central bank digital currency development https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-and-lipsky-quoted-by-politico-on-us-central-bank-digital-currency-development/ Thu, 14 Mar 2024 15:19:41 +0000 https://www.atlanticcouncil.org/?p=748646 Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by Politico on US central bank digital currency development appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by Politico on US central bank digital currency development appeared first on Atlantic Council.

]]>
CBDC Tracker cited and Lipsky quoted by Al Arabiya on global adoption of central bank digital currency https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-and-lipsky-quoted-by-al-arabiya-on-global-adoption-of-central-bank-digital-currency/ Thu, 14 Mar 2024 15:07:55 +0000 https://www.atlanticcouncil.org/?p=748642 Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by Al Arabiya on global adoption of central bank digital currency appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by Al Arabiya on global adoption of central bank digital currency appeared first on Atlantic Council.

]]>
CBDC Tracker cited and Kumar quoted by Axios Crypto newsletter on central bank digital currency development in US and G7 https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-and-kumar-quoted-by-axios-crypto-newsletter-on-central-bank-digital-currency-development-in-us-and-g7/ Thu, 14 Mar 2024 14:55:37 +0000 https://www.atlanticcouncil.org/?p=748637 Read the full article here.

The post CBDC Tracker cited and Kumar quoted by Axios Crypto newsletter on central bank digital currency development in US and G7 appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited and Kumar quoted by Axios Crypto newsletter on central bank digital currency development in US and G7 appeared first on Atlantic Council.

]]>
CBDC Tracker cited and Lipsky and Kumar quoted by Politico Morning Money newsletter on growing number of countries exploring a central bank digital currency https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-and-lipsky-and-kumar-quoted-by-politico-morning-money-newsletter-on-growing-number-of-countries-exploring-a-central-bank-digital-currency/ Thu, 14 Mar 2024 14:26:56 +0000 https://www.atlanticcouncil.org/?p=748602 Read the full article here.

The post CBDC Tracker cited and Lipsky and Kumar quoted by Politico Morning Money newsletter on growing number of countries exploring a central bank digital currency appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited and Lipsky and Kumar quoted by Politico Morning Money newsletter on growing number of countries exploring a central bank digital currency appeared first on Atlantic Council.

]]>
CBDC Tracker cited and Lipsky quoted by Reuters on global progress on central bank digital currency https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-and-lipsky-quoted-by-reuters-on-global-progress-on-central-bank-digital-currency/ Thu, 14 Mar 2024 14:17:09 +0000 https://www.atlanticcouncil.org/?p=748585 Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by Reuters on global progress on central bank digital currency appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited and Lipsky quoted by Reuters on global progress on central bank digital currency appeared first on Atlantic Council.

]]>
Will the US crack down on TikTok? Six questions (and expert answers) about the bill in Congress. https://www.atlanticcouncil.org/blogs/new-atlanticist/will-the-us-crack-down-on-tiktok-six-questions-and-expert-answers-about-the-bill-in-congress/ Wed, 13 Mar 2024 23:42:14 +0000 https://www.atlanticcouncil.org/?p=747735 The US House has just passed a bill to force the Chinese company ByteDance to either divest from TikTok or face a ban in the United States.

The post Will the US crack down on TikTok? Six questions (and expert answers) about the bill in Congress. appeared first on Atlantic Council.

]]>
The clock is ticking. On Wednesday, the US House overwhelmingly passed a bill to force the Chinese company ByteDance to divest from TikTok, or else the wildly popular social media app would be banned in the United States. Many lawmakers say the app is a national security threat, but the bill faces an uncertain path in the Senate. Below, our experts address six burning questions about this bill and TikTok at large.

1. What kind of risks does TikTok pose to US national security?

Chinese company ByteDance’s ownership of TikTok poses two specific risks to US national security. One has to do with concerns that the Chinese Communist Party (CCP) could use its influence over the Chinese owners to use TikTok’s algorithm for propaganda purposes. Addressing this security concern is tricky due to legal protections for freedom of expression. The other risk, and the one addressed through the current House legislation, has to do with the ability of the CCP to use Chinese ownership of TikTok to access the massive amount of data that the app collects on its users. This could include data on everything from viewing tastes, to real-time location, to information stored on users’ phones outside of the app, including contact lists and keystrokes that can reveal, for example, passwords and bank activity.

Sarah Bauerle Danzman is a resident senior fellow with the Economic Statecraft Initiative in the Atlantic Council’s GeoEconomics Center.

This debate is not over free speech or access to social media: The question is fundamentally one of whether the United States can or should force a divestment of a social media company from a parent company (in this case ByteDance) if the company can be compelled to act under the direction of the CCP. We have to ask: Does the CCP have the intent or ability to compel data to serve its interests? There is an obvious answer here. We know that China has already collected massive amounts of sensitive data from Americans through efforts such as the Office of Personnel Management hack in 2015. Recent unclassified reports, including from the Office of the Director of National Intelligence, show the skill and intent of China to use personal data for influence. And the CCP has the legal structure in place to compel companies such as ByteDance to comply and cooperate with CCP requests.

Meg Reiss is a nonresident senior fellow at the Scowcroft Strategy Initiative of the Atlantic Council’s Scowcroft Center for Strategy and Security.

2. Are those risks unique to TikTok?

TikTok is not an unproblematic platform, and there are real and significant user risks that could pose dangers to safety and security, especially for certain populations. However, focusing on TikTok ignores broader vulnerabilities in the US information ecosystem that put Americans at risk. An outright ban of TikTok as currently proposed—particularly absent clearer standards for all platforms—would not meaningfully address these broader risks and would in fact potentially undermine US interests in a much more profound way.

As our recent report outlines in detail, a ban is unlikely to achieve the intended effect of meaningfully curbing China’s ability to gather sensitive data on Americans or to conduct influence operations that harm US interests. It also may contribute to a global curbing of the free flow of data that is essential to US tech firms’ ability to innovate and maintain a competitive edge.

Kenton Thibaut is a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab.

Some have argued that TikTok, while on the aggressive end of the personal data collection spectrum, collects similar data to what other social media companies collect. However, the US government would counter with two points: First, TikTok has a history of skirting data privacy rules, such as those limiting data collection on children and those that prevent the collection of phone-specific identifiers called MAC numbers, and therefore the company cannot be trusted to handle sensitive personal data in accordance with the law. And second, unlike other popular apps, TikTok is ultimately beholden to Chinese regulations. This includes the 2017 Chinese National Intelligence Law that requires Chinese companies to hand over a broad range of information to the Chinese government if asked. Because China’s legal system is far more opaque than the United States’, it is unclear if the US government or its citizens would even know if the Chinese government ever asked for this data from TikTok. While TikTok’s management has denied supplying the Chinese government with such data, insider reports have uncovered Chinese employees gaining access to US user data. In other words, the US government has little reason to trust that ByteDance is keeping US user data safe from the CCP.

—Sarah Bauerle Danzman

3. What does the House bill actually do?

There are two important, related bills. The one that passed the House today is the Protecting Americans from Foreign Adversary Controlled Applications Act, which forces divestment. It is not an outright ban, and it is intended to address the real risk of ByteDance—thus TikTok—falling under the jurisdiction of China’s 2017 National Intelligence Law, which compels Chinese companies to cooperate with the CCP’s requests. However, divestment doesn’t completely solve for the additional potential risks of the CCP using TikTok in a unique or systemic way for data collection, algorithmic tampering (e.g. what topics surface or don’t surface to users), or information operations (e.g. an influence campaign unique to TikTok as opposed to on other platforms as well). Second, the Protecting Americans’ Data from Foreign Adversaries Act, which cleared a House committee last week, more directly addresses a broader risk of blocking the Chinese government’s access to the type of data that TikTok and many other social media platforms collect on the open market. The former without the latter is an incomplete approach to protecting Americans’ data from the CCP—and even the two combined falls short of a federal data privacy standard.

Graham Brookie is vice president and senior director of the Digital Forensic Research Lab.

There is no question China seeks to influence the American public and harvests large amounts of data on American citizens. As our recent report illuminates however, the Chinese state’s path to these goals depends very little on TikTok.

Today’s actions in the House underscore the disjointed nature of the US approach to governing technology. Rather than focus on TikTok specifically, it would be both legally and geopolitically wiser to pass legislation that sets standards for everyone, and not just one company. That could mean setting standards for what actions or behavior by any social media company would be unacceptable (for example on the use of algorithms or collection and selling of data). Or Congress could focus on prohibiting companies that are owned by states proven to have conducted hostile actions toward US digital infrastructure to operate in the United States. That would certainly include TikTok (and many other companies). This bill takes a halfway approach, both tying itself explicitly to TikTok owner ByteDance and hinting that it could apply to “other social media companies.”

Rose Jackson is the director of the Democracy and Tech Initiative at the Digital Forensic Research Lab.

The recently passed House bill, if it were to become law, would create a pathway to force the divestment of Chinese ownership in TikTok or ban the app from app stores and web hosting sites. Unlike previous attempts by the Trump administration to ban the app outright or force a divestment through the Committee on Foreign Investment in the United States, the Protecting Americans from Foreign Adversary Controlled Applications Act would not just affect TikTok. Instead, the legislation would create a process through which the US government could designate social media apps that are considered to be under the control of foreign adversaries as national security threats. Once identified as threats, the companies would have 180 days to divest from the foreign ownership or be subject to a ban.

—Sarah Bauerle Danzman

4. What would be some of the global ripple effects of a TikTok ban?

The United States has always opposed efforts by authoritarian nations seeking to build “great firewalls” around themselves. This model of “cyber sovereignty” sees the open, interoperable, and free internet as a threat, which is why countries like China already have a well-funded strategy to leverage global governance platforms to drive the development of a less open and more authoritarian-friendly version. A TikTok ban would ironically benefit authoritarian governments as they seek to center state-level action (over multi-stakeholder processes) in internet governance. TikTok should not lead the United States to abandon its longstanding commitment to the values of a free, open, secure, and interoperable internet.

A ban could generate more problems than it would solve. What the United States should consider instead is passing federal privacy laws and transparency standards that apply to all companies. This would be the single most impactful way to address broader system vulnerabilities, protect US values and commitments, and address the unique risks related to TikTok’s Chinese ownership, while avoiding the potential significant downsides of a ban. 

Kenton Thibaut

5. What do you make of TikTok’s response, particularly in pushing its users to flood Capitol Hill with calls?

Members of Congress were rightfully alarmed by TikTok’s use of its platform to send push notifications encouraging users to call their representatives. However, Uber and Lyft used this exact same tactic in California when trying to defeat legislation that would have required it to provide benefits to its drivers. If we try to solve “TikTok” and not the broader issue TikTok is illuminating, we will keep coming back to these same issues over and over again. 

—Rose Jackson

6. How is China viewing this debate?

The CCP has a tendency to throw a lot of spaghetti at the wall in an attempt to make its arguments, in this case that the divestment of TikTok from its Chinese parent company ByteDance is unnecessary. When the CCP has justified the internment of Uyghurs, it has thrown out everything from defending its repression based on terrorist beliefs across the population to claiming that it was just helping with social integration and developing work programs. The CCP has already made claims that the divestment would cause investors to lose faith in the US market and that it shows a fundamental weakness and abuse of national security. Expect many different versions of these arguments and more. But all the anticipated pushback will be focused on diverting the public argument away from the fundamental concern: The Chinese government can, under law, force a Chinese company to share information. 

—Meg Reiss

The post Will the US crack down on TikTok? Six questions (and expert answers) about the bill in Congress. appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Cointelegraph on US central bank digital currency development https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-cointelegraph-on-us-central-bank-digital-currency-development/ Fri, 08 Mar 2024 17:00:33 +0000 https://www.atlanticcouncil.org/?p=747306 Read the full article here.

The post CBDC Tracker cited by Cointelegraph on US central bank digital currency development appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by Cointelegraph on US central bank digital currency development appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Forbes on global development of central bank digital currencies https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-forbes-on-global-development-of-central-bank-digital-currencies/ Fri, 08 Mar 2024 14:30:17 +0000 https://www.atlanticcouncil.org/?p=747293 Read the full article here.

The post CBDC Tracker cited by Forbes on global development of central bank digital currencies appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by Forbes on global development of central bank digital currencies appeared first on Atlantic Council.

]]>
Chavkin interviewed by Paxos on digital assets innovation and regulation https://www.atlanticcouncil.org/insight-impact/in-the-news/chavkin-interviewed-by-paxos-on-digital-assets-innovation-and-regulation/ Thu, 07 Mar 2024 17:14:14 +0000 https://www.atlanticcouncil.org/?p=746125 Read the full interview here.

The post Chavkin interviewed by Paxos on digital assets innovation and regulation appeared first on Atlantic Council.

]]>
Read the full interview here.

The post Chavkin interviewed by Paxos on digital assets innovation and regulation appeared first on Atlantic Council.

]]>
Policy on a spectrum: Guiding technology regulation through value tradeoffs https://www.atlanticcouncil.org/in-depth-research-reports/report/guiding-technology-regulation-through-value-tradeoffs/ Thu, 07 Mar 2024 14:00:00 +0000 https://www.atlanticcouncil.org/?p=727043 In a digital era where the constant flux of technology challenges our norms, "Policy on a Spectrum: Guiding Technology Regulation Through Value Tradeoffs" is a clarion call for discernment and action.

The post Policy on a spectrum: Guiding technology regulation through value tradeoffs appeared first on Atlantic Council.

]]>

In a digital era where the constant flux of technology challenges our norms, “Policy on a Spectrum: Guiding Technology Regulation Through Value Tradeoffs” is a clarion call for discernment and action. Innovation and Data Ethics in the age of Generative AI is a delicate balance. Using values as a common foundation, the authors distill complex interactions between technology and society into a comprehensive framework. Concrete examples, such as tradeoffs between data protection and public security, illustrate the spectrum of policy decisions that routinely bombard our digital world.

The narrative draws on expert practitioner insights to dissect these trade-offs, providing leaders with a compass to navigate this terrain. For instance, different approaches to notice & consent in data collection are discussed, along with a spectrum of related concepts. The analysis extends to how emerging technologies, such as Generative AI are governed, debating the tradeoff between automation and human agency. The need for policies that foster ethical practices in research, innovation, and deployment is highlighted throughout.

“Policy on a Spectrum” is an invitation to proactive engagement, urging stakeholders and policymakers to shape a technology regulatory landscape that’s both equitable and accountable, measured and risk averse, and bipartisan and values based. It pushes beyond mere regulation, advocating for a future where technology amplifies our collective welfare, not undermines it. The authors aim to influence the current policy discourse while laying the groundwork for a future where tech governance is as dynamic as the technologies it seeks to manage. It’s a handbook for those ready to embrace the nuanced challenges of our times and an insightful resource for those who will architect the digital world of tomorrow and are unwilling to have an innovation tradeoff.

About the authors

Lara Pesce Ares is a Responsible Innovation consultant at Accenture focusing on the design, development and implementation of emerging technologies in a responsible way.

Related content

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Policy on a spectrum: Guiding technology regulation through value tradeoffs appeared first on Atlantic Council.

]]>
Experts react: What Biden’s new executive order about Americans’ sensitive data really does https://www.atlanticcouncil.org/blogs/new-atlanticist/experts-react/experts-react-what-bidens-new-executive-order-about-americans-sensitive-data-really-does/ Thu, 29 Feb 2024 19:05:56 +0000 https://www.atlanticcouncil.org/?p=742382 US President Joe Biden just issued an executive order restricting the large-scale transfer of personal data to “countries of concern.” Atlantic Council experts share their insights.

The post Experts react: What Biden’s new executive order about Americans’ sensitive data really does appeared first on Atlantic Council.

]]>
It’s a personal matter. On Wednesday, US President Joe Biden issued an executive order restricting the large-scale transfer of personal data to “countries of concern.” The order is intended to prevent genomic, health, and geolocation data, among other types of sensitive information, from being sold in bulk to countries such as China, which could use it to track or blackmail individuals. Can Biden’s directive stop sensitive data from slipping into the wrong hands? And what are the implications for privacy and cybersecurity more broadly? Below, Atlantic Council experts share their personal insights.

Click to jump to an expert analysis:

Rose Jackson: The absence of a federal US data protection law threatens national security

Kenton Thibaut: The focus on data brokers targets a key vulnerability in the US information ecosystem

Graham Brookie: An essential, baseline step for shoring up US data security

Sarah Bauerle Danzman: It will be essential to sort out how new rules fit in with the current regulatory structure

Justin Sherman: Congress must get involved to tame data brokerage over the long term

Maia Hamin: A welcome step, but beware of data brokers exploiting backdoors and work-arounds


The absence of a federal US data protection law threatens national security

The United States desperately needs a federal privacy or data protection law; the absence of one threatens our national interest and national security. While we wait for Congress to take the issue seriously, the Biden administration seems to be looking to leverage its executive authorities to take action where it can. Wednesday’s executive order should be understood in that context. The order takes particular aim at what are called data brokers—a lucrative market most Americans have likely never heard of. These companies quietly buy up troves of information collected through social media and credit card companies, consumer loyalty programs, mobile phone providers, health tech services, and more, then sell the combined files to whoever wants it. That means that currently, the Chinese intelligence service doesn’t need an app like TikTok to collect data on US citizens; they can just buy it from a US company. So while this executive order won’t address all of the issues related to this unregulated and highly extractive market, it will close an obvious and glaring national security gap by barring the sale of such data to foreign adversaries.

Another significant piece of the executive order is its focus on genomic data as a particularly risky category. Genomic data are all but banned from provision to adversarial nations in any form. While this is a good step, the administration does not have the authority to ban the sale of genomic data to non-adversarial nations or domestically. This means there is a high likelihood that absent congressional or other action, the market for US genomic data will only grow. This underscores an uncomfortable reality when it comes to tech policy; there is no separating the foreign and domestic. Markets grow where there is incentive, and our continued failure in the United States to meaningfully grapple with how we want tech to be governed means we are choosing not to have input on the direction our own world-changing innovations will take.

Rose Jackson is the director of the Democracy + Tech Initiative at the Atlantic Council’s Digital Forensic Research Lab. She previously served as the chief of staff to the Bureau of Democracy, Human Rights, and Labor at the US State Department.


The focus on data brokers targets a key vulnerability in the US information ecosystem

While further details are still being developed (including rightsizing thresholds for what constitutes “bulk data”), the executive order is a welcome development for those concerned about data security. The focus on data brokers—as opposed to targeting a single app, like TikTok—targets a key vulnerability in the US information ecosystem. Data brokers compile detailed profiles of individuals—including real-time location data—from various sources, including social media, credit card companies, and public records. This creates vulnerabilities for espionage and exploitation by foreign adversaries. That means while the national security community has raised concerns over the Chinese government’s ability to use TikTok to access data on Americans, it pales in comparison to what China already accesses through hacking and legal purchases via US data brokers. 

Data security threats extend beyond individual apps to include data brokers and the broader lack of regulation in the tech industry. To protect privacy and national security, stronger regulations and transparency measures are needed, and the United States should pass comprehensive federal privacy legislation. However, in the interim, the administration has done what it can with this executive order to help stem the tide of Americans’ sensitive personal data flowing abroad. 

Kenton Thibaut is a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab).


An essential, baseline step for shoring up US data security

The executive order preventing the sale of bulk data to adversarial countries may sound technical, bureaucratic, and even opaque. However, it is one of the most essential baseline steps the United States needs to take in shoring up security in an era in which technology is at the forefront of geopolitical competition. Enormous amounts of information about Americans is bought and sold on the open market every single day. This measure is intended to make it harder for specific adversarial countries to buy billions of data points about citizens legally.

As many other more challenging technical issues arise—such as how to govern the rapid development of artificial intelligence—a standard for data privacy for every single person in the United States is sorely needed. Data privacy is the foundation for establishing a rights-respecting and rights-protecting approach in an era of both rapid technological change and geopolitical competition. The executive order is an important step that can be built on. The policy is a threat-based approach to securing citizens’ data and information from the worst foreign actors. Congress can strengthen this approach and address the limitations of an executive order by passing legislation for a strong federal data privacy standard that not only protects Americans’ data from foreign adversaries, but also provides Americans protection in general.

Graham Brookie is the vice president for technology programs and strategy, as well as senior director, of the Atlantic Council’s Digital Forensic Research Lab. He previously served in various roles over four years at the White House National Security Council.


It will be essential to sort out how new rules fit in with the current regulatory structure

With its latest executive order and related advance notice of proposed rulemaking, the Biden administration is trying to find transparent, clearly defined legal channels to address a specific set of national security challenges. These are the challenges that arise from the unmitigated and largely untracked commercial world of bulk data transfer to entities owned by, controlled by, or subject to the jurisdiction or direction of potential adversaries. The administration’s proposed rules demonstrate its seriousness of purpose in attempting to craft rules that are narrow in scope and application, while also anticipating and countering potential circumvention techniques of untrusted actors. They are also complicated. For example, they seek to stand up a new licensing line of effort with financial sanctions and export licenses based on a model from the Department of Justice and on the experiences of the Office of Foreign Assets Control and the Bureau of Industry and Security. This complexity raises questions about the feasibility and costs of compliance and enforcement.

Some parts of the proposed rules overlap significantly with existing regulatory structure, and especially with the Committee on Foreign Investment in the United States (CFIUS). In particular, the regulation will cover investments by covered persons and entities in US businesses that collect covered data, a class of transactions typically handled by the CFIUS. It will be important for the government to clearly articulate how the new rules and the different government entities involved will relate to each other, with a goal toward reducing rather than exacerbating regulatory complexity that leads to higher compliance costs and confusion. The proposed rules suggest that the CFIUS might take precedence, but the CFIUS is a costly and time-intensive case-by-case review that is supposed to be a tool of last resort. It would be more efficient and probably more effective to first apply investment restrictions based on these new rules and preserve case-by-case CFIUS review only in situations in which the new data security prohibitions and restrictions do not adequately address national security risks associated with a particular transaction. Doing so would reduce pressure on the CFIUS’s ever-growing caseload and would provide businesses with bright lines rather than black boxes.

Sarah Bauerle Danzman is a resident senior fellow with the GeoEconomics Center’s Economic Statecraft Initiative. She is also an associate professor of international studies at Indiana University Bloomington where she specializes in the political economy of international investment and finance.


Congress must get involved to tame data brokerage over the long term

Data brokerage is a multi-billion-dollar industry comprising thousands of companies. Foreign governments such as China and Russia obviously have many ways to get sensitive data on Americans, from hacking to tapping into advertising networks—and one of those vulnerabilities lies in the data brokerage industry.

Data brokers collect and sell data on virtually every single person in the United States, and that includes data related to government employees, security clearance-holding contractors, and active-duty military personnel. My team at Duke’s Sanford School of Public Policy published a detailed study in November 2023, where we purchased sensitive, individually identified, and nonpublic information such as health conditions, financial information, and data on religion and children about active-duty US military servicemembers from US data brokers—with little to no vetting, and for as cheap as twelve cents per servicemember. It would be easy for the Chinese or Russian governments to set up a website and purchase data on select Americans to blackmail individuals or run intelligence operations. With some datasets available for cents on the dollar per person, or incredibly granular datasets available for much more, it may be considerably cheaper than the cost of espionage for foreign governments to simply tap into the unregulated data brokerage ecosystem and buy data.

Of course, an executive order isn’t going to fix everything. At the end of the day, the fact that data brokers gather and sell Americans’ data at scale, without their knowledge, often without controls, is a congressional problem—and has signified a major congressional failure to act. Federal and state legislation is what will ultimately best tackle the privacy, safety, civil rights, and national security risks from the data brokerage industry. But that doesn’t mean the executive branch shouldn’t act in the meantime. If the executive branch can introduce even a few additional regulations for data brokers to better vet their customers or to stop selling certain kinds of data to certain foreign actors, that’s an important improvement from the status quo.

Over the coming months, important challenges for the executive branch will be defining terms such as “data broker,” ensuring that covered data brokers are required to properly implement “know your customer” requirements, and figuring out ways to manage regulatory compliance in light of the size and operating speed of the data brokerage industry.

Justin Sherman is a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative and founder and CEO of Global Cyber Strategies.


A welcome step, but beware of data brokers exploiting backdoors and work-arounds

The commercial data broker ecosystem monetizes and sells Americans’ most sensitive data, often piggybacking off of invasive ad-tracking infrastructure to vacuum up and auction off specific information about Americans, such as their location history or mental health conditions. This executive order is a useful step toward making it more difficult for specific adversary countries to purchase that data, and it makes clear sense from a national security perspective.

However, while this market remains (otherwise) largely unregulated and flourishing in the United States, in the absence of a comprehensive privacy law or other restrictions on data brokering, Americans’ privacy will continue to suffer. Leaving this market intact domestically runs the risk of opening up potential backdoors and work-arounds to the limitations in the executive order. It also—perhaps not coincidentally—leaves the door open for the US government itself to continue purchasing and using commercial data in its own intelligence programs. 

That’s all to say, cracking down on data brokers is always welcome, so it’s great to see this order (and recent action from the Federal Trade Commission as well). Next, let’s challenge Congress and the executive to push it further.

Maia Hamin is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab.

The post Experts react: What Biden’s new executive order about Americans’ sensitive data really does appeared first on Atlantic Council.

]]>
Gulf region markets offer huge growth potential for Ukraine’s IT sector https://www.atlanticcouncil.org/blogs/ukrainealert/gulf-region-markets-offer-huge-growth-potential-for-ukraines-it-sector/ Thu, 29 Feb 2024 17:23:03 +0000 https://www.atlanticcouncil.org/?p=742689 The Gulf region offers opportunities for Ukraine's war-ravaged but vibrant tech sector to reduce its dependence on Western markets and return to growth, writes Anatoly Motkin.

The post Gulf region markets offer huge growth potential for Ukraine’s IT sector appeared first on Atlantic Council.

]]>
Ukraine’s IT industry was the only sector of the country’s economy to grow during the first year of Russia’s full-scale invasion in 2022. Despite the unprecedented shocks of the Russian invasion, Ukrainian IT exports reached a record $7 billion by the end of the year, while local startups continued to attract investors. However, preliminary data for 2023 shows that this trend has now run its course. During the first nine months of 2023, Ukrainian IT exports fell by 9%. Annual figures are expected to confirm an 8% decline that would return the IT industry to its prewar status.

Ukrainian industry experts point to problems related to wartime conditions, including restrictions on military-age males leaving the country and challenges in meeting customer deadlines. They also acknowledge that other factors are contributing to the current market downturn, including an international IT recession that is reducing demand in the dominant US tech sector. With 42 percent of Ukrainian IT exports currently going to the United States, this downward trend is bad news for Ukraine.

For the past few decades, the Ukrainian IT sector has expanded in line with growing demand for IT services in the West. Ukrainian IT companies have focused on exporting engineering services to the US, EU, and other Western countries, while also seeking to attract investors from the same locations. This model worked well as long as demand for Ukrainian services continued to rise in the West, but the current recession in the Western tech industry along with stagnation in the US venture capital market are pushing Ukraine to look for new markets. The most obvious growth area is the Gulf region.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

The combined IT market of the Gulf countries is currently estimated at $108 billion. This is less than one-tenth of the US market, but it is rapidly expanding and can accommodate new players. Securing a mere one to two percent of this Gulf region IT market would allow Ukraine’s IT industry to keep growing.

What are Gulf countries looking for? Most of all, they seek trusted suppliers of high-end IT solutions at a reasonable price, which is exactly where Ukraine excels. The Ukrainian IT industry has many offices of American IT engineering companies delivering US quality products and services for competitively low prices. Ukrainian IT companies also operate in a similar time zone to Gulf region customers, making them even more appealing.

Ukraine’s flourishing startup scene is a particularly attractive feature for Gulf region businesses. While more than 500 venture capital funds operating in the Gulf region raised about $2.5 billion in capital in 2023, the local startup pipeline is not yet sufficient to absorb these funds. An attractive Ukrainian startup could be the ideal fit for smart Gulf-based investors.

In order to increase Ukrainian penetration of Gulf markets, Ukrainian tech companies must open regional branches. Given the prominent role played by government agencies in the development of the IT sector in the Gulf region, the Ukrainian government should be looking to prioritize the digital component in bilateral dialogue, including high-level visits by officials from the Ministry of Digital Transformation.

International financial institutions and development agencies can also play a role in this process. This would help provide much needed support for the Ukrainian economy while also boosting the Western presence at a time when China is actively increasing its footprint in the Gulf region tech sector. According to the Nature Index, China’s share of Saudi Arabia’s total international research collaborations grew to 28.3 percent by 2023, exceeding that of the United States (26 percent), Germany (10.1 percent), and the United Kingdom (10.3 percent).

Assisting the Ukrainian technology industry in penetrating Gulf region markets could form part of a much bigger digital Marshall Plan for Ukraine. The foundations of Ukraine’s postwar economy are currently being laid; it is already clear that the tech sector will be one of the key engines of the country’s future economic growth. This new model will make Ukraine more transparent, accountable, and attractive for investors. It will also make the Ukrainian economy more sustainable and integrated into the global knowledge-driven supply chain.

Despite the many challenges created by Russia’s ongoing invasion, now is the right time support the evolution of the Ukrainian tech sector. This includes backing efforts to enter new foreign markets. Helping Ukrainian IT companies expand their presence in the Gulf region should be one of the priorities of the country’s digital growth strategy. This can provide a boost for Ukraine’s wartime economy and also position the IT industry for sustained growth in the years to come.

Anatoly Motkin is president of StrategEast, a non-profit organization developing the knowledge-driven economy in the Eurasian region with offices in the United States, Ukraine, Georgia, Kazakhstan, and Kyrgyzstan.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Follow us on social media
and support our work

The post Gulf region markets offer huge growth potential for Ukraine’s IT sector appeared first on Atlantic Council.

]]>
CBDC Tracker and Lipsky quoted in Politico on CBDC legislation https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-and-lipsky-quoted-in-politico-on-cbdc-legislation/ Wed, 28 Feb 2024 18:17:05 +0000 https://www.atlanticcouncil.org/?p=741863 Read the full article here.

The post CBDC Tracker and Lipsky quoted in Politico on CBDC legislation appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker and Lipsky quoted in Politico on CBDC legislation appeared first on Atlantic Council.

]]>
CBDC Tracker cited in The Banker on political debate over central bank digital currency development https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-in-the-banker-on-political-debate-over-central-bank-digital-currency-development/ Thu, 22 Feb 2024 05:00:58 +0000 https://www.atlanticcouncil.org/?p=740432 Read the full article here.

The post CBDC Tracker cited in The Banker on political debate over central bank digital currency development appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited in The Banker on political debate over central bank digital currency development appeared first on Atlantic Council.

]]>
Lipsky and Kumar featured in Yahoo Finance on Fed CBDC development https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-and-kumar-featured-in-yahoo-finance-on-fed-cbdc-development/ Fri, 16 Feb 2024 21:34:07 +0000 https://www.atlanticcouncil.org/?p=737684 Read the full piece here.

The post Lipsky and Kumar featured in Yahoo Finance on Fed CBDC development appeared first on Atlantic Council.

]]>
Read the full piece here.

The post Lipsky and Kumar featured in Yahoo Finance on Fed CBDC development appeared first on Atlantic Council.

]]>
Carole House testifies to the House Financial Service Committee on approaches to combat crypto crime and illicit activity https://www.atlanticcouncil.org/commentary/testimony/carole-house-testifies-to-the-house-financial-service-committee-crypto-crime/ Fri, 16 Feb 2024 19:43:25 +0000 https://www.atlanticcouncil.org/?p=736602 Non Resident Senior Fellow Carole House provided testimony to the House Financial Services Committee on crypto crime on February 15, 2024.

The post Carole House testifies to the House Financial Service Committee on approaches to combat crypto crime and illicit activity appeared first on Atlantic Council.

]]>
On February 15, Non Resident Senior Fellow Carole House testified to the US House Committee on Financial Services. Below are her prepared remarks for the committee on crypto crime and illicit activity.

Thank you Chairman Hill, Ranking Member Lynch, and distinguished members of the subcommittee, for your leadership in holding this hearing and the honor of the invitation to testify. My name is Carole House. I am a Nonresident Senior Fellow at the Atlantic Council, an Executive in Residence at Terranet Ventures, and Chair the CFTC’s Technology Advisory Committee (TAC).

Most of my career has been at the intersection of national security, emerging technologies, and finance. I have served in the US Army, the Senate Homeland Security and Governmental Affairs Committee (HSGAC), and FinCEN. I also served two tours in the White House, including most recently at National Security Council (NSC) where I supported initiatives like the US Counter-Ransomware Strategy and President Biden’s Executive Order on Ensuring Responsible Development of Digital Assets.

Innovation is core to the US economy, but we have learned that responsible innovation does not mean unchecked technological advancement without regard to implications for society, security, and democratic values. Cryptocurrency remains a serious risk for illicit finance. It is not inevitable for the sector to always be that way, but the unique aggregate features of crypto compounded by the existing state of compliance domestically and abroad have cultivated an environment ripe for exploitation by rogue nations and fraudsters. There are mitigating measures like transparency that are helping to combat illicit finance, but critical and timely steps are needed to make best use of them. The status quo has not yielded benefits for consumers, the evolving DeFi ecosystem, or US leadership.

Core to cryptocurrency’s appeal to both licit and illicit users is its ability to transfer significant value peer-to-peer, pseudonymously, immutably (or irreversibly), with global reach, with increased speed and cost efficiencies.1

The absence or reduction of financial intermediaries and central points of control in more highly decentralized cryptocurrency systems also challenges clear lines of responsibility and accountability that are crucial in managing risks in high value, high risk sectors like finance.

A risk-mitigating feature of cryptocurrencies is their often public and transparent nature.2 However, aside from concerns about consumer privacy, there are limitations to this transparency–ranging from off-chain data to the use of obfuscation methods like mixing, chain-hopping, and encryption.3 4 The same extent of transparency we see today is also not inevitably a part of these systems, given growing experimentation to integrate privacy enhancing technologies (PETs).

With these features and the lagging state of non-compliance in mind, cryptocurrency remains attractive to a full spectrum of illicit actors.

  • It is a favored tool of cybercriminals and the predominant means of payment in sophisticated ransomware-as-a-service (RaaS) economies targeting critical infrastructure like energy and hospitals, extorting at least one billion dollars last year alone.
  • The Biden Administration also reported that North Korea funds about half of its proliferation regime via cybercrime and cryptocurrency theft.5
  • Despicable pig butchering and investment fraud schemes continue to harm consumers, with over nine billion dollars reported in fraud in 2022.6
  • Cryptocurrency is one in a suite of tools used in many forms of transnational organized crime, including drug and human trafficking, as well as terrorism financing and sanctions evasion and offset.7 8 9 10

There are also national security concerns implicated by diminished leadership in driving responsible financial and technology experimentation when adversarial nations have for years been pursuing alternative financial systems and developing building blocks for the next phase of the internet.

In light of the threat, policymakers must consider what to do about it. The CFTC TAC recent report on DeFi outlined opportunities for approaching accountability, such as building in compliance features at different layers across the DeFi tech stack, as well as considering ongoing infrastructure provider-focused regulations developing at DHS and Commerce. Despite some calls for equivalent privacy and neutrality treatment of DeFi as we give the internet, I encourage consideration that financial versus information activity carries different levels of risk. “Neutrality” is not an acceptable position to take toward illicit finance.

I’ll offer some opportunities to consider for combating crypto crime:

  • Enhance regulatory and enforcement agencies’ capability to take action against egregious violators of our illicit finance framework, such as through prioritized funding for agencies and honing disruption authorities like FinCEN’s 9714 and 311 designations.
  • Next, promote international action on combating illicit cryptocurrency activity in priority jurisdictions through diplomacy and capacity building.
  • Third, enhance outcome-oriented public-private partnerships for information sharing and R&D.
  • Finally, promote development of secure, trustworthy, and interoperable digital identity infrastructure.

Thank you again for the opportunity to speak on this issue, I look forward to your questions.

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.

1    Security consultant Alison Jimenez described these features as ability to move funds “far, fast, in large amounts, irreversibly, anonymously, and to a third party.” See Alison Jimenez, written testimony to House Financial Services Committee Subcommittee on Digital Assets, Financial Technology, and Inclusion, Hearing on Crypto Crime in Context- Breaking Down the Illicit Activity in Digital Assets (November 15, 2023).
2    See United States District Court for the District of Columbia, Case No. 20-sw-314 (ZMF), In the Matter of the Search of One Address in Washington, D.C., Under Rule 41 (January 6, 2021).
3    For example, off-chain data could include internal cryptocurrency exchange activity or transactions conducted off-chain over the Bitcoin Lightning Network via a Lightning channel.
4    See FinCEN, Advisory FIN-2019-A003, “Advisory on Illicit Activity Involving Convertible Virtual Currency” (May 9, 2019).
6    See TRM Labs, “Illicit Crypto Ecosystem Report” (June 2023).
7    See Elliptic, Elliptic Research, “Chinese Businesses Fueling the Fentanyl Epidemic Receive Tens of Millions in Crypto Payments” (May 23, 2023).
8    See FBI, Public Service Announcement, I-052223-PSA, “The FBI Warns of False Job Advertisements Linked to Labor Trafficking at Scam Compounds” (May 22, 2023).
9    See Elliptic, 2023 Report, “Sanctions Compliance in Cryptocurrencies” (2023).
10    See USDOJ, USAO, Eastern District of New York, “Five Russian Nationals and Two Oil Traders Charged in Global Sanctions Evasion and Money Laundering Scheme” (October 19, 2022).

The post Carole House testifies to the House Financial Service Committee on approaches to combat crypto crime and illicit activity appeared first on Atlantic Council.

]]>
CBDC Tracker cited in Coingeek on Philippines development of central bank digital currency https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-in-coingeek-on-philippines-development-of-central-bank-digital-currency/ Wed, 14 Feb 2024 17:02:46 +0000 https://www.atlanticcouncil.org/?p=737317 Read the full piece here.

The post CBDC Tracker cited in Coingeek on Philippines development of central bank digital currency appeared first on Atlantic Council.

]]>
Read the full piece here.

The post CBDC Tracker cited in Coingeek on Philippines development of central bank digital currency appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Brookings on foreign policy impact of dollar-based payments systems https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-brookings-on-foreign-policy-impact-of-payments-systems/ Tue, 13 Feb 2024 16:19:27 +0000 https://www.atlanticcouncil.org/?p=737301 Read the full piece here.

The post CBDC Tracker cited by Brookings on foreign policy impact of dollar-based payments systems appeared first on Atlantic Council.

]]>
Read the full piece here.

The post CBDC Tracker cited by Brookings on foreign policy impact of dollar-based payments systems appeared first on Atlantic Council.

]]>
Is the EU missing another tech wave with AI? https://www.atlanticcouncil.org/blogs/econographics/is-the-eu-missing-another-tech-wave-with-ai/ Thu, 08 Feb 2024 16:35:31 +0000 https://www.atlanticcouncil.org/?p=734503 Policymakers in the United States and European Union view generative AI as one of the technological “commanding heights” of the coming decade. Are EU startups falling behind on funding?

The post Is the EU missing another tech wave with AI? appeared first on Atlantic Council.

]]>
Ten billion dollars. That’s how much the United States’ largest generative artificial intelligence (AI) firm, OpenAI, raised in private funding rounds between 2022-2023. While the makers of ChatGPT are in a league of their own, it’s clear US-based firms have raised substantially more capital than their European counterparts:

Missing from this estimation is China. Yet while there is little data on Chinese private funding for generative AI, a comparison of broader AI-related venture capital deals places it in third, after the US and EU.

Policymakers in the United States and European Union increasingly view generative AI, which can produce text, images, or other data from user-generated prompts, as one of the technological “commanding heights” of the coming decade. The increase in productivity from widespread adoption could add up to $4.4 trillion to the global economy annually, according to a McKinsey estimate—a figure comparable to the entire GDP of Germany. However, the technology has also raised new concerns over privacy, election misinformation, and cybersecurity. Likewise, the ability to produce advanced foundation models (large, general-purpose models which underlie generative AI) has implications for national security, where such models may be used for military training, cybersecurity and autonomous or biological weapons systems.

Like earlier waves of startups, many small tech firms rely on venture capital (VC) to scale their operations. Transatlantic divergence in this respect is stark. Last year, over 90 percent of venture capital dedicated to generative AI was concentrated in the United States. In similar fashion, nearly twice as many generative AI startups were founded in the United States as in the European Union and UK combined.

More broadly, these figures reflect a smaller European VC market. The US has just 23 startups per VC firm, and an average of $4.9 million for each. The typical EU entrepreneur has less than one-fourth that amount available–and 198 other startups per VC firm. Yet in tech, the gulf widens. When it comes to private funding for these new commanding heights, the Rockies reach far higher than the Alps.

To some, this disparity in funding can be attributed to differences in regulation. In December the European Parliament reached agreement on the final text of the EU AI Act, a sweeping set of regulations on general AI models intended to encourage transparency and protect copyright holders. Earlier versions drew opposition from France, Germany, and Italy, along with warnings from the US, that the legislation would stifle the growth of continental competitors in AI. (While the United States has not passed comparable legislation, the Biden administration released an executive order on AI in October.)

Others may recall earlier tech waves (think Amazon, Alphabet, and Apple, and the rest of the “Magnificent Seven”) in which the European Union produced few startups but many standards, including on privacy. In the optimistic view, Europe’s policies, such as the General Data Protection Regulation (GDPR), Digital Markets Act (DMA), and Digital Services Act (DSA), have helped shape standards of foreign tech giants—a so-called “Brussels Effect.” In the pessimistic view, they have engendered long-running disputes and created serious compliance (and competitiveness) challenges for the continent’s youngest firms.

Today however, the new EU and US approaches on AI bear significant similarities. To be sure, the US executive order on AI lacks strong enforcement mechanisms included in the AI Act, the latter of which includes substantial fines (7 percent of global turnover) for non-compliant firms. Nevertheless, both adopt a similar focus on “risk-based” approaches, transparency requirements, and testing. More broadly, the United States and the EU have coordinated their approaches through the G7 Hiroshima AI Process, UK AI Safety Summit, Administrative Arrangement on Artificial Intelligence, and the Trade and Technology Council (TTC).

One contrast with previous tech waves is that the European Union is increasingly pairing injunctions with incentives. Shortly after the European Commission reached agreement on the AI Act, it announced new measures to assist AI startups, including dedicated access to supercomputers (“AI Factories”) and other financial support expected to raise $4 billion across the sector by 2027.

While increasingly aligned on regulation, such measures aim to overcome the more enduring disparity in private funding between the two jurisdictions. For now, while Europe is trying to catch-up in the innovation race when it comes to the newest chatbots, the United States still looks more, well, generative.


Ryan Murphy is a program assistant at the Atlantic Council’s GeoEconomics Center. He works within the Center’s Economic Statecraft Initiative, supporting events and research on economic security, sanctions, and illicit finance.

This post is adapted from the GeoEconomics Center’s weekly Guide to the Global Economy newsletter. If you are interested in getting the newsletter, email SBusch@atlanticcouncil.org

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.

The post Is the EU missing another tech wave with AI? appeared first on Atlantic Council.

]]>
Gina Raimondo and Margrethe Vestager on transatlantic approaches to trade, AI, and China https://www.atlanticcouncil.org/commentary/transcript/gina-raimondo-and-margrethe-vestager-on-transatlantic-approaches-to-trade-ai-and-china/ Tue, 30 Jan 2024 22:44:43 +0000 https://www.atlanticcouncil.org/?p=730712 The US commerce secretary and European Commission executive vice president discussed transatlantic trade and technology cooperation at the Atlantic Council.

The post Gina Raimondo and Margrethe Vestager on transatlantic approaches to trade, AI, and China appeared first on Atlantic Council.

]]>
Watch the event

Event transcript

Uncorrected transcript: Check against delivery

Speakers

Gina Raimondo
US Secretary of Commerce

Margrethe Vestager
Executive Vice President of the European Commission for a Europe Fit for the Digital Age

Moderator

Frederick Kempe
President and CEO
Atlantic Council


FREDERICK KEMPE: Good afternoon. Thank you for joining us. I’m Fred Kempe, president and CEO of the Atlantic Council.

I’m delighted to welcome two of the Atlantic Council’s favorite global leaders, US Secretary of Commerce Gina Raimondo and American—and European Commission—

MARGRETHE VESTAGER: I have no problem with that.

FREDERICK KEMPE: Just don’t—be careful what you ask for. European Commission Executive Vice President Margrethe Vestager.

So, Madam Secretary, Madam Commissioner, welcome to the Council.

MARGRETHE VESTAGER: Thank you.

FREDERICK KEMPE: You’ve been here before. You’ve just come from the fifth meeting of the US-EU Trade and Technology Council, where you both serve as cochairs. And we look forward to hearing your insights on that. Since the TTC first met in 2021, it’s been a key element in the renewed and revitalized US-EU relationship, critical tool for cooperation together, and at a time when we’re facing a set of daunting challenges together.

Addressing these issues is what the Atlantic Council has been committed to long before the TTC was the TTC. The deepening economic relationship between the US and Europe was a core part of our founding mission since we were created in 1961. Our Europe Center, run by Jörn Fleck, leads our engagement for this set of talks through our TTC Track-2 Dialogue series. Our Geoeconomics Center leads the charge on cutting-edge work on friend-shoring, semiconductor supply chains. Our Digital Forensic Research Lab has forty staff, seventeen countries. Cutting-edge research on online ecosystems. Our Global China Hub is working on China. So across the Atlantic Council, working on our sixteen programs and centers, ranging from global energy to Africa, we basically drive transatlantic cooperation and relations across all of these realms. And so this is a very special meeting for us.

But let me get straight into the questions. And start with just a reminder to our virtual audience—we always have a large virtual audience. And you see here we’ve got standing room only here. That they should use the hashtag #ACFrontPage on social media and online. But let’s get started talking about TTC. So, Madam Secretary, Madam Commissioner, this was the fifth TTC ministerial. It’s been eight months since you met. A lot happened in that period of time in artificial intelligence, in EVs, in semiconductors. So talk to us first quickly about what issues and developments were at the top of each of your agendas for this edition, and how was the tone? How was the—how were the issues different than in previous? And maybe, Madam Commissioner, visiting from Brussels, maybe you could go first.

MARGRETHE VESTAGER: Well, first and foremost, thank you very much for hosting us. Congratulations on all you do. I think it’s really important. And the long history of the Atlantic Council, I think, also shows why it’s really, really worth investing in this relationship, as we have been doing now for the last three years. 

And we’ve learned a lot in these three years. And what we’re pushing for is, of course, to show that cooperation is useful. It must be felt to make a difference for people, for businesses, for our stakeholders. So, as you say, we’ve been discussing semiconductors. We had, I think, a very intense roundtable this morning—Gina, Thierry Breton, and myself—to sort of figure out, well, what is it with the—? They will be important for decades still to come. How to make sure that we cooperate and prevent shortages or being captured by Chinese production on those things. And I think it’s a very good illustration of how we try to make economic security a real thing. 

We also discussed one of the things that are very close to heart, which is artificial intelligence. It was one of the first things on our common agenda, to agree on having a risk-based approach. So not to regulate technology, but to focus on the risk—on the use cases where risks are involved. And I think what we have done with pushing for the G7 code of conduct, the executive order that we have here in the US, and the European AI Act that will come into force in two years’ time, we have a very much aligned approach. That will serve the business community, but maybe even more important it will serve us as citizens because the aim is to make artificial intelligence secure, trustworthy, and make sure that it serves people.

And I think those sort of very tangible achievements is what has characterized the cooperation. And this is also why I think if you were a fly on the wall, you’d see that there is also a very safe and trustworthy atmosphere in the room when we meet. With that, also a very noticeable feeling that we really want to serve people.

FREDERICK KEMPE: Madam Secretary.

GINA RAIMONDO: Well, first, thank you, Fred. Thank you for having us. And thank you to the Atlantic Council for hosting us and for all that you do.

I agree with everything that the EVP just said. I think it’s useful, though, to remember where we were when we started this. President Biden came into office. Tensions—US-EU tensions were high; not a lot of collaboration in the five years that preceded us coming into office. And we said we need to really lean into our longstanding allies, the Europeans.

And so we created this Trade and Technology Council to frankly get back together. We have a one and a half trillion dollar trade relationship. We argue, of course, over certain things as it relates to technology and trade. There are irritants for sure. But fundamentally what binds us is massively more consequential than the irritants.

And so the TTC was created—we had our first meeting, you know, six months after the president took office. It was a real statement, I think, that we said we’re going to prioritize this. We’re going to find concrete areas where we can work together as it relates to technology, trade, emerging technologies. And we have done that.

I mean, due to the trust that we’ve created, the collaboration, the information sharing, we resolved this deal on aluminum tariffs that had existed at the time we came into office, US-EU 232 steel and aluminum tariffs. We worked with unbelievable speed to put the export-control regime in place. We brought thirty-six countries together when war broke out in Russia to deny Russia a lot of technology that they need to conduct the war.

We are now working now, as Margrethe has said, on semiconductors. We’re working together in the way we’re implementing our CHIPS Acts. You know, I have fifty billion dollars of US taxpayer money to invest. The EU is putting a great deal of money to work. We can’t—we have to work with each other. We shouldn’t compete against each other. It shouldn’t be a race to the bottom. We can’t allow companies to play us off of one another and get us into a subsidy race.

So I’ve been to Spain and Italy and other member states and Brussels and we say how do we work together? We took electric vehicles. You know, there’s no surprise, China is coming on incredibly strongly with respect to electric vehicles; creates market distortion issues as well as data security issues. Similar, our interests are aligned. So—and AI. You know, I won’t be repetitive to what was said.

But I feel we’ve done quite a lot, tangible results in a short period of time; breathed a new—we’ve reinvigorated the US-EU relationship, I think, in a very concrete way. And as we move forward, AI, EVs, semiconductors, AVs, there’s so much work to be done.

MARGRETHE VESTAGER: And as Gina said, today we have a forum where we can complain about each other in a constructive manner.

FREDERICK KEMPE: So I would like you to—

GINA RAIMONDO: It is true.

FREDERICK KEMPE: I’d like you to do that in front of the audience right now. So where do you see your most pronounced areas of differences that need some working out? And maybe on the flip side of that, today where did you feel you came together? Is there any news in that respect that you could share with us?

GINA RAIMONDO: I’ll answer that in the following way. I think we don’t disagree as it relates to the principles and the goals. The disagreements are—you know, we have two—we have differences in our systems of government. We have, you know, political realities. So, for instance, one of the first things I did when I got this job was come to some resolution around the privacy shield. And I think, you know, I credit the TTC with that.

US and Europe fully agree we want trusted data flows, data privacy protection, et cetera. We have different systems of government. We should but don’t yet have a data privacy law federally. So we work through it. The same is true with sustainability. You know, we’re working on a global steel arrangement. We need to prioritize our trading partners that have green steel, green aluminum, sustainable, you know, sources of energy. We don’t have a Carbon Border Adjustment. We have a different—you know, different way of getting to that. But the goal and the values are the same.

Cybersecurity. We have the same —you know, we have—we share democracy. We share a commitment to protecting individual rights and people’s data and having data security. We share a desire to protect our data from autocratic regimes. We go about it in different ways. 

So I think the irritants develop in the details and we have to work through the details. Today I talked about the cyber certification scheme. We’ll figure out the differences on the implementation. But frankly, it’s fantastic to have a partner who shares our, you know, values, way of government, and principles.

FREDERICK KEMPE: Madam Executive Vice President, Madam Commissioner what do—

MARGRETHE VESTAGER: No, I’d add to that because I fully share that this is the approach that makes us come together, and the second thing that should not be underestimated is that one thing is that we meet as principals but before that while we meet after that the teams are coming together, which means that literally hundreds of people have gotten to know each other really well. 

During their work for the Russian sanctions, you know, it went so fast with very little sort of bumps on the road because people knew each other, and I think it’s really important not to underestimate what it means that you know who to call. 

And then we’ve had, you know, continuing discussions. One of them would be on the Open RAN. So we had the same ambition to make sure that our networks, they were safe, that we didn’t have untrusted vendors in those. 

We had a different approach as to how to achieve that when it comes to untrusted vendors, and as a follow-on debate we’ve had the debate about Open RAN. Is that sufficiently secure? What is the energy use of Open RAN compared to other solutions, getting to, I think, a balanced view that you need to keep developing Open RAN and you need to be neutral in your approach so that those we work with they may have a preference but we shouldn’t push a preference. And I think it’s a good example of an approach that comes out of a discussion that was not trivial initially.

GINA RAIMONDO: I think we will see—I think the TTC will prove to be exceedingly valuable now and in the months and years to come as it relates to artificial intelligence, right. We have spent the past two and a half years developing the TTC, developing the relationships, as you said, having our teams with stakeholders. You know, there’s a lot more stakeholder engagement across the border today. We had a fantastic meeting with semiconductor companies, half European, half American, you know, talking. 

So now we’ve built up this trust. Enter AI, enter generative AI, where we have to now write the rules of the road together and we had an extensive discussion today about standards— how do we together develop standards that will govern AI and the development and use of AI, which is all new? It’s all new, and so now we have this muscle that we’ve built up. 

You know, as Margrethe says, we have the G7 code of conduct. We have the AI Safety Institute. You have the AI office. It’s all so new. TTC will play a key role in bringing us together to write the rules of the road of AI. 

FREDERICK KEMPE: Could you drill down on that? You have the executive order—US executive order. You have the EU AI Act. The technology is moving ahead faster than, I would say, the regulatory world is. I’m not sure that’s entirely a bad thing. 

But who does write the—where will the standards be made? Who does write the rules? You say TTC will do this together but how will that be then rolled out? How are you balancing the need to mitigate risks but not stifle the incredible innovation that’s going on?

MARGRETHE VESTAGER: Well, first, for me a very fundamental point, which is I think that governments’ legislative bodies are legitimate in dealing with technology. So this idea that we will always be behind and technology will just have to lead the way I think that is just plain wrong because we have a responsibility to make sure that technology also respects the fundamentals of our society and this is what is expressed, I think, in the executive order by the president. 

This is what is expressed in the G7 code of conduct. This is what is expressed in the AI Act in Europe, that there are some fundamentals where we have full legitimacy in saying this is how we want things done. Where we can help industry and make sure that the market is as big as possible is, as Gina said, to develop standards.

So what does watermarking look like? We all want watermarking so that we know what is fake and what is real. What is red teaming? How deep should that go to be real, that you can sort of tick the box I have done red teaming so that I know my AI is safe? And one of the things we started on very early was to say, listen, we need to be much more present in standardization foras because they are being more and more dominated by nonmarket players, or Chinese players for that matter, and we need to have a presence. We need to coordinate. We need to be much more strategic. So in all the different foras where these things are being dealt with, we need to have a presence and we need to coordinate. And here the TTC setup comes in extremely handy.

FREDERICK KEMPE: But is that, then, a transatlantic approach, joint regulatory? What comes out of this for AI?

GINA RAIMONDO: I think, yeah, absolutely it’s a transatlantic approach. Whether joint regulation, I don’t know if that’s feasible, obviously.

But, look, it will be some time before the US Congress passes a law that relates to the governing of AI. I’m just going to stipulate that—and I’m sure everybody’s going to agree with me. So, between now and whenever that—and we need that. To be—to be clear, we need that, right? To have a regulatory structure with enforcement mechanisms and penalties, we need a statute to do that. And we will get there.

In the absence of that, there’s an awful lot of work to be done, for example, with standards. Right now—right now, one thing I do hope Congress does is there is a bipartisan agreement to invest ten million dollars in the AI Safety Institute which we are standing up in the Department of Commerce, a tiny amount of money, to focus on these standards, exactly what you said. You know, what is adequate watermarking? What is safe? You know, what does it mean to say the red teaming is adequate? What does it mean to say certain guidelines around what testing equals safety? So that’s what we are going to be thinking about at the AI Safety Institute. I do hope Congress funds that.

But all of those standards and all of this, of course, happens in not just bilateral standard-setting bodies, but global standard-setting bodies. I promise you if the US and the EU don’t show up, China will, autocracies will. We’ve had our lunch eaten over the years in—like, in the ITU, that standard-setting body for the internet. You talk about ORAN, telecommunications. This is, I know, really boring-sounding stuff, but it matters.

FREDERICK KEMPE: Yeah.

GINA RAIMONDO: So, anyway, we’re going to harmonize our approach.

And was it you that said this today? Somebody said this; I thought it was really smart. You know, normally we go about our thing, other countries go about their thing, and then we try to harmonize. With AI, we can harmonize from the get-go because we haven’t yet, you know, written these regulations or rules or standards.

FREDERICK KEMPE: So you’ve both mentioned China, so let’s go there and we can circle back on other issues as time allows. But you know, can one harmonize, is one trying to harmonize toward China? So, Madam Commissioner, on electric vehicles, it’s become a major issue in DC as well as Brussels. The Commission has launched an investigation into support Beijing is providing domestic manufacturers. China’s responded with a dumping probe, including French brandy, which I’ve been to some parties where it seems to have been dumped. But in any case—so I’d love to—are we at the start of a more contentious trading relationship with China?

And then, how are the two of you looking at the—in the context of TTC but also beyond, how are the two of you looking at this? You know, we’ve been tracking the rapid expansion of Chinese manufacturing, noticed lower-cost producers increasingly look to export. And so also from your side, Madam Secretary, this is a—this is a big issue. So talk a little bit about this.

MARGRETHE VESTAGER: It’s a very big issue. So from Europe, when we look at China, we see a very complex relationship. We need China as a partner in fighting climate change. Without China on board, it will not happen. But China is also a systemic rival in how they see their mode of governing versus our democracies. 

And they’re an economic competitor. And in order to materialize that view, we have our strategy for economic security. And we just gave that some muscle last week to say we need member states to have, you know, a toolbox for research organization to do their due diligence to know who they are actually dealing with. We’ve done that already for Horizon Europe, that big European research program. We need everybody to be able to do the screening of foreign direct investments. Now twenty-two European countries would do that. We need everybody to come on board. It’s important.

We need a European prism for export controls. Each country has their own competences, but we need to have a European prism to look at that. And then we have taken the first steps to try to figure out how to prevent that some which circumvents export controls by outbound investments. And be very careful, because Europe is really open for business. So a lot of investment is coming in. Lots of investment is going out. So we want, of course, to be very precise, because the point of globalization is that we may calibrate it right now, but we still, you know, really benefit from it. We have complex value and supply chains. So what we’re doing now is to sort of de-risk our interdependencies. And, of course, Chinese dependencies are one of those that we focus on.

FREDERICK KEMPE: And this new economic security package, how does this affect US-EU dialogue? What knock-on impact will it have on that?

MARGRETHE VESTAGER: Well, as a matter of principle, our package is country neutral. But just to tell you about the differences. So, for instance, on quantum, it is on the list of critical technologies where you can be exempted, or you can be not allowed to participate in our research programs. Well, we were just talking about today how can we do a memorandum of understanding on quantum to do some things together? And I think that’s a very good illustration of the differences as to whom we will not work with, and with whom it is absolutely essential.

FREDERICK KEMPE: And, Madam Secretary, how does one manage this China issue across the Atlantic? Particularly considering if you’re looking at AI you can set whatever you want to set, but if China goes in a totally different direction, then it’s messy?

GINA RAIMONDO: Yeah. AI, in particular. AI knows no boundaries. These models freely travel across boundaries, et cetera. All the more reason to work with allies. And I think, as it relates to China, Europe, and the US, it’s in each of our self-interest to work together. Listen, we—both of our—Europe and the United States have huge trading relationships with China. Hundreds of billions of dollars. And that is a good thing. Selling goods to China creates jobs in both of our countries.

Having said that, there are real national security concerns for both of us. Once again, we share values. And we have to be eyes wide open about that and work together to protect the people of our countries. Export controls is a perfect example. We worked in a trilateral relationship, in that case with the Japanese and Europeans and the United States, to deny China the most sophisticated semiconductor equipment. We need to move in that direction. Electric vehicles. Electric vehicles we have to keep our eye on. The number of Chinese-made electric vehicles being sold in Europe today is vastly more than even a year or two years ago. Why is that? What is really going on in China? How is the government subsidizing the whole ecosystem?

That’s a trade distortion. Separately, there’s a national security distortion. Tesla is not allowed—you can’t drive a Tesla on certain parts of Chinese roads, they say for national security reasons. Well, think about that. What are the national security concerns—forget about trade, OK? Forget about trade. Forget about tariffs. Forget about the economics of it. I’m just talking national security. We talked this morning with Jim Farley, the CEO of Ford. A sophisticated EV, and then an autonomous vehicle, is filled with thousands of semiconductors and sensors. It collects a huge amount of information about the driver, the location of the vehicle, the surroundings of the vehicle. Do we want all that data going to Beijing? That’s a question for you. So—and it’s a question for you.

So I think that that’s just one example, EVs. You could ask the same questions about semiconductors, many of which are made in China. We had a session about that this morning; same thing. US and European interests, economically, but even more important national security, are really intertwined. And the way you do it—AI, CHIPS, quantum, EVs—together.

FREDERICK KEMPE: And on that score, where does your investigation right now stand of the subsidies for Chinese electric vehicles in Europe? When can we expect some findings from that?

MARGRETHE VESTAGER: It’s my colleague Valdis Dombrovskis who’s heading it.

FREDERICK KEMPE: Yes.

MARGRETHE VESTAGER: And I don’t know what is the state of play of this investigation.

FREDERICK KEMPE: You raised something at the beginning, Madam Secretary, about elections, what came before, what could come after. This was suggested. The EU first suggested this menu for discussions in the summer of 2022—2020, late in the Trump administration. All the ministerials, however, have taken place, including you and also EVP Dombrovskis and Ambassador Tai, have taken place during the Biden administration.

How are you and your respective administrations thinking about how to ensure the format of TTC and the cooperation it’s enabled continue? What can one do? We have elections on both sides of the Atlantic. These things change.

GINA RAIMONDO: Yes. We talked about this today. And, look, I think we have to be realistic, right? There’s only so much you can put in cement. But I would offer a couple of things.

Number one, we have been, I think, very good at engaging stakeholders in our work. So wholly apart from what we government folks do, I hope there’s demand from industry and civil society to keep the TTC going. At every convening, we have robust stakeholder engagement. And I think they think it’s been successful, and I think they’re going to require the TTC to continue the work we’ve done.

Separately, we decided today we’re going to re-execute and renew all of the MOUs that we have. We have the task force for future growth. We’re going to reup all the members on that to continue the work. We have a robust agenda for our April meeting. So we’re just going to put on paper the plans and execute the contracts that we have and just sort of assume that it will go forward.

MARGRETHE VESTAGER: And no matter who will be at the helm, there are also things that we have learned. And we would also like to sort of put on paper what have we learned over these years? What should be done better in the next generation of the TTC? How can we make our precious stakeholder outreach more effective? Can we make it more strategic, more focused?

So try to also look back, not to, you know, a pointed finger for the future, but just to say this was what we achieved. Now next iteration of the TTC, what can you learn from our experiences? And I think that’s a way to go about it instead of being prescriptive, then trying to enable people to sort of get a sense of the experiences that we have gained.

FREDERICK KEMPE: Well, the—our time has run out. There’s many more questions I would like to ask you, drilling deeper. I hope we can invite you back to the Atlantic Council at another time; perhaps at the next TTC, if not before. But thank you so much, Madam Secretary and Madam Executive Vice President.

GINA RAIMONDO: You’re welcome.

FREDERICK KEMPE: Thanks for your time.

GINA RAIMONDO: Thank you.

MARGRETHE VESTAGER: Thank you.


Watch the event

The post Gina Raimondo and Margrethe Vestager on transatlantic approaches to trade, AI, and China appeared first on Atlantic Council.

]]>
CBDC Tracker cited by Cointelegraph on central bank digital currency development https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-cointelegraph-on-central-bank-digital-currency-development/ Tue, 30 Jan 2024 21:31:47 +0000 https://www.atlanticcouncil.org/?p=731071 Read the full article here.

The post CBDC Tracker cited by Cointelegraph on central bank digital currency development appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by Cointelegraph on central bank digital currency development appeared first on Atlantic Council.

]]>
Big Tech must listen to the concerns of Russia’s pro-democracy voices https://www.atlanticcouncil.org/blogs/ukrainealert/big-tech-must-listen-to-the-concerns-of-russias-pro-democracy-voices/ Tue, 30 Jan 2024 19:26:39 +0000 https://www.atlanticcouncil.org/?p=730562 Big Tech companies offer a variety of opportunities for free expression in Putin's Russia, write Joanna Nowakowska, Anna Kuznetsova, and Marta Bilska.

The post Big Tech must listen to the concerns of Russia’s pro-democracy voices appeared first on Atlantic Council.

]]>
Vladimir Putin has committed serious resources to ensure that the Russian people only see what he wants them to see. Yet despite the best efforts of the Russian dictator, the ever-evolving world of Big Tech offers a variety of avenues for free expression, even in closed societies. But without the right policy structures, Big Tech can be exploited to aid the designs of authoritarian rulers like Putin, making it crucial to spur discussions between Russian civil society and tech companies to avoid this outcome.

Tech companies are crucial to disseminating information, organizing platforms, creating fundraising tools, and recording war crimes and human rights abuses. As a result, their actions profoundly impact social and political issues in many countries.

Ongoing efforts to deliver accurate information to the Russian people illustrate these new realities. The Kremlin tightened censorship after the full-scale invasion of Ukraine in February 2022 to make sure the only information Russian citizens receive is state-controlled propaganda. Independent Russian media and civil society groups opposing the war face persecution and censorship on a scale not seen since the days of the Soviet Union.

As a result of this crackdown, international social media platforms and communication technologies became just about the only way to deliver factual information to Russians inside the country, and to inform the international public on the situation in Russia.

Western tech companies initially took steps to comply with international sanctions against Russia and to mitigate the spread of Kremlin-backed disinformation. However, new research suggests this effort has had the unintended consequence of significantly hindering independent media and civil society efforts inside Russia.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

Out of a group of 16 independent Russian media and civil society organizations (CSO) featured in recent research, all experienced negative impacts to their online presence after the Russian invasion of Ukraine, with 14 reporting periodical sharp decreases in traffic and social media engagement.

These organizations saw an abrupt fall or lack of change in viewership, followers, subscribers, and engagements on some platforms, all while growing on others. Suddenly, content that generated substantial interest in the past was not getting any attention, while posts, videos, or even entire channels that were attracting significant engagement suddenly vanished from recommendation features.

Researchers believe that independent Russian media websites may have been deprioritized or omitted in Google search results and the Google Discover service, which inadvertently led to the amplification of Kremlin propaganda by directing millions of Russians to anti-Ukrainian and anti-Western messaging every day. This aligns with data recently published by Lev Gershenzon, one of the former heads of Yandex News, Russia’s largest search engine, now fully controlled by the state.

According to Gershenzon, Google Discover’s content recommendation system features Kremlin-affiliated sources high up in its recommendations. Close to 90 percent of Russian smartphones operate with Android, with Google products pre-installed by default, so Google has unprecedented influence over the content Russians view every day.

Eleven of the 16 groups cited claim to have lost access to essential Western software, tools, and equipment, and experienced restricted access to certain online advertising services. After their outlets were outlawed and Russian providers canceled their services, four of these groups said they could not find a Western hosting service, and several noted that one mass email service abruptly closed all of its Russian accounts. As a result, many Russian independent media and CSOs lost entire databases of readers, supporters, and donors.

Meanwhile, the online collaboration platform Slack shut down while Adobe, Windows, and Microsoft Office also left Russia. The resulting lack of access to basic online tools has proven challenging for Russia’s already-embattled independent voices.

While most of these groups attempted to contact companies to find solutions to their lack of access, few cases were resolved. No matter the outcome, the circuitous and demoralizing process of even getting an answer from decision makers at Western tech companies has proven to be a significant obstacle to addressing these issues.

The resulting status quo has, albeit unintentionally, reinforced the power imbalance between Russia’s pro-democracy actors and the country’s authoritarian government by depriving an increasingly isolated society of its few remaining independent sources of information. To overcome this impasse, there is an urgent need for dialogue between Western tech companies, Russian media, and civil society.

Joanna Nowakowska, Anna Kuznetsova, and Marta Bilska from the International Republican Institute are co-authors of the recent report “Can Big Tech Contribute to Breaking Putin’s Censorship?”

Further Reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Follow us on social media
and support our work

The post Big Tech must listen to the concerns of Russia’s pro-democracy voices appeared first on Atlantic Council.

]]>
Lipsky authors op-ed in Banking Risk and Regulation on CBDC adoption https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-authors-op-ed-in-banking-risk-and-regulation-on-cbdc-adoption/ Mon, 29 Jan 2024 20:41:34 +0000 https://www.atlanticcouncil.org/?p=730957 Read the full article here.

The post Lipsky authors op-ed in Banking Risk and Regulation on CBDC adoption appeared first on Atlantic Council.

]]>
Read the full article here.

The post Lipsky authors op-ed in Banking Risk and Regulation on CBDC adoption appeared first on Atlantic Council.

]]>
Lipsky and Kumar quoted in Finextra on Fed CBDC progress https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-and-kumar-quoted-in-finextra-on-fed-cbdc-progress/ Mon, 29 Jan 2024 05:00:56 +0000 https://www.atlanticcouncil.org/?p=730796 Read the full piece here.

The post Lipsky and Kumar quoted in Finextra on Fed CBDC progress appeared first on Atlantic Council.

]]>
Read the full piece here.

The post Lipsky and Kumar quoted in Finextra on Fed CBDC progress appeared first on Atlantic Council.

]]>
Lipsky and Kumar quoted in Bitcoin.com on Fed development of CBDCs https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-and-kumar-quoted-in-bitcoin-com-on-fed-development-of-cbdcs/ Sun, 28 Jan 2024 05:00:34 +0000 https://www.atlanticcouncil.org/?p=731086 Read the full article here.

The post Lipsky and Kumar quoted in Bitcoin.com on Fed development of CBDCs appeared first on Atlantic Council.

]]>
Read the full article here.

The post Lipsky and Kumar quoted in Bitcoin.com on Fed development of CBDCs appeared first on Atlantic Council.

]]>
Lipsky and Kumar quoted in Business Insider on Fed role in shaping the future of payments https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-and-kumar-quoted-in-business-insider-on-fed-role-in-shaping-the-future-of-payments/ Fri, 26 Jan 2024 19:54:02 +0000 https://www.atlanticcouncil.org/?p=729601 Read the full piece here.

The post Lipsky and Kumar quoted in Business Insider on Fed role in shaping the future of payments appeared first on Atlantic Council.

]]>
Read the full piece here.

The post Lipsky and Kumar quoted in Business Insider on Fed role in shaping the future of payments appeared first on Atlantic Council.

]]>
Lipsky quoted in CoinTelegraph on politicization of central bank digital currencies https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-quoted-in-cointelegraph-on-us-politicization-of-central-bank-digital-currencies/ Thu, 25 Jan 2024 16:56:59 +0000 https://www.atlanticcouncil.org/?p=729129 Read the full piece here.

The post Lipsky quoted in CoinTelegraph on politicization of central bank digital currencies appeared first on Atlantic Council.

]]>
Read the full piece here.

The post Lipsky quoted in CoinTelegraph on politicization of central bank digital currencies appeared first on Atlantic Council.

]]>
The IMF’s perspective on CBDCs https://www.atlanticcouncil.org/blogs/econographics/the-imfs-perspective-on-cbdcs/ Fri, 19 Jan 2024 16:27:39 +0000 https://www.atlanticcouncil.org/?p=726611 Tobias Adrian outlines the IMF's view on CBDCs' potential for payment systems, financial inclusion, and cross-border payments, emphasizing innovation and collaboration for effective implementation.

The post The IMF’s perspective on CBDCs appeared first on Atlantic Council.

]]>
New forms of money and new technologies have the potential to improve payment systems, enhance financial inclusion, and facilitate cross-border payments. In particular, central bank digital currencies (CBDCs) have gained significant attention, with approximately 60 percent of countries exploring their potential. The IMF has a unique view across these efforts and we have done our own exploration of CBDCs’ potential—including the publication of the new CBDC Virtual Handbook that provides guidance to countries exploring the topic. In this post, based on remarks I made at the Atlantic Council’s conference in November, I describe some of the key issues around CBDCs as the Fund sees them.

CBDCs may have various benefits, such as replacing cash in island economies, enhancing resilience in more advanced economies, and improving financial inclusion. The tokenization of financial assets, such as bonds issued on blockchains, opens doors for CBDCs to be used in wholesale forms of payment.

Efforts to enhance cross-border payments have also gained momentum. Sending funds across jurisdictions is still too expensive, slow, and limited in availability. Cross-border payments must be improved for the sake of users, inclusion, and business efficiency. The cost of inaction on this front may include fragmentation in capital flows and compliance with international standards, as well as diminished effectiveness of policies for monetary and financial stability.

While resources are allocated to near-term improvements, it is important to explore medium-term solutions that leverage new technologies. This could include infrastructure based on blockchain technology to facilitate settlement (not just clearing) of cross-border payments and to manage risks and information flows through programming of basic financial contracts and encryption. This infrastructure (“cross-border platforms”) could facilitate the exchange of CBDCs in wholesale or retail form, interface with traditional forms of money, provide FX conversion, and manage payment risks. The use cases could be both small- and large-value payments.

The role of the public sector in developing new platforms would be key. While the private sector is actively piloting and testing the transfer of on-chain financial assets, the public sector should actively investigate and establish desirable features to support policy objectives. These objectives encompass operational efficiency and stability; market contestability and integration; innovation; and applicability to both large- and small-value payments in the context of financial inclusion. Other areas of focus include effective monitoring; data integrity and privacy; implementation of domestic macro-financial policies; monetary sovereignty and financial stability; limited spillover effects; evenhandedness; and fair representation, among others.

Solid governance and oversight will also be needed for these infrastructures to ensure they are aligned with policy objectives and that the infrastructure and participants are compliant with rules and standards. Indeed, this will be key as trust in ensuring that compliance checks are appropriate is fundamental to safeguarding financial integrity. An important question is who will be responsible for the application of Anti-Money Laundering and Countering the Financing of Terrorism (AML/CFT) measures and for monitoring compliance. Other challenges will include determining the jurisdictional domicile of the platform, ensuring coherence of legal requirements of participating jurisdictions, as well as addressing legal uncertainties including smart contracts, data protection, and roles and responsibilities of operating and oversight bodies.

There should be no presumption that platforms are necessarily desirable, nor of who should build and operate them—whether the public or private sector. To the extent the private sector is involved and pursues its own interests, platforms should still be designed to facilitate the payment and financial needs of the underserved, to the extent they are compliant with rules and standards.

New technologies like programmability and encryption offer new functionalities that could increase efficiencies and help develop new solutions and business models. Competition from purely private solutions (including stablecoins and crypto assets) pushes the public sector to improve infrastructures and services and to counter the forces of fragmentation that could undermine the International Monetary System. Collaboration among international institutions, central banks, and ministries of finance is crucial in providing guidance and setting design contours for cross-border platforms. The IMF is committed to playing its part in this collaborative effort.


Tobias Adrian is a guest contributor to the GeoEconomics Center and IMF Financial Counsellor and Director of the Monetary and Capital Markets Department.

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.

The post The IMF’s perspective on CBDCs appeared first on Atlantic Council.

]]>
CBDC Tracker cited by BeInCrypto on central bank digital currency adoption https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-beincrypto-on-central-bank-digital-currency-adoption/ Thu, 18 Jan 2024 17:48:37 +0000 https://www.atlanticcouncil.org/?p=726744 Read the full article here.

The post CBDC Tracker cited by BeInCrypto on central bank digital currency adoption appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by BeInCrypto on central bank digital currency adoption appeared first on Atlantic Council.

]]>
CBDC Tracker cited by The Block on central bank digital currency adoption https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-the-block-on-central-bank-digital-currency-adoption/ Thu, 18 Jan 2024 17:46:34 +0000 https://www.atlanticcouncil.org/?p=726737 Read the full article here.

The post CBDC Tracker cited by The Block on central bank digital currency adoption appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by The Block on central bank digital currency adoption appeared first on Atlantic Council.

]]>
Global China Hub Nonresident Fellow Dakota Cary Featured on Click Here https://www.atlanticcouncil.org/insight-impact/in-the-news/global-china-hub-nonresident-fellow-dakota-cary-on-click-here/ Thu, 11 Jan 2024 15:47:34 +0000 https://www.atlanticcouncil.org/?p=723872 On January 10, GCH Nonresident Fellow Dakota Cary was brought on to Click Here to discuss his report, “Sleigh of hand: How China weaponizes software vulnerabilities,” which explains how Chinese software vulnerability laws require Chinese businesses to report coding flaws to a government agency, which in turn shares this information with state-sponsored hacking groups.

The post Global China Hub Nonresident Fellow Dakota Cary Featured on Click Here appeared first on Atlantic Council.

]]>

On January 10, GCH Nonresident Fellow Dakota Cary was brought on to Click Here to discuss his report, “Sleigh of hand: How China weaponizes software vulnerabilities,” which explains how Chinese software vulnerability laws require Chinese businesses to report coding flaws to a government agency, which in turn shares this information with state-sponsored hacking groups.

The post Global China Hub Nonresident Fellow Dakota Cary Featured on Click Here appeared first on Atlantic Council.

]]>
Ukraine is on the front lines of global cyber security https://www.atlanticcouncil.org/blogs/ukrainealert/ukraine-is-on-the-front-lines-of-global-cyber-security/ Tue, 09 Jan 2024 21:37:52 +0000 https://www.atlanticcouncil.org/?p=722954 Ukraine is currently on the front lines of global cyber security and the primary target for groundbreaking new Russian cyber attacks, writes Joshua Stein.

The post Ukraine is on the front lines of global cyber security appeared first on Atlantic Council.

]]>
There is no clear dividing line between “cyber warfare” and “cyber crime.” This is particularly true with regard to alleged acts of cyber aggression originating from Russia. The recent suspected Russian cyber attack on Ukrainian mobile operator Kyivstar is a reminder of the potential dangers posed by cyber operations to infrastructure, governments, and private companies around the world.

Russian cyber activities are widely viewed as something akin to a public-private partnership. These activities are thought to include official government actors who commit cyber attacks and unofficial private hacker networks that are almost certainly (though unofficially) sanctioned, directed, and protected by the Russian authorities.

The most significant government actor in Russia’s cyber operations is reportedly Military Unit 74455, more commonly called Sandworm. This unit has been accused of engaging in cyber attacks since at least 2014. The recent attack on Ukraine’s telecommunications infrastructure was probably affiliated with Sandworm, though specific relationships are intentionally hard to pin down.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

Attributing cyber attacks is notoriously difficult; they are designed that way. In some cases, like the attacks on Ukraine’s electrical and cellular infrastructure, attribution is a matter of common sense. In other cases, if there is enough information, security firms and governments can trace attacks to specific sources.

Much of Russian cyber crime occurs through private hacker groups. Russia is accused of protecting criminals who act in the interests of the state. One notable case is that of alleged hacker Maksim Yakubets, who has been accused of targeting bank accounts around the world but remains at large in Russia despite facing charges from the US and UK.

The Kremlin’s preferred public-private partnership model has helped make Russia a major hub for aggressive cyber attacks and cyber crime. Private hacker networks receive protection, while military hacking projects are often able to disguise their activities by operating alongside private attacks, which provide the Kremlin with a degree of plausible deniability.

More than ten years ago, Thomas Rid predicted “cyber war will not take place.” Cyber attacks are not a battlefield, they are a race for digital resources (including access to and control of sensitive devices and accounts). This race has been ongoing for well over a decade.

Part of the reason the US and other NATO allies should be concerned about and invested in the war in Ukraine is that today’s cyber attacks are having an impact on cyber security that is being felt far beyond Ukraine. As Russia mounts further attacks against Ukrainian targets, it is also expanding its resources in the wider global cyber race.

Andy Greenberg’s book Sandworm documents a range of alleged Russian attacks stretching back a number of years and states that Sandworm’s alleged operations have not been limited to cyber attacks against Ukraine. The United States indicted six GRU operatives as part of Sandworm for their role in a series of attacks, including attempts to control the website of the Georgian Parliament. Cyber security experts are also reasonably sure that the NotPetya global attack of 2016 was perpetrated by Sandworm.

The NotPetya attack initially targeted Ukraine and looked superficially like a ransomware operation. In such instances, the victim is normally prompted to send cryptocurrency to an account in order to unlock the targeted device and files. This is a common form of cyber crime. The NotPetya attack also occurred after a major spree of ransomware attacks, so many companies were prepared to make payouts. But it soon became apparent that NotPetya was not ransomware. It was not meant to be profit-generating; it was destructive.

The NotPetya malware rapidly spread throughout the US and Europe. It disrupted global commerce when it hit shipping giant Maersk and India’s Jawaharlal Nehru Port. It hit major American companies including Merck and Mondelez. The commonly cited estimate for total economic damage caused by NotPetya is $10 billion, but even this figure does not capture the far greater potential it exposed for global chaos.

Ukraine is currently on the front lines of global cyber security and the primary target for groundbreaking new cyber attacks. While identifying the exact sources of these attacks is necessarily difficult, few doubt that what we are witnessing is the cyber dimension of Russia’s ongoing invasion of Ukraine.

Looking ahead, these attacks are unlikely to stay in Ukraine. On the contrary, the same cyber weapons being honed in Russia’s war against Ukraine may be deployed against other countries throughout the West. This makes it all the more important for Western cyber security experts to expand cooperation with Ukraine.

Joshua Stein is a researcher with a PhD from the University of Calgary.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Follow us on social media
and support our work

The post Ukraine is on the front lines of global cyber security appeared first on Atlantic Council.

]]>
Ukrainian telecoms hack highlights cyber dangers of Russia’s invasion https://www.atlanticcouncil.org/blogs/ukrainealert/ukrainian-telecoms-hack-highlights-cyber-dangers-of-russias-invasion/ Thu, 21 Dec 2023 00:09:09 +0000 https://www.atlanticcouncil.org/?p=718878 An unprecedented December 12 cyber attack on Ukraine's largest telecoms operator Kyivstar left tens of millions of Ukrainians without mobile services and underlined the cyber warfare potential of Russia's ongoing invasion, writes Mercedes Sapuppo.

The post Ukrainian telecoms hack highlights cyber dangers of Russia’s invasion appeared first on Atlantic Council.

]]>
A recent cyber attack on Ukraine’s largest telecommunications provider, Kyivstar, caused temporary chaos among subscribers and thrust the cyber front of Russia’s ongoing invasion back into the spotlight. Kyivstar CEO Oleksandr Komarov described the December 12 hack as “the biggest cyber attack on telco infrastructure in the world,” underlining the scale of the incident.

This was not the first cyber attack targeting Kyivstar since Russia launched its full-scale invasion in February 2022. The telecommunications company claims to have repelled around 500 attacks over the past twenty-one months. However, this latest incident was by far the most significant.

Kyivstar currently serves roughly 24 million Ukrainian mobile subscribers and another million home internet customers. This huge client base was temporarily cut off by the attack, which also had a knock-on impact on a range of businesses including banks. For example, around 30% of PrivatBank’s cashless terminals ceased functioning during the attack. Ukraine’s air raid warning system was similarly disrupted, with alarms failing in several cities.

Kyivstar CEO Komarov told Bloomberg that the probability Russian entities were behind the attack was “close to 100%.” While definitive evidence has not yet emerged, a group called Solntsepyok claimed responsibility for the attack, posting screenshots that purportedly showed the hackers breaching Kyivstar’s digital infrastructure. Ukraine’s state cyber security agency, known by the acronym SSSCIP, has identified Solntsepyok as a front for Russia’s GRU military intelligence agency.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

The details of the attack are still being investigated but initial findings indicate that hackers were able to breach Kyivstar security via an employee account at the telecommunications company. This highlights the human factor in cyber security, which on this occasion appears to have enabled what Britain’s Ministry of Defense termed as “one of the highest-impact disruptive cyber attacks on Ukrainian networks since the start of Russia’s full-scale invasion.”

This latest cyber attack is a reminder of the threat posed by Russia in cyberspace. Ever since a landmark 2007 cyber attack on Estonia, Russia has been recognized as one of the world’s leading pioneers in the field of cyber warfare. The Kremlin has been accused of using both state security agencies and non-state actors in its cyber operations in order to create ambiguity and a degree of plausible deniability.

While cyber attacks have been a feature of Russian aggression against Ukraine since hostilities first began in 2014, the cyber front of the confrontation has been comparatively quiet following the launch of the full-scale invasion almost two years ago. Some experts are now warning that the recent attack on the Kyivstar network may signal an intensification of Russian cyber activities, and are predicting increased cyber attacks on key infrastructure targets in the coming months as the Kremlin seeks to make the winter season as uncomfortable as possible for Ukraine’s civilian population.

Ukraine’s cyber defense capabilities were already rated as robust before Russia’s full-scale invasion. These capabilities have improved considerably since February 2022, not least thanks to a rapid expansion in international cooperation between Ukraine and leading global tech companies. “Ukraine’s cyber defense offers an innovative template for other countries’ security efforts against a dangerous enemy,” the Financial Times reported in July 2023. “Constant vigilance has been paired with unprecedented partnerships with US and European private sector groups, from Microsoft and Cisco’s Talos to smaller firms like Dragos, which take on contracts to protect Ukraine in order to gain a close-up view of Russian cyber tradecraft. Amazon Web Services has sent in suitcase-sized back-up drives. Cloudfare has provided its protective service, Project Galileo. Google Project Shield has helped fend off cyber intrusions.”

As Ukraine’s cyber defenses grow more sophisticated, Russia is also constantly innovating. Ukrainian cyber security officials recently reported the use of new and more complex malware to target state, private sector, and financial institutions. Accelerating digitalization trends evident throughout Ukrainian society in recent years leave the country highly vulnerable to further cyber attacks.

There are also some indications that Ukrainian cyber security bodies may require reform. In November 2023, two senior officials were dismissed from leadership positions at the SSSCIP amid a probe into alleged embezzlement at the agency. Suggestions of corruption within Ukraine’s cyber security infrastructure are particularly damaging at a time when Kyiv needs to convince the international community that it remains a reliable partner in the fight against Russian cyber warfare.

The Kyivstar attack is a reminder that the Russian invasion of Ukraine is not only a matter of tanks, missiles, and occupying armies. In the immediate aftermath of the recent attack on the country’s telecommunications network, Ukrainian Nobel Peace Prize winner and human rights activist Oleksandra Matviichuk posted that the incident was “a good illustration of how much we all depend on the internet, and how easy it is to destroy this whole system.” Few would bet against further such attacks in the coming months.

Mercedes Sapuppo is a program assistant at the Atlantic Council’s Eurasia Center.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Follow us on social media
and support our work

The post Ukrainian telecoms hack highlights cyber dangers of Russia’s invasion appeared first on Atlantic Council.

]]>
#AtlanticDebrief – What is the impact of the EU’s AI Act? | A Debrief from Dragoș Tudorache and Brando Benifei https://www.atlanticcouncil.org/content-series/atlantic-debrief/atlanticdebrief-what-is-the-impact-of-the-eus-ai-act-a-debrief-from-dragos-tudorache-and-brando-benifei/ Thu, 14 Dec 2023 19:56:38 +0000 https://www.atlanticcouncil.org/?p=716529 Fran Burwell sits down with the co-rapporteurs of the EU’s AI Act to discuss the latest negotiations and what impact the legislation will have on the global regulatory landscape.

The post #AtlanticDebrief – What is the impact of the EU’s AI Act? | A Debrief from Dragoș Tudorache and Brando Benifei appeared first on Atlantic Council.

]]>

IN THIS EPISODE

The EU reached a political agreement on the AI Act, becoming the first in the world to set comprehensive rules on the regulation of AI technologies. With the political agreement in place, a final agreement will need to be ratified by the European Parliament and Council. What is the most important achievement of the EU’s AI Act? What was the most difficult compromise in the trialogue negotiations? What impact will the regulation have on innovation in Europe? Where did the trialogues end up on key issues such as facial recognition and biometric surveillance? What influence will the regulation have on the global regulatory landscape? Will similar legislation be adopted elsewhere? 

On this episode of #AtlanticDebrief, Fran Burwell sits down with the co-rapporteurs of the EU’s AI Act, MEPs Dragoș Tudorache and Brando Benifei, to discuss the latest negotiations and what impact the legislation will have on the global regulatory landscape.

You can watch #AtlanticDebrief on YouTube and as a podcast.  

MEET THE #ATLANTICDEBRIEF HOST

The Europe Center promotes leadership, strategies, and analysis to ensure a strong, ambitious, and forward-looking transatlantic relationship.

The post #AtlanticDebrief – What is the impact of the EU’s AI Act? | A Debrief from Dragoș Tudorache and Brando Benifei appeared first on Atlantic Council.

]]>
House and Kumar interviewed in GAO Report on Sanctions Risks Posed by Digital Assets https://www.atlanticcouncil.org/insight-impact/in-the-news/house-cited-in-gao-report-on-sanctions-risks-posed-by-digital-assets/ Wed, 13 Dec 2023 14:28:55 +0000 https://www.atlanticcouncil.org/?p=717160 Read the full report here.

The post House and Kumar interviewed in GAO Report on Sanctions Risks Posed by Digital Assets appeared first on Atlantic Council.

]]>
Read the full report here.

The post House and Kumar interviewed in GAO Report on Sanctions Risks Posed by Digital Assets appeared first on Atlantic Council.

]]>
CBDC Tracker cited in GAO Report on Sanctions Risks Posed by Digital Assets https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-in-gao-report-on-sanctions-risks-posed-by-digital-assets/ Wed, 13 Dec 2023 14:25:54 +0000 https://www.atlanticcouncil.org/?p=717156 Read the full report here.

The post CBDC Tracker cited in GAO Report on Sanctions Risks Posed by Digital Assets appeared first on Atlantic Council.

]]>
Read the full report here.

The post CBDC Tracker cited in GAO Report on Sanctions Risks Posed by Digital Assets appeared first on Atlantic Council.

]]>
Ukraine’s AI road map seeks to balance innovation and security https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-ai-road-map-seeks-to-balance-innovation-and-security/ Tue, 12 Dec 2023 21:37:02 +0000 https://www.atlanticcouncil.org/?p=715576 As the world grapples with the implications of rapidly evolving Artificial Intelligence (AI) technologies, Ukraine has recently presented a national road map for AI regulation that seeks to balance the core values of innovation and security, writes Ukraine's Minister for Digital Transformation Mykhailo Fedorov.

The post Ukraine’s AI road map seeks to balance innovation and security appeared first on Atlantic Council.

]]>
As the world grapples with the implications of rapidly evolving Artificial Intelligence (AI) technologies, Ukraine has recently presented a national road map for AI regulation that seeks to balance the core values of innovation and security.

Businesses all over the world are currently racing to integrate AI into their products and services. This process will help define the future of the tech sector and will shape economic development across borders.

It is already clear that AI will allow us all to harness incredible technological advances for the benefit of humanity as a whole. But if left unregulated and uncontrolled, AI poses a range of serious risks in areas including identity theft and the dissemination of fake information on an unprecedented scale.

One of the key objectives facing all governments today is to maximize the positive impact of AI while minimizing any unethical use by both developers and users, amid mounting concerns over cyber security and other potential abuses. Clearly, this exciting new technological frontier must be regulated that ensure the safety of individuals, businesses, and states.

Some governments are looking to adopt AI policies that minimize any potential intervention while supporting business; others are attempting to prioritize the protection of human rights. Ukraine is working to strike a balance between these strategic priorities.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

Today, Ukraine is among the world’s leading AI innovators. There are more than 60 Ukrainian tech companies registered as active in the field of artificial intelligence, but this is by no means an exhaustive list. Throughout Ukraine’s vibrant tech sector, a large and growing number of companies are developing products and applications involving AI.

The present objective of the Ukrainian authorities is to support this growth and avoid over-regulation of AI. We recognize that the rapid adoption of regulations is always risky when applied to fast-moving innovative fields, and prefer instead to adopt a soft approach that takes the interests of businesses into account. Our strategy is to implement regulation through a bottom-up approach that will begin by preparing businesses for future regulation, before then moving to the implementation stage.

During the first phase, which is set to last two to three years, the Ukrainian authorities will assist companies in developing a culture of self-regulation that will enable them to control the ethics of their AI systems independently. Practical tools will be provided to help businesses adapt their AI-based products in line with future Ukrainian and European legislative requirements. These tools will make it possible to carry out voluntary risk assessment of AI products, which will help businesses identify any areas that need improvement or review.

Ukraine also plans to create a product development environment overseen by the government and involving expert assistance. The aim is to allow companies to develop and test AI products for compliance with future legislation. Additionally, a range of recommendations will be created to provide stakeholders with practical guidelines for how to design, develop, and use AI ethically and responsibly before any legally binding regulations come into force.

For those businesses willing to do more during the initial self-regulation phase, the Ukrainian authorities will prepare voluntary codes of conduct. Stakeholders will also be issued a policy overview providing them with a clear understanding of the government’s approach to AI regulation and clarifying what they can expect in the future.

During the initial phase, the Ukrainian government’s role is not to regulate AI usage, but to help Ukrainian businesses prepare for inevitable future AI regulation. At present, fostering a sense of business responsibility is the priority, with no mandatory requirements or penalties. Instead, the focus is on voluntary commitments, practical tools, and an open dialogue between government and businesses.

The next step will be the formation of national AI legislation in line with the European Union’s AI Act. The bottom-up process chosen by Ukraine is designed to create a smooth transition period and guarantee effective integration.

The resulting Ukrainian AI regulations should ensure the highest levels of human rights protection. While the development of new technologies is by nature an extremely unpredictable process for both businesses and governments, personal safety and security remain the top priority.

At the same time, the Ukrainian approach to AI regulation is also designed to be business-friendly and should help fuel further innovation in Ukraine. By aligning the Ukrainian regulatory framework with EU legislation, Ukrainian tech companies will be able to enter European markets with ease.

AI regulation is a global issue that impacts every country. It is not merely a matter of protections or restrictions, but of creating the right environment for safe innovation. Ukraine’s AI regulation strategy aims to minimize the risk of abuses while making sure the country’s tech sector can make the most of this game-changing technology.

Mykhailo Fedorov is Ukraine’s Vice Prime Minister for Innovations and Development of Education, Science, and Technologies, and Minister of Digital Transformation.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Follow us on social media
and support our work

The post Ukraine’s AI road map seeks to balance innovation and security appeared first on Atlantic Council.

]]>
Head of BIS Innovation Hub Cecilia Skingsley’s remarks cited in Coingeek on the new future of money project https://www.atlanticcouncil.org/insight-impact/in-the-news/head-of-bis-innovation-hub-cecilia-skingsleys-remarks-cited-in-coingeek-on-the-new-future-of-money-project/ Mon, 04 Dec 2023 21:04:34 +0000 https://www.atlanticcouncil.org/?p=713873 Read the full piece here.

The post Head of BIS Innovation Hub Cecilia Skingsley’s remarks cited in Coingeek on the new future of money project appeared first on Atlantic Council.

]]>
Read the full piece here.

The post Head of BIS Innovation Hub Cecilia Skingsley’s remarks cited in Coingeek on the new future of money project appeared first on Atlantic Council.

]]>
Community watch: China’s vision for the future of the internet https://www.atlanticcouncil.org/in-depth-research-reports/report/community-watch-chinas-vision-for-the-future-of-the-internet/ Mon, 04 Dec 2023 14:00:00 +0000 https://www.atlanticcouncil.org/?p=707988 In 2015, Beijing released Jointly Building a Community with a Shared Future in Cyberspace, a white paper outlining the CCP’s vision for the future of the internet. In the eight years since then, this vision has picked up steam outside of China, largely as the result of Beijing’s efforts to export these ideas to authoritarian countries.

The post Community watch: China’s vision for the future of the internet appeared first on Atlantic Council.

]]>
Table of contents

Executive summary
Introduction
The core of China’s approach
Case studies in China’s “shared future”

Executive summary

China recognizes that many nondemocratic and illiberal developing nations need internet connectivity for economic development. These countries aim to digitize trade, government services, and social interactions, but interconnectivity risks better communication and coordination among political dissidents. China understands this problem and is trying to build global norms that facilitate the provision of its censorship and surveillance tools to other countries. This so-called Community with a Shared Future in Cyberspace, is based around the idea of cyber sovereignty. China contends that it is a state’s right to protect its political system, determine what content is appropriate within its borders, create its own standards for cybersecurity, and govern access to the infrastructure of the internet. 

Jointly Building a Community with a Shared Future in Cyberspace, a white paper from the government of the People’s Republic of China (most recently released in 2022 but reissued periodically since 2015), is a continuation of diplomatic efforts to rally the international community around China’s concept of cyber sovereignty.1 By extending the concept of sovereignty to cyberspace, China makes the argument that the state decides the content, operations, and norms of its internet; that each state is entitled to such determinations as a de facto right of its existence; that all states should have equal say in the administration of the global internet; and that it is the role of the state to balance claims of citizens and the international community (businesses, mostly, but also other states and governing bodies). 

But making the world safe for authoritarian governments is only part of China’s motivation. As the key provider of censorship-ready internet equipment and surveillance tools, China’s concept of cyber sovereignty offers political security to other illiberal governments. Case studies in this report demonstrate how such technologies may play a role in keeping China’s friends in power.

The PRC supports other authoritarian governments for good reason. Many countries in which Chinese state-owned enterprises and PRC-based companies own mineral drawing rights or have significant investments are governed by authoritarians. Political instability threatens these investments, and, in some cases, China’s access to critical mineral inputs to its high-tech manufacturing sector. Without a globally capable navy to compel governments to keep their word on contracts, China is at the mercy of democratic revolutions and elite power struggles in these countries. By providing political security to a state through censorship, surveillance, and hacking of dissidents, China improves its chances of maintaining access to strategic plots of land for military bases or critical manufacturing inputs. A government that perceives itself to be dependent on China for political security is in no position to oppose it.

Outside of China’s strategic objectives, the push for a Community with a Shared Future in Cyberspace may also have an operational impact on state-backed hacking teams.  

As China’s cybersecurity companies earn more customers, their defenders gain access to more endpoints, better telemetry, and a more complete view of global cyber events. Leveraged appropriately, a larger customer base improves defenses. The Ministry of Industry and Information Technology’s Cybersecurity Threat and Vulnerability Information Sharing Platform, which collects information about software vulnerabilities, also collects voluntary incident response reports from Chinese firms responding to breaches of their customers.2 Disclosure of incidents and the vulnerabilities of overseas clients of Chinese cybersecurity firms would significantly increase the PRC’s visibility into global cyber operations by other nations or transnational criminal groups. China’s own defensive posture should also improve as its companies attract more global clients. 

China’s offensive teams could benefit, too. Many cybersecurity firms often allow their own country’s security services to operate unimpeded in their customers’ networks.3 Therefore, it is likely that more companies protected by Chinese cybersecurity companies means fewer networks where China’s offensive hacking teams must worry about evading defenses. 

This report uses cases studies from the Solomon Islands, Russia, and beyond to show how China is operationalizing its view of cyber sovereignty. 

Introduction

A long black slate wall covered in dark hexagonal tiles runs along the side of Nuhong Street in Wuzhen, China, eighty miles southwest of Shanghai. A gap in the middle of the wall leads visitors to the entrance of the Waterside Resort that, for the last nine years, has hosted China’s World Internet Conference, a premier event for Chinese Communist Party (CCP) cyber policymakers.

The inaugural conference didn’t seem like a foreign policy forum. The thousand or so attendees from a handful of countries and dozens of companies listened to a speaker circuit asserting that 5G is the future, big data was changing the world, and the internet was great for economic development—hardly groundbreaking topics in 2014.4 But the internet conference was more than a platform for platitudes about the internet: it also served as China’s soft launch for its international strategy on internet governance.

By the last evening of the conference, some of the attendees had already left, choosing the red-eye flight home over another night by the glass-encased pool on the waterfront. Around 11 p.m., papers slid under doorways up and down the hotel halls. Conference organizers went room by room distributing a proclamation they hoped attendees would endorse just nine hours later.5 Attendees were stunned. The document said: “During the conference, many speakers and participants suggest [sic] that a Wuzhen declaration be released at the closing ceremony.” The papers, stapled and stuffed under doors, outlined Beijing’s views of the internet. The conference attendees—many of whom were members of the China-friendly Shanghai Cooperation Organization—balked at the last-minute, tone-deaf approach to getting an endorsement of Beijing’s thoughts on the internet. The document went unsigned, and the inaugural Wuzhen internet conference wrapped without a sweeping declaration. It was clear China needed the big guns, and perhaps less shady diplomatic tactics, to persuade foreigners of the merits of their views of the internet. 

President Xi Jinping headlined China’s second World Internet Conference in 2015.6 This time the organizers skipped the late-night antics. On stage and reportedly in front of representatives from more than 120 countries and many more technology firm CEOs, Xi outlined a vision that is now enshrined in text as “Jointly Building a Community with a Shared Future in Cyberspace.”7 The four principles and five proposals President Xi laid out in his speech, which generally increase the power of the state and aim to model the global internet in China’s image, remain a constant theme in China’s diplomatic strategy on internet governance.8 In doing so, Xi fired the starting gun on an era of global technology competition that may well lead to blocs of countries aligned by shared censorship and cybersecurity standards. China has reissued the document many times since Xi’s speech, with the latest coming in 2022. 

Xi’s 2015 speech came at a pivotal moment in history for China and many other authoritarian regimes. The Arab Spring shook authoritarian governments around the world just years earlier.9 Social media-fueled revolutions saw some autocrats overthrown or civil wars started in just a few months. China shared the autocrats’ paranoia. A think tank under the purview of the Cyberspace Administration of China acutely summarized the issue of internet governance, stating: “If our party cannot traverse the hurdle represented by the Internet, it cannot traverse the hurdle of remaining in power for the long term.”10 Another PRC government agency report went even further: blaming the US Central Intelligence Agency for no fewer than eleven “color revolutions” since 2003: the National Computer Virus Emergency Response Center claimed that the United States was providing critical technical support to pro-democracy protestors.11 Specifically, the center blamed the CIA for five technologies—ranging from encrypted communications to “anti-jamming” WiFi that helped connect protestors—that played into the success of color revolutions. Exuberance in Washington over the internet leveling the playing field between dictators and their oppressed citizens was matched in conviction, if not in tone, by leaders from Beijing to Islamabad.

But China and other repressive regimes could not eschew the internet. The internet was digitizing everything, from social relationships and political affiliations to commerce and trade. Authoritarians needed a way to reap the benefits of the digital economy without introducing unacceptable risks to their political systems. China’s approach, called a Community with a Shared Future in Cyberspace,12 responds to these threats as a call to action for authoritarian governments and a path toward more amenable global internet governance for authoritarian regimes. It is, as one expert put it, China switching from defense to offense.13

The core of China’s approach

The PRC considers four principles key to structuring the future of cyberspace. These principles lay the conceptual groundwork for the five proposals, which reflect the collective tasks to build this new system. Table 1 shows the principles, which were drawn from Xi’s 2015 speech.14


Table 1: China’s Four Principles, in Xi’s Words

  • Respect for cyber sovereignty: “The principle of sovereign equality enshrined in the Charter of the United Nations is one of the basic norms in contemporary international relations. It covers all aspects of state-to-state relations, which also includes cyberspace. We should respect the right of individual countries to independently choose their own path of cyber development, model of cyber regulation and Internet public policies, and participate in international cyberspace governance on an equal footing. No country should pursue cyber hegemony, interfere in other countries’ internal affairs or engage in, connive at or support cyber activities that undermine other countries’ national security.”
  • Maintenance of peace and security: “A secure, stable and prosperous cyberspace is of great significance to all countries and the world. In the real world, there are still lingering wars, shadows of terrorism and occurrences of crimes. Cyberspace should not become a battlefield for countries to wrestle with one another, still less should it become a hotbed for crimes. Countries should work together to prevent and oppose the use of cyberspace for criminal activities such as terrorism, pornography, drug trafficking, money laundering and gambling. All cyber crimes, be they commercial cyber thefts or hacker attacks against government networks, should be firmly combated in accordance with relevant laws and international conventions. No double standards should be allowed in upholding cyber security. We cannot just have the security of one or some countries while leaving the rest insecure, still less should one seek the so-called absolute security of itself at the expense of the security of others.”
  • Promotion of openness and cooperation: “As an old Chinese saying goes, ‘When there is mutual care, the world will be in peace; when there is mutual hatred, the world will be in chaos.’ To improve the global Internet governance system and maintain the order of cyberspace, we should firmly follow the concept of mutual support, mutual trust and mutual benefit and reject the old mentality of zero-sum game or ‘winner takes all.’ All countries should advance opening-up and cooperation in cyberspace and further substantiate and enhance the opening-up efforts. We should also build more platforms for communication and cooperation and create more converging points of interests, growth areas for cooperation and new highlights for win-win outcomes. Efforts should be made to advance complementarity of strengths and common development of all countries in cyberspace so that more countries and people will ride on the fast train of the information age and share the benefits of Internet development.”
  • Cultivation of good order: “Like in the real world, freedom and order are both necessary in cyberspace. Freedom is what order is meant for and order is the guarantee for freedom. We should respect Internet users’ rights to exchange their ideas and express their minds, and we should also build a good order in cyberspace in accordance with law as it will help protect the legitimate rights and interests of all Internet users. Cyberspace is not a place beyond the rule of law. Cyberspace is virtual, but players in cyberspace are real. Everyone should abide by the law, with the rights and obligations of parties concerned clearly defined. Cyberspace must be governed, operated and used in accordance with law, so that the Internet can enjoy sound development under the rule of law. In the meantime, greater efforts should be made to strengthen ethical standards and civilized behaviors in cyberspace. We should give full play to the role of moral teachings in guiding the use of the Internet to make sure that the fine accomplishments of human civilizations will nourish the growth of cyberspace and help rehabilitate cyber ecology.”

The four principles are not of equal importance. “Respecting cyber sovereignty” is the cornerstone of China’s vision for global cyber governance. China introduced and argued for the concept in its first internet white paper in 2010.15 But cyber sovereignty is not itself controversial. The idea that a government can regulate things within its borders is nearly synonymous with what it means to be a state. Issues arise with the prescriptive and hypocritical nature of the three following principles. 

Under the “maintenance of peace and security principle,” China—a country with a famously effective and persistent ability to steal and commercialize foreign intellectual property16—suggests that all countries should abhor cyberattacks that lead to IP theft or government spying. Xi’s statement establishes equivalency between two things held separate in Western capitalist societies: intellectual property rights and trade secrets versus espionage against other governments. China holds what the US prizes but cannot defend well, IP and trade secrets, next to what China prizes but cannot guarantee for itself, the confidentiality of state secrets. The juxtaposition was an implicit bargain and one that neither would accept. In considering China’s proposition, the US continuation of traditional intelligence-collection activities contravenes China’s “peace and security principle,” providing the Ministry of Foreign Affairs spokesperson a reason to blame the United States when China is caught conducting economic espionage. 

“Promotion of openness and cooperation” is mundane enough to garner support until users read the fine print or ask China to act on this principle. Asking other countries to throw off a zero-sum mentality and view the internet as a place for mutual benefit, Xi unironically asks states to pursue win-win benefits. This argument blatantly ignores the clear differences in market access between foreign tech companies in the PRC and Chinese firms’ access to foreign markets. Of course, if a country allows a foreign firm into its market, by Xi’s argumentation, the country must have decided it was a win-win decision. It’s unclear if refusing market access to a Chinese company would be acceptable or if that would fall under zero-sum mentality and contravene the value of openness. Again, China’s rhetoric misrepresents the conditions it would likely accept. 

Cultivating “good order” in cyberspace, at least as Xi conceptualizes it, is impossible for democratic countries with freedom of speech. Entreaties that “order” be the guarantor of freedom of speech won’t pass muster in many nations, at least not the “order” sought by China’s policymakers. A report from the Institute for a Community with a Shared Future shines light onto what type of content might upset the “good order.” In its Governing the Phenomenon of Online Violence Report, analysts identify political scandals like a deadly 2018 bus crush in Chongqing or the 2020 “Wuhan virus leak rumor” as examples of online violence, alongside a case where a woman was bullied to suicide.17 Viewing political issues as “online violence” associated with good order is not just a one-off report. Staff at the Institute argue that rumors spread at the start of the pandemic in 2020 “highlight the necessity and urgency of building a community with a shared future in cyberspace.”18 For China, “online violence” is a euphemism for speech deemed politically sensitive by the government. If “making [the internet] better, cleaner and safer is the common responsibility of the international community,”19 as Xi argues, how will China treat countries it sees as abrogating its responsibility to combat such online violence? Will countries whose internet service providers rely on Chinese cloud companies or network devices be able to decide that criticizing China is acceptable within its own borders?

China’s five proposals 

The five proposals used to construct China’s Community with a Shared Future in Cyberspace carry less weight and importance than its four principles. The proposals are not apparently attached to specific funding or policy initiatives, and did not receive attention from China’s foreign ministry. They are, at most, way stations along the path to a shared future. The proposals are:

  1. Speeding up the construction of a global internet infrastructure and promoting interconnectivity.
  2. Building an online platform for cultural exchange and mutual learning.
  3. Promoting the innovative development of the cyber economy and common prosperity. 
  4. Maintaining cyber security and promoting orderly development. 
  5. Building an internet governance system and promoting equity and justice.

Implications and the future of the global internet

China’s argument for its view of global internet governance and the role of the state rests on solid ground. The PRC frequently points to the General Data Protection Regulation (GDPR) in the European Union as a leading example of the state’s role in internet regulation. The GDPR allows EU citizens to have their data deleted, forces businesses to disclose data breaches, and requires websites to give users a choice to accept or reject cookies (and what kind) each time they visit a new website. China points to concerns in the United States over foreign interference on social media as evidence of US buy-in on China’s view of cyber sovereignty. Even banal regulations like the US “know your customer” rule—which requires some businesses to collect identifying personal information about users, usually for tax purposes—fit into Beijing’s bucket of evidence. But the alleged convergence between the views of China and democratic nations stops there.

Divergent values between liberal democracies and the coterie of PRC-aligned autocracies belie our very different interpretations of the meaning of cyber sovereignty. A paper published in the CCP’s top theoretical journal mentions both the need to regulate internet content and “promote positive energy,” a Paltrowesque euphemism for party-boosting propaganda, alongside 

endorsements of the cyber sovereignty principle.20 The article extrapolates on what Xi made clear in his 2015 speech. For the CCP, censorship and sovereignty are inextricably linked. 

These differences are not new. Experts dedicate significant coverage to ongoing policy arguments at the UN, where China repeatedly pushes to classify the dissemination of unwanted content—read politically intolerable—as a crime.21 As recently as January 2023, China offered an amendment to a UN treaty attempting to make sharing false information online illegal.22 A knock-on effect of media coverage related to disinformation campaigns from China and Russia—despite their poor performance23—means policymakers, pundits, and journalists make China’s point that narratives promoted by other nations is an issue to be solved. What counts as disinformation can be meted out on a country-by-country basis. The tension between the desire to protect democracy from foreign influence and the liberal value of promoting free speech and truth in authoritarian systems is palpable. 
The United States has fueled the CCP’s concern with its public statements. China’s internet regulators criticized the United States’ Declaration for the Future of the Internet.24 The CCP, which is paranoid about foreign attempts to support “color revolutions” or foment regime change, is rightfully concerned. The United States’ second stated principle for digital technologies is to promote “democracy,” a value antithetical to continuing CCP rule over the PRC. The universal value democratic governments subscribe to—the consent of the governed—drives the US position on the benefits of connectedness. That same value scares authoritarian governments. 

Operationalizing our shared future

Jointly Build a Community with a Shared Future in Cyberspace alludes to the pathways the CCP will use to act on its vision. The document includes detailed statistics about the rollout of IPv6—a protocol for issuing internet-connected device addresses that could ease surveillance—use of the Beidou Satellite Navigation system within China and elsewhere, the domestic and international use of 5G, development of transformational technologies like artificial intelligence and Internet of Things devices, and the increasingly widespread use of internet-connected industrial devices.25 The value of different markets, like that of e-commerce or trade enabled by any of the preceding systems, are repeated many times over the course of the document. It’s clear that policymakers see the fabric of the internet—its devices, markets, and economic value—as expanding. Owning the avenues of expansion, then, is key to spreading the CCP’s values as much as it is about making money.  

Authoritarian and nondemocratic developing countries provide a bountiful market for China’s goods. Plenty of developing nations and authoritarian governments want to tighten control over the internet in their countries. Recent research demonstrates an increasing number of incidents when governments shut off the internet in their countries—a good proxy for their interest in censorship.26 These governments need the technology and tools to finely tune their control over the internet. Owing to the political environment inside the PRC, Chinese tech firms already build their products to facilitate censorship and surveillance.27 Some countries are having luck rolling out these services. The Australian Strategic Policy Institute found that “with technical support from China, local governments in East Africa are escalating censorship on social media platforms and the internet.”28 These findings are mirrored by reporting from Censys, a network data company, that found, among other things, a significant footprint for PRC-made network equipment in four African countries.29 In fact, there is no public list of countries that acknowledge supporting the Community with a Shared Future in Cyberspace approach, but there are good indicators for which nations are mostly likely to participate. 

A 2017 policy paper entitled International Strategy of Cooperation on Cyberspace indicated that China would carry out “cybersecurity cooperation” with “the Conference on Interaction and Confidence Building Measures in Asia (CICA), Forum on China-Africa Cooperation (FOCAC), China-Arab States Cooperation Forum, Forum of China and the Community of Latin American and Caribbean States and Asian-African Legal Consultative Organization.”30 But an international strategy document stating the intent to cooperate with most of the Global South is not the same as actually doing so. The 2017 strategy document is, at most, aspirational.

Instead, bilateral agreements and technical agreements between government agencies to work together on cybersecurity or internet governance are better indicators of who is part of China’s “community with a shared future.” For example, Cuba and the PRC signed a comprehensive partnership agreement on cybersecurity in early 2023, though the content of the deal remains secret.31 China has made few public announcements about other such agreements. In their place, the China National Computer Emergency Response Center (CNCERT) has “established partnerships with 274 CERTs in 81 countries and territories and signed cybersecurity cooperation memorandums with 33 of them.”32 But even these countries are not publicly identified.33 A few nations or groups are regularly mentioned around the claims of CNCERT’s international partnerships, however. Thailand, Cambodia, Laos, Malaysia, the Association of Southeast Asian Nations, the United Arab Emirates, Saudi Arabia, Brazil, South Africa, Benin, and the Shanghai Cooperation Organization are frequently mentioned. The paper on jointly building a community also mentions the establishment of the China-ASEAN Cybersecurity Exchange and Training Center, the utility of which may be questioned given China’s track record of state-backed hacking campaigns against its members.34

Along with the identity of their signatories, the contents of these agreements and their benefits also remain private. Without access to any of these agreements, one can only speculate about their benefits. There are also no countries especially competent at cyber operations or cybersecurity mentioned in the list above. The result may be that CNCERT and its certified private-sector partners receive “first dibs” when government agencies or other entities in these countries need incident response services; receiving favorable terms or financing from the Export-Import Bank of China to facilitate the purchase of PRC tech also aligns with other observed behavior.35

Besides favorable terms of trade for PRC tech and cybersecurity firms, some of the CNCERT international partners may also be subject to intelligence-sharing agreements. CNCERT operates a software vulnerability database called China National Information Security Vulnerability Sharing Platform, which accepts submissions from the public and partners with at least three other vulnerability databases.36 CNCERT’s international partnerships could add another valuable pipeline of software vulnerability information into China’s ecosystem. Moreover, under a 2021 regulation, Chinese firms conducting incident response for clients can voluntarily disclose those incidents to the Ministry of Industry and Information Technology’s “Cybersecurity Threat and Vulnerability Information Sharing Platform,” which has a separate system for collecting information about breaches.37 The voluntary disclosure of incidents and mandatory disclosure of vulnerabilities observed in overseas clients of Chinese cybersecurity firms would significantly increase the PRC’s visibility into global cyber operations by other nations or transnational criminal groups. 

Offensive capabilities, not just global cybersecurity, might be on CCP policymakers’ minds, too, when other countries agree to partner with China. Cybersecurity firms frequently allow their own country’s offensive teams to work unimpeded on their customers’ networks: with each new client China’s cybersecurity companies add to their rosters, China’s state-backed hackers may well gain another network where they can work without worrying about defenders.38 In this vein, Chen Yixin, the head of the Ministry of State Security, attended a July 2023 meeting of the Cyberspace Administration of China that underlined the importance of the Community with a Shared Future in Cyberspace.39 In September 2023, Chen published commentary in the magazine of the Cyberspace Administration of China arguing that supporting the Shared Future in Cyberspace was important work.40 Researchers from one cybersecurity firm found that the PRC has been conducting persistent, offensive operations against many African and Latin American states, even launching a special cross-industry working group to monitor PRC activities in the Global South.41 Chinese cybersecurity companies operating in those markets have not drawn similar attention to those operations. 

But China’s network devices and cybersecurity companies don’t just facilitate surveillance, collect data for better defense, or offer a potential offensive advantage, they can also be used to shore up relationships between governments and provide Beijing an avenue for influence. The Wall Street Journal exposed how Huawei technicians were involved in helping Ugandan security services track political opponents of the government.42 China’s government and its companies support such operations elsewhere, too. One source alleged that PRC intelligence officers were involved in cybersecurity programs of the UAE government, including offensive hacking and collection for the security services.43 The closeness of the relationship is apparent in other ways, too. The UAE is reportedly allowing China’s military to build a naval facility, jeopardizing the longevity of US facilities in the area, and tarnishing the UAE’s relationship with the United States.44

Providing other nondemocratic governments with offensive services and capabilities allows China to form close relationships with other regimes whose primary goal, like the CCP, is to maintain the current government’s hold on power. In illiberal democracies, such cooperation helps Beijing expand its influence and provides backsliding governments capabilities they would not otherwise have. 

China is plainly invested in the success of many other nondemocratic governments. Around the world, state-owned enterprises and private companies have inked deals in extractive industries that total billions of dollars. Many of these deals, say for mining copper or rare earth elements, provide critical inputs to China’s manufacturing capacity—they are the lifeblood of many industries, from batteries to semiconductors.45 In countries without strong rule of law, continued access to mining rights may depend on the governments that signed and approved those operations staying in power. China is already suffering from such abrogation of agreements in Mexico after the country’s president renationalized the country’s lithium deposits.46 Countries where China has significant interests, like the Democratic Republic of the Congo, are also considering nationalizing such assets.47 Close relationships with political elites, bolstered by agreements that provide political security, make it more difficult for those elites to renege on their contracts—or lose power to someone else who might. 

China cannot currently project military power around the world to enforce contracts or compel other governments. In lieu of a blue-water navy, China offers what essentially amounts to political security services by censoring internet content, monitoring dissidents, and hacking political opponents—and a way to align the interests of other authoritarian governments with its own. If a political leader feels that China is a guarantor of their own rule, they are much more likely to side with Beijing on matters big and small. A recent series of events in the Solomon Islands provide a portrait of what this can look like. 

Case studies in China’s “shared future”

The saga surrounding the Solomon Islands provides a good example of China’s model for internet governance and the reasons for its adoption. 

Over the course of 2022, the international community watched as the Solomon Islands vacillated on its course and in statements, and prevaricated about secret commitments to build a naval base for China. After a draft agreement for the Solomon Islands to host the People’s Liberation Army Navy (PLAN), the navy of the CCP’s military, was leaked to the press in March 2022, representatives of the Solomon Islands stated the agreement would not allow PLA military bases.48 Senior delegations from both Australia and the United States rushed to meet with representatives of the Pacific Island nation.49 Even opposition leaders in the Solomon Islands—who were surprised by the leaked documents—agreed that claims of PLA military bases should not be taken at face value.50 The back and forth by the Solomon Islands’ political parties worried China. In May 2022, a Chinese hacking team breached the Solomon Islands’ government systems, likely to assess the future of their agreement in the face of the island nation’s denials.51

But the denials only bought Solomon Islands Prime Minister Manasseh Sogavare more time. In August, the ruling party introduced a bill to delay elections from May 2023 to December of that year.52 Shortly thereafter, the Solomon Islands announced a deal to purchase 161 Huawei telecoms towers financed by the Export-Import Bank of China.53 (The deal came just four years after Australia had successfully prevented the Solomon Islands from partnering with Huawei to lay undersea cables to provide internet access to the island nation.)54 Months later, the foreign press reported in October 2022 that the Solomon Islands had sent police to China for training.55 Local contacts in the security services may be useful for the PRC. A provision of the drafted deal leaked in March 2022 allows PLA service members to travel off base in the event of “social unrest.”56 Such contacts could facilitate interventions in a political crisis on behalf of PM Sogavare or his successor. In the summer of 2023, China and the Solomon Islands signed an agreement expanding cooperation on cybersecurity and policing.57

To recap, in a single year the Solomon Islands agreed to host a PLAN base, delayed an election for Beijing’s friend, sent security services to train in the PRC, and rolled out PRC-made telecommunications equipment that can facilitate surveillance of political opponents. In the international system the CCP seeks, one that makes normal the censorship of political opponents and makes it a crime to disseminate information critical of authoritarian regimes, the sale of censorship as a service directly translates into the power to influence domestic politics in other nations. If there was a case study to sell China’s version of internet governance to nascent authoritarian regimes around the world, it would be the Solomon Islands.


In the international system the CCP seeks, one that makes normal the censorship of political opponents and makes it a crime to disseminate information critical of authoritarian regimes, the sale of censorship as a service directly translates into the power to influence domestic politics in other nations.


For countries with established authoritarian regimes, buying into China’s vision of internet governance and control is less about delaying elections and buying Huawei cell towers, and more about the transfer of expertise and knowledge of how to repress more effectively. Already convinced on the merits of China’s vision, these governments lack the expertise and technical capabilities to implement their shared vision of control over the internet. 

Despite its capable but sometimes blunder-prone intelligence services, Russia was recently found to be soliciting technical expertise and training from China on how to better control its domestic internet content.58Documents obtained by Radio Free Europe/Radio Liberty detailed how Russian government officials met with teams from the Cyberspace Administration of China in 2017 and 2019 to discuss how to crack down on virtual private networks, messaging apps, and online content. Russian officials even went so far as to request that a Russian team visit China to better understand how China’s Great Firewall works and how to “form a positive image” of Russia on the domestic and foreign internet.59 The leaked documents align with what the PRC’s policy document details already. 

Since 2016, they have co-hosted five China-Russia Internet Media Forum[s] to strengthen new media exchanges and cooperation between the two sides. Through the Sino-Russian Information Security Consultation Mechanism, they have constantly enhanced their coordination and cooperation on information security.

The two countries formalized the agreement that served as the basis for their cooperation on the sidelines of the World Internet Conference in 2019.60 They could not have picked a better venue to signify what China’s Community with a Shared Future in Cyberspace policy would mean for the world. 

The Solomon Islands and Russia neatly capture the spectrum of countries that might be most interested in China’s vision for the global internet. At each step along the spectrum, China has technical capabilities, software, services, and training it can offer to regimes from Borneo to Benin. 

In conclusion, the chart below provides a visualization of the spectrum of countries that could be the most interested in implementing China’s Community for a Shared Future in Cyberspace.61

Figure 1: PRC tech influence vs. democracy index score

Sources: Data from “China Index 2022: Measuring PRC Influence Around the Globe,” Doublethink Lab and China In The World Lab, https://china-index.io/; and “The World’s Most, and Least, Democratic Countries in 2022,” Economist, February 1, 2023, https://www.economist.com/graphic-detail/2023/02/01/the-worlds-most-and-least-democratic-countries-in-2022

By combining data from The Economist Democracy Index (a proxy for a country’s adherence to democratic norms and institutions) and Doublethink Lab’s China Index for PRC Technology Influence (limited to eighty countries and a proxy for a country’s exposure to, and integration of, PRC technology in its networks and services), this chart represents countries with low scores on democracy and significant PRC technology influence in the bottom right. Based on this chart, Pakistan in the most likely to support the Shared Future concept. Indeed, Pakistan has its own research center on the “Community for a Shared Future” concept.62The research center is hosted by the Communications University of China, which works closely with the CCP’s International Liaison Department responsible for keeping good relationships with foreign political parties. 

Internet conference goes prime time

The 2022 Wuzhen World Internet Conference got an upgrade and name change: the annual conference became an organization based in Beijing and the summit continues as its event, now called the World Internet Conference (WIC). The content from all previous Wuzhen conferences plasters the new organization’s website.63

An odd collection of six entities founded the new WIC organization: Groupe Speciale Mobile Association (GSMA), a mobile device industry organization; China Internet Network Information Center (CNNIC), which is responsible for China’s top-level .cn domain and IPv6 rollout, among others functions; ChinaCERT, mentioned above; Alibaba; Tencent; and Zhejiang Labs.64 Another report by the author connects the last organization, Zhejiang Labs, to research on AI for cybersecurity and some oversight by members of the PLA defense establishment.65

Though the Wuzhen iteration of the conference also included components of competition for technical innovation and research, the new collection of organizations overseeing WIC suggests it will focus more on promoting the fabric of the internet—hardware, software, and services—made by PRC firms. China’s largest tech companies including Alibaba and Tencent stand to benefit from China’s vision for global internet governance if the PRC can convince other countries to support its aims (and choose PRC firms to host their data in the process). Any policy changes tied to the elevation of the conference will become apparent over the coming years. For now, WIC will maintain the mission and goals of the Wuzhen conference.

Conclusion

China’s vision for the internet is really a vision for global norms around political speech, political oppression, and the proliferation of tools and capabilities that facilitate surveillance. Publications written by current and former PRC government officials on China’s “Shared Future for Humanity in Cyberspace” argue that the role of the state has been ignored until now, that each state can determine what is allowed on its internet—through the idea of cyber sovereignty, and that the political interests of the state are the core value that drives decision-making. Dressed up in language about the future of humanity, China’s vision for the internet is one safe for authoritarians to extract value from the interconnectedness of today’s economy while limiting risk to their regime’s stability. 

China is likely to pursue agreements on cybersecurity and internet content control in regimes where it stands to lose most if the government changed hands. China’s grip on the critical minerals market is only as strong as its partners’ grip on power. In many authoritarian, resource-rich countries, a change of government could mean the renegotiation of contracts for access to natural resources or their outright nationalization—jeopardizing China’s access to important industrial inputs. Although internet censorship and domestic surveillance capabilities do not guarantee an authoritarian government will stay in power, it does improve their odds. China lacks a globally capable navy to project power and enforce contracts negotiated with former governments, so keeping current signatories in power is China’s best bet. 

China will not have to work hard to promote its vision for internet governance in much of the world. Instead of China advocating for a new system that countries agree to use, then implement, the causality is reversed. Authoritarian regimes that seek economic benefits of widespread internet access are more apt to deploy PRC-made systems that facilitate mass surveillance, thus reducing the risks posed by increased connectivity. China’s tech companies are well-positioned to sell these goods, as their domestic market has forced them to perfect the capabilities of oppression.66 The example of Russia’s cooperation and learning from China demonstrates what the demand signal from other countries might look like. Elsewhere, secret agreements between national CERTs could facilitate access that allows for greater intelligence collection and visibility. Many Arabian Gulf countries already deploy PRC-made telecoms kit and hire PRC cybersecurity firms to do sensitive work. As the world’s autocrats roll out China’s technology, their countries will be added to the brochures of firms advertising internet connectivity, surveillance, and censorship services to their peers. Each nation buying into China’s Community for a Shared Future may well be a case study on the successful use of internet connectivity without increasing political risks: a world with fewer Arab Springs or “color revolutions.” 

About the author

Dakota Cary is a nonresident fellow at the Atlantic Council’s Global China Hub and a consultant at SentinelOne. He focuses on China’s efforts to develop its hacking capabilities.

The author extends special thanks to Nadège Rolland, Tuvia Gering, Tom Hegel, Kenton Thibaut, and Kitsch Liao for their edits and contributions. 

1    “China’s Internet White Paper,” China.org.cn, last modified June 8, 2010, accessed January 24, 2022, https://web.archive.org/web/20220124005101/http:/www.china.org.cn/government/whitepaper/2010-06/08/content_20207978.htm.
2    Dakota Cary and Kristin Del Rosso, “Sleight of Hand: How China Weaponizes Software Vulnerability,” Atlantic Council, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/sleight-of-hand-how-china-weaponizes-software-vulnerability/.
3    I assume that a process for counterintelligence and operational deconfliction exists within the PRC security services, particularly for the more than one hundred companies that support the civilian intelligence service. Other mature countries have such processes and I graciously extend that competency to China.
4    Liu Zheng, “Foreign Experts Keen on Interconnected China Market,” China Daily, 2014, https://www.wuzhenwic.org/2014-11/20/c_548230.htm.
5    Catherine Shu, “China Tried to Get World Internet Conference Attendees to Ratify This Ridiculous Draft Declaration,” TechCrunch, 2014, https://techcrunch.com/2014/11/20/worldinternetconference-declaration/.
6    Xi Jinping, “Remarks by H.E. Xi Jinping President of the People’s Republic of China at the Opening Ceremony of the Second World Internet Conference,” Ministry of Foreign Affairs of the People’s Republic of China, December 24, 2015, https://www.fmprc.gov.cn/eng/wjdt_665385/zyjh_665391/201512/t20151224_678467.html.
7    State Council Information Office of the People’s Republic of China, “Full Text: Jointly Build a Community with a Shared Future in Cyberspace,” November 7, 2022, http://english.scio.gov.cn/whitepapers/2022-11/07/content_78505694.htm. At the time, Xi was building on the nascent “shared future for humanity” concept introduced at the Eighteenth Party Congress in 2012; see Xinhua News Agency, “A Community of Shared Future for All Humankind,” Commentary, March 20, 2017, http://www.xinhuanet.com/english/2017-03/20/c_136142216.htm. However, state media has since claimed that the “shared future” concept was launched during a March 2013 event that Xi participated in while visiting Moscow; see Central Cyberspace Affairs Commission of the People’s Republic of China, “共行天下大道 共创美好未来——写在习近平主席提出构建人类命运共同体理念十周年之际,” PRC, March 24, 2023, http://www.cac.gov.cn/2023-03/24/c_1681297761772755.htm. The party rolled out the concept as part of its foreign policy and even added its language to the constitution in 2018; see N. Rolland [@RollandNadege], “My latest for @ChinaBriefJT on China’s ‘community with a shared future for humanity,’ which is BTW now enshrined in PRC Constitution,” Twitter (now X), February 26, 2018, https://twitter.com/RollandNadege/status/968152657226555392, as also seen in N. Rolland, ed., An Emerging China-Centric Order: China’s Vision for a New World Order in Practice, National Bureau of Asian Research, 2020, https://www.nbr.org/wp-content/uploads/pdfs/publications/sr87_aug2020.pdf.
8    The PRC has even republished the 2015 document with updated statistics every few years, most recently in 2022; see State Council Information Office, “Full Text: Jointly Build a Community with a Shared Future in Cyberspace.”
9    US Director of National Intelligence (DNI), “Digital Repression Growing Globally, Threatening Freedoms,” [PDF file],  ODNI, April 24, 2023, https://www.dni.gov/files/ODNI/documents/assessments/NIC-Declassified-Assessment-Digital-Repression-Growing-April2023.pdf.
10    E. Kania et al., “China’s Strategic Thinking on Building Power in Cyberspace,” New America, September 25, 2017, https://www.newamerica.org/cybersecurity-initiative/blog/chinas-strategic-thinking-building-power-cyberspace/.
11    National Computer Virus Emergency Response Center, “‘Empire of Hacking’: The U.S. Central Intelligence Agency—Part I,” [PDF file], May 4, 2023, https://web.archive.org/web/20230530221200/http:/gb.china-embassy.gov.cn/eng/PressandMedia/Spokepersons/202305/P020230508664391507653.pdf.
12    Occasionally, translations refer to this as “a Community with a Shared Destiny [for Mankind]” or “Shared Future for Humanity in Cyberspace.” See State Council Information Office of the People’s Republic of China, “Full text: Jointly Build a Community with a Shared Future in Cyberspace.”
13    Thanks to Nadege Rolland for her keen insight. 
14    Xi, “Remarks by H.E. Xi Jinping President of the People’s Republic of China.” 
15    “China’s Internet White Paper,” China.org.cn. Thanks to Tuvia Gering for flagging this.
16    W. C. Hannas, J. Mulvenon, and A. B. Puglisi, Chinese Industrial Espionage: Technology Acquisition and Military Modernisation (Abingdon, United Kingdom: Routledge, 2013), https://doi.org/10.4324/9780203630174.
17    Institute for a Community with Shared Future, “《网络暴力现象治理报告》 [Governance Report on the Phenomenon of Internet Violence],” Communication University of China, July 1, 2022, https://web.archive.org/web/20221205001148/https:/icsf.cuc.edu.cn/2022/0701/c6043a194580/page.htm; andInstitute for a Community with Shared Future, “Full Text《网络暴力现象治理报告》[Governance Report on the Phenomenon of Internet Violence],” Communication University of China, July 1, 2022, https://archive.ph/B741D.
18    Institute for a Community with Shared Future, “Understanding the Global Cyberspace Development and Governance Trends to Promote the Construction of a Cyberspace Community with a Shared Future,” Communication University of China, September 9, 2020, www.archive.ph/7XQyX.
19    Xi, “Remarks by H.E. Xi Jinping President of the People’s Republic of China.”
20    R. Creemers, P. Triolo, and G. Webster, “Translation: China’s New Top Internet Official Lays Out Agenda for Party Control Online,” New America, September 24, 2018, https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinas-new-top-internet-official-lays-out-agenda-for-party-control-online/.
21    M. Schmitt, “The Sixth United Nations GGE and International Law in Cyberspace,” Just Security (forum), June 10, 2021, https://www.justsecurity.org/76864/the-sixth-united-nations-gge-and-international-law-in-cyberspace/; and S. Sabin, “The UN Doesn’t Know How to Define Cybercrime,” Axios Codebook (newsletter), January 10, 2023, https://www.axios.com/newsletters/axios-codebook-e4388c1d-d782-4743-b96f-c228cdc7baa1.html.
22    A. Martin, “China Proposes UN Treaty Criminalizes ‘Dissemination of False Information,’ ” Record, January 17, 2023, https://web.archive.org/web/20230118135457/https:/therecord.media/china-proposes-un-treaty-criminalizing-dissemination-of-false-information/.
23    R. Serabian and L. Foster, “Pro-PRC Influence Campaign Expands to Dozens of Social Media Platforms, Websites, and Forums in at Least Seven Languages, Attempted to Physically Mobilize Protesters in the U.S.,” Mandiant, September 7, 2021, https://www.mandiant.com/resources/blog/pro-prc-influence-campaign-expands-dozens-social-media-platforms-websites-and-forums; and G. Eady et al., “Exposure to the Russian Internet Research Agency Foreign Influence Campaign on Twitter in the 2016 US Election and Its Relationship to Attitudes and Voting Behavior, Nature Communications 14, no. 62 (2023), https://www.nature.com/articles/s41467-022-35576-9#MOESM1.
24    State Council of Information Office, PRC, “LIVE: Press Conference on White Paper on Jointly Building Community with Shared Future in Cyberspace,” New China TV, streamed live November 6, 2022, YouTube video, https://www.youtube.com/watch?v=hBYbjnSeLX0.
25    China Daily, “Jointly Build a Community with a Shared Future in Cyberspace,” November 8, 2022, https://archive.ph/ch3LP+.
26    Access Now, “Internet Shutdowns in 2022,” 2023, https://www.accessnow.org/internet-shutdowns-2022/.
27    K. Drinhausen and J. Lee, “CCP 2021: Smart Governance, Cyber Sovereignty, and Tech Supremacy,” Mercator Institute for China Studies (MERICS), June 15, 2021, https://merics.org/en/ccp-2021-smart-governance-cyber-sovereignty-and-tech-supremacy.
28    N. Attrill and A. Fritz, “China’s Cyber Vision: How the Cyberspace Administration of China Is Building a New Consensus on Global Internet Governance,” Australian Strategic Policy Institute, November 24, 2021, https://www.aspi.org.au/report/chinas-cyber-vision-how-cyberspace-administration-china-building-new-consensus-global.
29    S. Hoffman, “Potential Chinese influence on African IT infrastructure,” Censys, March 8, 2023,   https://censys.com/potential-chinese-influence-on-african-it-infrastructure/.
30    Xinhua, “Full Text: International Strategy of Cooperation on Cyberspace,” March 1, 2017, https://perma.cc/GDY6-6ZF8.
31    Prensa Latina, “Cuba and China Sign Agreement on Cybersecurity,” 2023, April 3, 2023,  https://www.plenglish.com/news/2023/04/03/cuba-and-china-sign-agreement-on-cybersecurity/.
32    China Daily, “Jointly Build.” CNCERT is a government-organized nongovernmental organization, not a direct government agency. It reports incidents and software vulnerabilities to PRC government agencies, including the 867-917 National Security Platform, and a couple of Ministry of Public Security Bureaus. See About Us (archive.vn).
33    When asked for records of these international partners, CNCERT directed the author back to the home page of the organization’s website.
35    Asian Development Bank, “Information on the Export-Import Bank of China,” n.d., https://www.adb.org/sites/default/files/linked-documents/46058-002-sd-04.pdf.
36    D. Cary and K. Del Rosso, Sleight of Hand: How China Weaponizes Software Vulnerabilities, Atlantic Council, September 6, 2023,  https://www.atlanticcouncil.org/in-depth-research-reports/report/sleight-of-hand-how-china-weaponizes-software-vulnerability/ 
37    Cary and Del Rosso, Sleight of Hand.
38    I assume that a process for counterintelligence and operational deconfliction exists with the PRC security services. Other mature countries have such processes and I graciously extend that competency to China.
39    Xinhua, “习近平对网络安全和信息化工作作出重要指示,” July 15, 2023, https://archive.ph/GkqnS.
40    Chen Yixin, Secretary of the Party Committee and Minister of the Ministry of National Security, “Strengthening National Security Governance in the Digital Era,” China Internet Information Journal, September 26, 2023,  (中国网信). 国家安全部党委书记、部长陈一新:加强数字时代的国家安全治理–理论-中国共产党新闻网 (archive.ph).
41    M. Hill, “China’s Offensive Cyber Operations Support Soft Power Agenda in Africa,” CSO Online, September 21, 2023, https://www.csoonline.com/article/652934/chinas-offensive-cyber-operations-support-soft-power-agenda-in-africa.html; and T. Hegel, “Cyber Soft Power | China’s Continental Takeover,” SentinelOne, September 21, 2023, https://www.sentinelone.com/labs/cyber-soft-power-chinas-continental-takeover/.
42    J. Parkinson, N. Bariyo, and J. Chin, “Huawei Technicians Helped African Governments Spy on Political Opponents, Wall Street Journal, August 15, 2019, https://archive.ph/Xtwl1.
43    Interview conducted in confidentiality; the name of the interviewee is withheld by mutual agreement.
44    J. Hudson, E. Nakashima, and L. Sly, “Buildup Resumed at Suspected Chinese Military Site in UAE, Leak Says,”  Washington Post, April 26, 2023, https://www.washingtonpost.com/national-security/2023/04/26/chinese-military-base-uae/.
45    Congressional Research Service, “Rare Earth Elements: The Global Supply Chain,” December 16, 2013,   https://crsreports.congress.gov/product/pdf/R/R41347/20; M. Humphries, “China’s Mineral Industry and U.S. Access to Strategic and Critical Minerals: Issues for Congress,” Congressional Research Service, March 20, 2015,  https://sgp.fas.org/crs/row/R43864.pdf; and the White House, “Building Resilient Supply Chains, Revitalizing American Manufacturing, and Fostering Broad-based Growth: 100-Day Reviews Under Executive Order 14017,”  June 2021, https://www.whitehouse.gov/wp-content/uploads/2021/06/100-day-supply-chain-review-report.pdf.
47    “The Green Revolution Will Stall without Latin America’s Lithium,” Economist, May 2, 2023, https://www.economist.com/the-americas/2023/05/02/the-green-revolution-will-stall-without-latin-americas-lithium.
48    N. Fildes and K. Hille, “Beijing Closes in on Security Pact That Will Allow Chinese Troops in Solomon Islands,”  Financial Times, March 24, 2022, https://archive.ph/X5a4h; and Associated Press, “Solomon Islands Says China Security Deal Won’t Include Military Base,” via National Public Radio, April 1, 2022, https://www.npr.org/2022/04/01/1090184438/solomon-islands-says-china-deal-wont-include-military-base
49    N. Fildes, “Australian Minister Flies to Solomon Islands for Urgent Talks on China Pact,” Financial Times, April 12, 2022, https://www.ft.com/content/9da02244-2a10-4f18-a5c5-e88b14a2530b; and K. Lyons and D. Wickham, “The Deal That Shocked the World: Inside the China-Solomons Security Pact,” Guardian, April 20, 2022, https://www.theguardian.com/world/2022/apr/20/the-deal-that-shocked-the-world-inside-the-china-solomons-security-pact.
50    N. Fildes, “Australian PM Welcomes Solomon Islands Denial of Chinese Base Reports,” Financial Times, July 14, 2022, https://www.ft.com/content/789340da-8c1a-4aff-8cf6-276c97c9f200.
51    Microsoft, Microsoft Digital Defense Report 2022, 2022,  https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5bUvv.
52    Reuters, “Bill to Delay Solomon Islands Election until December 2023 Prompts Concern,” in Guardian, August 9, 2022, https://www.theguardian.com/world/2022/aug/09/bill-to-delay-solomon-islands-election-until-december-2023-prompts-concern; and D. Cave, “Solomon Islands’ Leader, a Friend of China, Gets an Election Delayed,” New York Times, September 8, 2022,  https://www.nytimes.com/2022/09/08/world/asia/solomon-islands-election-delay.html.
53    N. Fildes, “China Funds Huawei’s Solomon Islands Deal in Sign of Deepening Ties,” Financial Times, August 19, 2022, https://archive.ph/R47T0.
54    “Huawei Marine Signs Submarine Cable Contract in Solomon Islands,” Huawei, July 2017, https://web.archive.org/web/20190129114026/https:/www.huawei.com/en/press-events/news/2017/7/HuaweiMarine-Submarine-Cable-Solomon; and W. Qiu, “Coral Sea Cable System Overview,” Submarine Cable Networks, December 13, 2019, https://archive.ph/E049b.
55    Kirsty Needham, “Solomon Island Police Officers Head to China for Training,” Reuters, October 12, 2022,  https://www.reuters.com/world/asia-pacific/solomon-island-police-officers-head-china-training-2022-10-12/.
56    Fildes and Hillie, “Beijing Closes in on Security Pact.”
57    Nikkei Asia, “Solomons Says China Will Assist in Cyber, Community Policing,” Nikkei, July 17, 2023, https://archive.ph/90diZ.
58    D. Belovodyev, A. Soshnikov, and R. Standish, “Exclusive: Leaked Files Show China and Russia Sharing Tactics on Internet Control, Censorship,” Radio Free Europe/Radio Liberty, April 5, 2023, https://www.rferl.org/a/russia-china-internet-censorship-collaboration/32350263.html.
59    Belovodyev, Soshnikov, and Standish, “Exclusive: Leaked Files.”
60    Belovodyev, Soshnikov, and Standish, “Exclusive: Leaked Files.”
61    Thanks to Tuvia Gering for this idea.
62    “〖转载〗人类命运共同体巴基斯坦研究中心主任哈立德·阿克拉姆接受光明日报采访:中巴关系“比山高、比蜜甜”名副其实,” Communication University of China, June 4, 2021, https://comsfuture.cuc.edu.cn/2021/1027/c7810a188141/pagem.htm.
63    Office of the Central Cyberspace Affairs Commission, “我国网络空间国际交流合作领域发展成就与变革,” China Internet Information Journal, December 30, 2023, www.archive.vn/tCnEa; D. Bandurski, “Taking China’s Global Cyber Body to Task,” China Media Project, 2023, https://chinamediaproject.org/2022/07/14/taking-chinas-global-cyber-body-to-task/; and Xinhua, “世界互联网大会成立,” Gov.cn, July 12, 2022,  https://web.archive.org/web/20220714134027/http:/www.gov.cn/xinwen/2022-07/12/content_5700692.htm.
64    World Internet Conference, “Introduction,” WIC website, August 31, 2022, www.archive.ph/Axmuc.
65    Dakota Cary, “Downrange: A Survey of China’s Cyber Ranges,” Issue Brief, Center for Security and Emerging Technology, September 2022, https://doi.org/10.51593/2021CA013.
66    Drinhausen and Lee, “CCP 2021: Smart Governance, Cyber Sovereignty, and Tech Supremacy.”

The post Community watch: China’s vision for the future of the internet appeared first on Atlantic Council.

]]>
IMF Director of the Monetary and Capital Markets Department Tobias Adrian quoted by Payments Journal on the IMF’s proposed XC platform https://www.atlanticcouncil.org/insight-impact/in-the-news/imf-director-of-the-monetary-and-capital-markets-department-tobias-adrian-quoted-by-payments-journal-on-the-imfs-proposed-xc-platform/ Fri, 01 Dec 2023 20:38:38 +0000 https://www.atlanticcouncil.org/?p=713843 Read the full post here.

The post IMF Director of the Monetary and Capital Markets Department Tobias Adrian quoted by Payments Journal on the IMF’s proposed XC platform appeared first on Atlantic Council.

]]>
Read the full post here.

The post IMF Director of the Monetary and Capital Markets Department Tobias Adrian quoted by Payments Journal on the IMF’s proposed XC platform appeared first on Atlantic Council.

]]>
#AtlanticDebrief – How will the OSA be implemented? | A Debrief from Melanie Dawes https://www.atlanticcouncil.org/content-series/atlantic-debrief/atlanticdebrief-how-will-the-osa-be-implemented-a-debrief-from-melanie-dawes/ Fri, 01 Dec 2023 17:47:20 +0000 https://www.atlanticcouncil.org/?p=710674 Susan Ness sits down with Ofcom Chief Executive Dame Melanie Dawes to discuss Ofcom’s regulatory guidance on illegal harms.  

The post #AtlanticDebrief – How will the OSA be implemented? | A Debrief from Melanie Dawes appeared first on Atlantic Council.

]]>

IN THIS EPISODE

The United Kingdom’s landmark Online Safety Act (OSA)—imposing obligations on user-to-user online services and search engines—received royal assent last month. What is the OSA designed to achieve? What are the types of companies in scope and how does the UK communications regulator, Ofcom, plan to ensure that platforms and search engines comply? How will Ofcom address issues of encryption, age assurance and right to privacy? And how will Ofcom work with other regulators in the UK and around the globe around these common issues?  

On this episode of #AtlanticDebrief—in partnership with the Atlantic Council’s Digital Forensic Research Lab—Susan Ness sits down with Ofcom Chief Executive Dame Melanie Dawes to discuss Ofcom’s regulatory guidance on illegal harms.  

You can watch #AtlanticDebrief on YouTube and as a podcast.  

MEET THE #ATLANTICDEBRIEF HOST

The Europe Center promotes leadership, strategies, and analysis to ensure a strong, ambitious, and forward-looking transatlantic relationship.

The post #AtlanticDebrief – How will the OSA be implemented? | A Debrief from Melanie Dawes appeared first on Atlantic Council.

]]>
IMF Financial Counsellor and Director Tobias Adrian quoted by Ledger Insights on the IMF’s proposed XC platform https://www.atlanticcouncil.org/insight-impact/in-the-news/imf-financial-counsellor-and-director-tobias-adrian-quoted-by-ledger-insights-on-the-imfs-proposed-xc-platform/ Thu, 30 Nov 2023 04:45:02 +0000 https://www.atlanticcouncil.org/?p=710326 Read the full article here.

The post IMF Financial Counsellor and Director Tobias Adrian quoted by Ledger Insights on the IMF’s proposed XC platform appeared first on Atlantic Council.

]]>
Read the full article here.

The post IMF Financial Counsellor and Director Tobias Adrian quoted by Ledger Insights on the IMF’s proposed XC platform appeared first on Atlantic Council.

]]>
Head of BIS Innovation Hub Cecilia Skingsley’s remarks quoted by Ledger Insights on joint project with IMF and World Bank to tokenize development funds https://www.atlanticcouncil.org/insight-impact/in-the-news/head-of-bis-innovation-hub-cecilia-skingsley-quoted-by-ledger-insights-on-joint-project-with-imf-and-world-bank-to-tokenize-development-funds/ Tue, 28 Nov 2023 21:36:15 +0000 https://www.atlanticcouncil.org/?p=710313 Read the full article here.

The post Head of BIS Innovation Hub Cecilia Skingsley’s remarks quoted by Ledger Insights on joint project with IMF and World Bank to tokenize development funds appeared first on Atlantic Council.

]]>
Read the full article here.

The post Head of BIS Innovation Hub Cecilia Skingsley’s remarks quoted by Ledger Insights on joint project with IMF and World Bank to tokenize development funds appeared first on Atlantic Council.

]]>
Head of BIS Innovation Hub Cecilia Skingsley quoted by CoinDesk on CBDC privacy standards https://www.atlanticcouncil.org/insight-impact/in-the-news/head-of-bis-innovation-hub-cecilia-skingsley-quoted-by-coindesk-on-cbdc-privacy-standards/ Tue, 28 Nov 2023 21:30:03 +0000 https://www.atlanticcouncil.org/?p=710302 Read the full article here.

The post Head of BIS Innovation Hub Cecilia Skingsley quoted by CoinDesk on CBDC privacy standards appeared first on Atlantic Council.

]]>
Read the full article here.

The post Head of BIS Innovation Hub Cecilia Skingsley quoted by CoinDesk on CBDC privacy standards appeared first on Atlantic Council.

]]>
Head of BIS Innovation Hub Cecilia Skingsley keynote quoted by Reuters on new joint tokenization initiative with IMF and World Bank https://www.atlanticcouncil.org/insight-impact/in-the-news/head-of-bis-innovation-hub-cecilia-skingsley-keynote-quoted-by-reuters-on-new-joint-tokenization-initiative-with-imf-and-world-bank/ Tue, 28 Nov 2023 21:14:06 +0000 https://www.atlanticcouncil.org/?p=710237 Read the full article here.

The post Head of BIS Innovation Hub Cecilia Skingsley keynote quoted by Reuters on new joint tokenization initiative with IMF and World Bank appeared first on Atlantic Council.

]]>
Read the full article here.

The post Head of BIS Innovation Hub Cecilia Skingsley keynote quoted by Reuters on new joint tokenization initiative with IMF and World Bank appeared first on Atlantic Council.

]]>
CBDC Tracker cited by the House of Commons Treasury Committee report on a digital pound https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-tracker-cited-by-the-house-of-commons-treasury-committee-report-on-a-digital-pound/ Tue, 28 Nov 2023 16:58:00 +0000 https://www.atlanticcouncil.org/?p=713849 Read the full article here.

The post CBDC Tracker cited by the House of Commons Treasury Committee report on a digital pound appeared first on Atlantic Council.

]]>
Read the full article here.

The post CBDC Tracker cited by the House of Commons Treasury Committee report on a digital pound appeared first on Atlantic Council.

]]>
Kumar quoted and Crypto Regulations Tracker cited by Delve on the status of crypto asset regulations around the world https://www.atlanticcouncil.org/insight-impact/in-the-news/kumar-quoted-and-crypto-regulations-tracker-cited-by-delve-on-the-status-of-crypto-asset-regulations-around-the-world/ Mon, 20 Nov 2023 15:27:26 +0000 https://www.atlanticcouncil.org/?p=704807 Read the full article here.

The post Kumar quoted and Crypto Regulations Tracker cited by Delve on the status of crypto asset regulations around the world appeared first on Atlantic Council.

]]>
Read the full article here.

The post Kumar quoted and Crypto Regulations Tracker cited by Delve on the status of crypto asset regulations around the world appeared first on Atlantic Council.

]]>
Russian War Report: Desperate for recruits, Russia offers one million rubles to join its military https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-war-report-russian-army-recruitment-fundraising/ Thu, 16 Nov 2023 19:14:31 +0000 https://www.atlanticcouncil.org/?p=704603 The Russian army is struggling to fund equipment and recruit as they host fundraisers and drives offering pledges of one million rubles.

The post Russian War Report: Desperate for recruits, Russia offers one million rubles to join its military appeared first on Atlantic Council.

]]>
As Russia continues its assault on Ukraine, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) is keeping a close eye on Russia’s movements across the military, cyber, and information domains. With more than seven years of experience monitoring the situation in Ukraine—as well as Russia’s use of propaganda and disinformation to undermine the United States, NATO, and the European Union (EU)—the DFRLab’s global team presents the latest installment of the Russian War Report

Security

Russian armed forces face difficulties in replenishing military and paramilitary supplies amid failed offensive in Avdiivka

Russian MoD seeks to boost recruitment efforts across Russia

Tracking narratives

Russian disinformation campaign to encourage split in Ukrainian leadership

Investigations

Media investigation finds Ukrainian colonel coordinated Nord Stream pipeline attack

Media policy

Russia gets gradually closer to blocking VPNs in 2024

Russian armed forces face difficulties in replenishing military and paramilitary supplies amid failed offensive in Avdiivka

In a November 8 article, the Wall Street Journal reported that Russian officials in an April visit to Egypt had asked President Abdel Fattah al-Sisi “to give back more than a hundred engines from Russian helicopters that Moscow needed for Ukraine.” Another source quoted by the Wall Street Journal also said that Russian officials were seeking to “[go] back in secret to their customers trying to buy back what they sold them.” The Washington Post, in an October 16 investigation, quoted US intelligence reporting that satellite imagery helped identify a North Korean container ship that could have provided munitions for Russia. The investigation found that three hundred containers had been shipped from North Korea’s Rajin Harbor to the Russian harbor of Dunai and were subsequently located at an ammunition depot next to the Azov Sea.

The DFRLab additionally found evidence that the Russian armed forces are turning to civilians to help with purchasing additional paramilitary equipment, including drones, thermal sights, vehicles, and medicine. The Russian charity fund “All for Victory” hosted fundraisers organized by Russian propagandist Vladimir Solovyov, including an additional “emergency fundraiser” to support soldiers on the front line during the battle of Avdiivka in October 2023. According to an October 13 Telegram post, the fundraiser aimed to collect money to purchase “drones, thermal sights, [. . .] anti-electronic warfare devices to protect themselves against [the enemy], tactical medicine, bulletproof vests and helmets, warm clothes and boots.”

Screencap of a promotional poster for the “People’s Front” and “Everything for victory!” joint fundraiser named “Emergency collection ‘Avdiivka. Everything for victory!’” (Source: People’s Front/archive)
Screencap of a promotional poster for the “People’s Front” and “Everything for victory!” joint fundraiser named “Emergency collection ‘Avdiivka. Everything for victory!’” Source: People’s Front/archive)

The DFRLab also found that several military bloggers reposted the original post to their channels, reaching an audience of nearly eight hundred thousand people, according to data from a query using Telegram monitoring tool TGStat. Pro-Russian news outlet DNR News reported that the initiative had raised around eighty-two million rubles (approximately nine hundred thousand dollars) in seven days.

Screencap of a TGStart readout, breaking down the reach of the People’s Front post that advertised the fundraiser. As of October 31, the post had been viewed a total of 815,204 times.  (Source : TGStat/archive)
Screencap of a TGStart readout, breaking down the reach of the People’s Front post that advertised the fundraiser. As of October 31, the post had been viewed a total of 815,204 times.  (Source : TGStat/archive)

Additionally, Solovyov held a separate fundraiser during a livestream dedicated to the purchase of 1,440 units of Chinese-made DJI Mavic and FPV drones, earning a total of nearly 470 million rubles (approximately 5.175 million dollars) over the course of three days.

Screencap of the collection report for the "Solovyov Live" livestreamed fundraiser. Of the required 480 million rubles, the fundraiser collected 470 million rubles, as of October 25, 2023. (Source: pobeda.onf.ru/archive)
Screencap of the collection report for the “Solovyov Live” livestreamed fundraiser. Of the required 480 million rubles, the fundraiser collected 470 million rubles, as of October 25, 2023. (Source: pobeda.onf.ru/archive)

“The People’s Front,” a Russian organization that President Vladimir Putin directly headed from 2013 until 2018, established the “Everything for Victory” charity fund. The charity fund and the government-sponsored organization are intertwined, as the People’s Front was renamed in May 2022, as “The People’s Front, All for the victory,” only a few months after Russia invaded Ukraine. In 2022, the organization focused on providing humanitarian aid, which it advertises on its VKontakte page; it later shifted to providing paramilitary goods, initially as means to support the self-proclaimed separatist armed forces of the Donbas region. Still later, it shifted to providing the same goods but to the battalions of the Russian army, with additional promotional support on social media from military bloggers.

Upon investigating the phone number on display on the charity fund’s website, the DFRLab found that the number had been promoted as a general helpline across Russia, in which the Russian Red Cross also participates. Regional information portals, including Russia’s public service platform (Gosuslugi) in the region of Saint Petersburg, pushed the phone number as the health ministry’s local helpline during the COVID-19 pandemic.

A comparison of screencaps showing the phone helpline as displayed on the “People’s Front” advertisement alongside an earlier use, where it was presented as the Gosuslugi helpline for the Saint-Petersburg region regarding the COVID pandemic (Source: pobeda.onf.ru/archive, left; Gosuslugi/archive, right)
A comparison of screencaps showing the phone helpline as displayed on the “People’s Front” advertisement alongside an earlier use, where it was presented as the Gosuslugi helpline for the Saint-Petersburg region regarding the COVID pandemic (Source: pobeda.onf.ru/archive, left; Gosuslugi/archive, right)

Valentin Châtelet, Research Associate, Brussels, Belgium

Russian MoD seeks to boost recruitment efforts across Russia

The Russian Ministry of Defense (MoD) website dedicated to the recruitment of contract soldiers is engaging in a massive campaign to build an “elite division of contract soldiers” and has pledged a one-time down payment of one million rubles (approximately eleven thousand dollars) upon signing the contract. This special recruitment bonus will supposedly be available from November 1 to November 25. Russian military bloggers “Старше Эдды” (“Older than Edda”), “Пул N3” (“Bullet No. 3”), “Kotsnews,” and “Военкор Котенок” (“Military correspondent Kotyonok”) all amplified the MoD campaign on their Telegram channels.

A similar campaign took place in October, which another channel, “Reviewer of the war,” referred to as “the biggest one-time down payment for contract soldiers.” In that earlier campaign, the Russian MoD promised that new contractors would be paid six hundred thousand rubles (approximately 6,600 dollars) upon signing.

The Georgia-based “Get lost!” initiative, which aims to help Russians flee from mobilization, drafting, and summons to military commissariats reported on November 12 that Russian authorities had engaged in a widespread SMS campaign to entice men to enroll as soldiers. The initiative claimed that messages were sent to residents of the Bashkortotan and Tatarstan Republics, as well as Irkutsk Oblast. Although the DFRLab was unable to independently confirm the authenticity of most of the senders, it identified one phone number that users on callfilter.app, a website dedicated to report phone scams, identified as “military commissariat.” On November 14, the initiative reported that additional calls to enroll were identified online, as military commissariats sent out messages on messaging apps.

Valentin Châtelet, Research Associate, Brussels, Belgium

Russian disinformation campaign to encourage split in Ukrainian leadership

Echoing an earlier situation featuring a poorly made deepfake of Ukrainian President Volodymyr Zelenskyy, three deepfake videos of General Valerii Zaluzhnyi, commander-in-chief of the Armed Forces of Ukraine, recently surfaced on Telegram. In the new fabricated videos, a clear facsimile of Zaluzhnyi claimed or alluded to Zelenskyy’s supposed intention to kill the general. These videos appeared against the backdrop of the death of Hennadii Chastiakov, Zaluzhnyi’s aide, from an explosion, the cause of which remains under investigation.

On the evening of November 6, the day the aide was killed, Russian Telegram channel Radio Truha published the first of the deepfake videos. The channel is connected to another channel, Truha Barselona. Both channels claim to provide “satire” regarding Anatolii Sharij, a pro-Russian Ukrainian blogger charged with high treason. The video copied graphics of the Armed Forces of Ukraine and supposedly depicted Zaluzhnyi saying that his birthday had already passed and asked viewers not to give him gifts, implying any such gift would be explosives such as those that killed his aide.

The awkward movement, unnatural facial expressions, and changed voice of the general, as well as the absence of a statement on his official channels, suggested that the video was not real. That did not stop, however, multiple Russian pro-Kremlin Telegram channels from amplifying the video. As of November 16, the video had received 233,000 views and been shared 1,500 times, according to TGStat, a Telegram analytics tool. While the original video had a disclaimer with the channel’s handle “@RadioTruha,” some of the pro-Kremlin channels cut the ending, thus obscuring its satiric origin.

Screenshot of the first deepfake video, as first published by pro-Russian Telegram channel @RadioTruha. (Source: @RadioTruha/archive)
Screenshot of the first deepfake video, as first published by pro-Russian Telegram channel @RadioTruha. (Source: @RadioTruha/archive)

The second fabricated video appeared on November 7, posted by Radio Truha but without the handle watermark at the end of the video. In this video, a fake Zaluzhnyi calls for mutiny, asking soldiers to march on Kyiv and stop listening to the “criminal orders of Zelenskyy.” The Center Countering Disinformation debunked the video and highlighted that it was widely shared on TikTok, X, and Telegram. Here again, Zaluzhnyi’s appearance, voice, and movement seen in the video appeared clearly unnatural.

Screencap of the second deepfake video of Zaluzhnyi, as first posted by @RadioTruha. (Source: RadioTruha/archive)
Screencap of the second deepfake video of Zaluzhnyi, as first posted by @RadioTruha. (Source: RadioTruha/archive)

Radio Truha’s compatriot channel, Truha Barselona, published a third fake video; in this third fabricated video, Zaluzhnyi is seen claiming that, because Zelenskyy owns all Ukrainian media, they “wrongfully claimed that it is a deepfake.” This poorly made video supposedly featuring the commander-in-chief had received almost three hundred thousand views and been shared 7,500 times as of November 16.

Screencap of the Telegram post for the third deepfake video in which the fake Zaluzhnyi declares that claims that the videos are fake are themselves false. (Source: @TruhaBarselona/archive)
Screencap of the Telegram post for the third deepfake video in which the fake Zaluzhnyi declares that claims that the videos are fake are themselves false. (Source: @TruhaBarselona/archive)

While even some pro-Kremlin users acknowledged the clearly fake nature of the video, some shared it without additional comment. Meanwhile, Ukrainians mocked the forgeries with a deepfake of their own, in which a Zaluzhnyi facsimile declares that he and Zelenskyy had argued over which target in occupied territory to hit, implying that they are plentiful.

On November 13, Truha Barselona published a deepfake of Zelenskyy, in which he appears to order Ukrainian troops to leave the Donbas town of Avdiivka and which includes footage of a supposed cemetery of Ukrainian soldiers that had “not evacuated from Bakhmut.” The video features the same telltale signs of inauthenticity as the three fake Zaluzhnyi videos.

It is not the first time that Russian sources have tried to portray a conflict between Ukrainian military and political leadership. In late April, they launched an advertising campaign suggesting Zaluzhnyi had political ambitions. Since then, occasional ads and articles on forged websites have appeared sporadically, making similar claims. In one such instance, ads appeared in early November claiming that Zaluzhnyi would take Zelenskyy’s seat after the former penned a column for the Economist.

Roman Osadchuk, Research Associate

Media investigation finds Ukrainian colonel coordinated Nord Stream pipeline attack

On November 11, the Washington Post and Der Spiegel published a joint investigation arguing that, according to anonymous sources in Ukraine and Europe, the explosion of three lines of the Nord Stream gas pipeline on September 26, 2022, were coordinated by Roman Chervinsky, a former commander of the Ukrainian Special Operations Forces. According to the Washington Post and Der Spiegel interlocutors, Chervinsky managed a team of six people who rented the vessel and, using deep-sea diving equipment, installed explosive devices on the pipelines.

The two media outlets also reported that, according to their sources, Chervinsky had not acted alone and that he was obeying orders from high-ranking Ukrainian officials, including Major General of the Armed Forces of Ukraine Viktor Hanushchak, who reports to Commander-in-Chief of the Armed Forces of Ukraine Valerii Zaluzhnyi. However, Ukrainian authorities denied the involvement of the Ukrainian Armed Forces in the pipeline explosion, and Chervinsky himself also denied having any role in the attack. The Washington Post and Der Spiegel also clarified that there was no evidence that Zelenskyy had approved this attack and that Chervinsky’s involvement in this case revealed internal tensions within the Ukrainian government, specifically between the country’s intelligence and military establishment and the political leadership.

Chervinsky has been in custody since April 2023 in Ukraine, where he is accused of abusing his power in a failed special operation aimed at recruiting a Russian pilot in 2022. According to Ukrainian Security Services, Chervinsky acted without permission and, in doing so, gave away coordinates of a Ukrainian airbase in Kanatove, which then became a target of Russian missile attacks in July 2022, killing the commander of the base’s military unit and wounding seventeen others. According to the Washington Post and Der Spiegel, Chervinsky also coordinated a complex operation in 2020, which attempted to trick Wagner mercenary group fighters into entering Ukraine from Belarus in order to bring them to justice. While Chervinsky failed in this operation as well, Belarusian authorities instead arrested thirty-three Wagner fighters near the country’s capital, Minsk, charging them with trying to overthrow the government around the 2020 presidential elections. Minsk subsequently handed them over to Russia in August 2020.

Givi Gigitashvili, Research Associate, Warsaw, Poland

Russia gets gradually closer to blocking VPNs in 2024

Content that is unfavorable or problematic in the Kremlin’s eyes is still available online for Russian internet users using circumvention tools such as virtual private networks (VPNs) to access it. Russia has already blocked several VPNs, but the move was not total or system-wide. It seems that a more comprehensive crackdown on VPNs will come about in 2024, however.

In early September, Russia’s Ministry of Digital Development introduced a draft government resolution expanding the powers of internet regulator Roskomnadzor in terms of blocking information about or access to prohibited online resources in Russia. In the same period, Digital Development Minister Maksut Shadayev stated that authorities would not penalize Russians for using VPNs—technology that helps Russians to circumvent government blocks to access restricted information.

Later in September, Roskomnadzor developed criteria for blocking information that provides tips on how to bypass censorship, describes the advantages of such tools, or urges their purchase. Reportedly, the restrictions will not be applied to scientific, technical, and statistical information on ways of bypassing the blockage. The proposal, if approved, would come into force on March 1, 2024, and be valid through September 1, 2029.

According to digital rights organization Roskomsvoboda, the proposal would not only violate digital rights but also possibly the right to privacy. There would be a high risk of “getting blocked for any publication about the capabilities of VPNs, proxies, anonymizers, Tor” and out-of-court decision making would deprive website owners and authors the right to defend themselves, the organization noted.

In October, Artem Sheikin, a member of Russia’s Federation Council Committee on Constitutional Legislation and State Building, stated that, starting in 2024, the country’s internet regulator will be able to block all VPN services available in app stores that provide access to prohibited websites.

In November, the Ministry of Digital Development “clarified” that Russian authorities would only block specific VPN services that “a commission of experts identify as a threat to the security of the internet.” In his Telegram post, head of the State Duma Committee on Information Policy Alexander Khinshtein wrote that “VPN services pose a threat to users, as some of them collect their personal data and activity history” and leaks of databases of public services with real IP addresses of users have recently begun to occur more and more often.”

In 2023, Russia’s internet restrictions reached previously unheard-of levels. According to Freedom House, Russia’s freedom on the net score decreased from the previous year in 2023. Russian outlet Kommersant reported Roskomnadzor’s estimation that the number of blocked resources in Russia had increased by 85 percent from 2022 through the middle of 2023.

Eto Buziashvili, Research Associate, Tbilisi, Georgia

The post Russian War Report: Desperate for recruits, Russia offers one million rubles to join its military appeared first on Atlantic Council.

]]>
AI governance on a global stage: Key themes from the biggest week in AI policy https://www.atlanticcouncil.org/blogs/geotech-cues/ai-governance-on-a-global-stage-key-themes-from-the-biggest-week-in-ai-policy/ Thu, 16 Nov 2023 14:09:05 +0000 https://www.atlanticcouncil.org/?p=703805 The week of October 30, 2023 was a monumental week for artificial intelligence (AI) policy globally. As a quick recap: In the United States, one of the longest Executive Orders (EO) in history was signed by President Biden, aimed at harnessing the opportunities of AI while also seeking to address potential risks that may be […]

The post AI governance on a global stage: Key themes from the biggest week in AI policy appeared first on Atlantic Council.

]]>
The week of October 30, 2023 was a monumental week for artificial intelligence (AI) policy globally. As a quick recap: In the United States, one of the longest Executive Orders (EO) in history was signed by President Biden, aimed at harnessing the opportunities of AI while also seeking to address potential risks that may be presented by future evolutions of the technology. In the United Kingdom, international stakeholders came together to discuss risks at the “frontier” of AI and how best to mitigate them. Twenty-nine countries signed on to the Bletchley Park Declaration (“Declaration”). In the midst of all of this, the Hiroshima AI Process launched by Japan under the Group of Seven (G7) released its International Guiding Principles for Organizations Developing Advanced AI Systems (“G7 Principles”) as well as a voluntary International Code of Conduct for Organizations Developing Advanced AI Systems.

In light of what was arguably one of the busiest (and perhaps the most impactful) weeks in AI policy since the public release of ChatGPT thrust AI into the spotlight almost a year ago, there’s a lot to unpack. Below are some key themes that emerged from the conversation and items that will be increasingly relevant to pay attention to as efforts to govern the technology progress globally.

A commitment to taking a risk-based approach to regulation of AI technology

Across all of the activities of last week, one of the themes that came through was the continued emphasis on a risk-based approach, as these authors highlighted in their piece on transatlantic cooperation.

While some efforts more directly called this out than others, it was a throughput that should rightfully remain top of mind for international policymakers moving forward. For example, the chapeau of the G7 Principles calls on organizations to follow the guidelines set forth in the Principles “in line with a risk-based approach,” and the theme is reiterated in several places throughout the rest of the document. In the Declaration, countries agreed to pursue “risk-based policies…to ensure safety in light of such risks.” The Executive Order was a bit less direct in its commitment to maintaining a risk-based approach, though it seems to suggest that this was its intent in laying out obligations for “dual-use foundation model providers” in Section 4.1. The application of requirements for this set of models appears to indicate that the Administration sees heightened risk associated with this sort of model, though moving forward a clear articulation of why these obligations are the most appropriate approach to managing risk will be critical.

In digesting all of the activities last week, a central theme to note is that the global conversation seems to be moving away from an approach focused solely on regulating uses of AI but is now also seeking to regulate the technology itself. Indeed, all of the major efforts last week discussed risks inherent to “frontier models” and/or “advanced AI systems,” suggesting that there are model-level risks that might require regulation, in addition to context-specific, use-case based governance.

What to look out for:

How the term “frontier models” is formally defined, including whether international counterparts are able to come to agreement on the technical parameters of a “frontier model”

  • The Declaration discusses ‘frontier models’ as “those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks—as well as relevant specific narrow AI that could exhibit capabilities that cause harm—which match or exceed the capabilities present in today’s most advanced models” while the Executive Order provides an initial definition of a “dual-use foundation model” as “(i) any model that was trained using a quantity of computing power greater than 1026 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 1023 integer or floating-point operations; and (ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 1020 integer or floating-point operations per second for training AI”. The G7 Principles merely discuss “advanced AI systems” as a concept, using “the most advanced foundation models” and “generative AI systems” as illustrative types of these systems.
  • With that being said, it will be interesting to see how definitions and technical parameters are established moving forward, particularly because using floating point operations per second seems to be the way the conversation is currently trending but is not a particularly future-proof metric.

Continued conversation about what the right approach is to govern risks related to “frontier” AI systems

  • With the introduction of both voluntary agreements (e.g., in the Declaration and in the G7 Code of Conduct) as well as specific obligations (e.g., in Section 4.2 and 4.3 of the Executive Order), there is sure to be additional discussion about what the right approach is to managing risk related to these models. In particular, keep an eye out for conversations about what the right regulatory approach might be, including how responsibilities are allocated between developers and deployers.

Whether specific risks related to these models are clearly articulated by policymakers moving forward

  • In some regard, it seems to be a foregone conclusion that “frontier” AI systems will need to be regulated because they present a unique or different set of risks than those AI systems that already exist. However, in setting out regulatory approaches, it is important to clearly define the risk that said regulation is seeking to address, demonstrating why that approach is the most appropriate one. While the EO seems to indicate that the US government has concerns about these AI models amplifying biosecurity and cybersecurity related risks, clearly explaining why the proposed obligations are the right one for the task is going to be critical. Also, there continues to be some tension between those who are focused on “existential” risks associated with these systems and those that are focused on addressing “short-term” risks.

A major focus on the role of red-teaming in AI risk management

Conversations over the last week focused on red-teaming as a key component of AI risk management. Of course, this was not the first time red-teaming has been highlighted as a method to manage AI risk, but it came through particularly clearly in the EO, the G7 Principles, and in the Declaration as a tool of choice to manage AI risk. To be sure, Section 4 of the AI EO directs the National Institute for Standards and Technology (NIST) to develop red-teaming guidelines and requires providers of “dual-use foundation models” to provide information, including results of red-teaming tests performed, to the US government. Principle 1 in the G7 Principles discusses the importance of managing risk throughout the AI lifecycle and references red-teaming as one of the methods to discover and mitigate identified risks and vulnerabilities. The Declaration doesn’t use the term “red-teaming” in particular but talks about the role of “safety testing” in mitigating risk (though it is not clear from the statement what exactly this testing will look like).

One of the interesting things to note is that in the context of AI systems, the term “red-teaming” seems to indicate a broader set of practices than just attacking and/or hacking a system in an attempt to gain access and involves testing for flaws and vulnerabilities of an AI system in general. This is a departure from how red-teaming is generally understood in the cybersecurity context, likely because there is an ongoing discussion around what tools are most appropriate to test for and mitigate a broader range of risks beyond those related to security and red-teaming presents a useful construct for such testing.

Despite red-teaming being a significant focus of conversations as of late, it will be critical for policymakers to avoid overemphasizing on red-teaming. Red-teaming is one way to mitigate risk but is not the only way. It should be undertaken in conjunction with other tools and techniques – like disclosures, impact assessments, and data input controls, to ensure a holistic and proportionate approach to AI risk management.

What to look out for:

If and how different jurisdictions define “red-teaming” for AI systems moving forward, and whether a common understanding can be reached. Will the definition remain expansive and encapsulate all types of testing and evaluation or will it be tailored to a more specific set of practices?

How red-teaming is incorporated into regulatory efforts moving forward

  • While the events of the last week made clear that policymakers are focused on red-teaming as a means by which to pressure test AI systems, the extent to which such requirements are incorporated into regulation remains to be seen. The Executive Order, with its requirement to share the results of red-teaming processes, is perhaps the toothiest obligation coming out of the events of the past week, but as other jurisdictions begin to contemplate their approaches, don’t be surprised if red-teaming takes on a larger role.

How the institutes announced during the UK Safety Summit (the US AI Safety Institute and the UK AI Safety Institute) will collaborate with each other

  • The United States announced the establishment of the AI Safety Institute, which will be charged with developing measurement and evaluation standards to advance trustworthy and responsible AI. As Section 4.1 tasks NIST with developing standards to underpin the red-teaming required by Section 4.2 of the Executive Order, this Institute, and its work with other similarly situated organizations around the world, will be key to implementation of the practices outlined in the EO and beyond.

An emphasis on the importance of relying upon and integrating international standards

A welcome theme that emerged is the essential role that international technical standards and international technical standards organizations play in advancing AI policy. Section 11 of the AI Executive Order, focused on advancing US leadership abroad, advocates for the United States to collaborate with its partners to develop and implement technical standards and specifically directs the Commerce Department to establish a global engagement plan for promoting and developing international standards. Principle 10 of the G7 Principles also emphasizes the importance of advancing and adopting international standards. The Declaration highlights the need to develop “evaluation metrics” and “tools for testing.”

International technical standards will be key to advancing interoperable approaches to AI, especially because we are seeing different jurisdictions contemplate different governance frameworks. They can help provide a consistent framework for developers and deployers to operate within, provide a common way to approach different AI risk management activities, and allow companies to build their products for a global marketplace, reducing the risk of fragmentation.

What to look out for:

Which standards efforts are prioritized by nations moving forward

  • As mentioned above, the United States and the United Kingdom both announced their respective Safety Institutes during last week’s Summit. The UK’s Institute is tasked with focusing on technical tools to bolster AI safety, while NIST is tasked with a wide-range of standards activities in the Executive Order, including developing guidelines for red-teaming, AI system evaluation and auditing, secure software development, and content authentication and provenance.
  • Given the plethora of standards that are needed to support the implementation of various risk management practices, which standards nations choose to prioritize is an indicator of how they are thinking about risks related to AI systems, their impact on society, and regulatory efforts more broadly. In general, nations appear to be coalescing around the need to advance standards to support the testing and evaluation of capabilities of advanced AI systems/frontier AI systems/dual-use foundation models.

How individual efforts are mapped to or otherwise brought to international standards development organizations

  • In addition to the activities taking place within national standards bodies, there are also standardization activities taking place at the international level. For example, International Standards Organization/International Electrotechnical Commission Joint Technical Committee 1 Subcommittee 42 has been hard at work on a variety of standards to help support testing of AI systems and recently completed ISO 42001. As such, mapping activities are helpful for fostering consistency and for allowing organizations to understand how one standard relates to another.
  • Participating in and/or bringing national standards, guidelines, and best practices to international standards bodies helps to create buy-in, facilitate interoperability, and allows for alignment. As individual nations continue to consider how best to approach implementation of various risk management practices, continuing to prioritize participation in these efforts will be crucial to a truly international approach.

The events of the last week helped to spotlight several areas that will remain relevant to the global AI policy conversation moving forward. In many ways, this is only the beginning of the conversation, and these efforts offer an initial look at how international collaboration might progress, and in what areas we may see additional discussion in the coming weeks and months.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post AI governance on a global stage: Key themes from the biggest week in AI policy appeared first on Atlantic Council.

]]>
Lipsky quoted and CBDC tracker cited by Politico’s Morning Money newsletter on motivations for pursuing a CBDC https://www.atlanticcouncil.org/insight-impact/in-the-news/lipsky-quoted-and-cbdc-tracker-cited-by-politicos-morning-money-newsletter-on-motivations-for-pursuing-a-cbdc/ Wed, 15 Nov 2023 20:08:31 +0000 https://www.atlanticcouncil.org/?p=704853 Read the full article here.

The post Lipsky quoted and CBDC tracker cited by Politico’s Morning Money newsletter on motivations for pursuing a CBDC appeared first on Atlantic Council.

]]>
Read the full article here.

The post Lipsky quoted and CBDC tracker cited by Politico’s Morning Money newsletter on motivations for pursuing a CBDC appeared first on Atlantic Council.

]]>
Digital discrimination: Addressing ageism in design and use of new and emerging technologies https://www.atlanticcouncil.org/blogs/geotech-cues/digital-discrimination-addressing-ageism-in-design-and-use-of-new-and-emerging-technologies/ Tue, 07 Nov 2023 20:20:40 +0000 https://www.atlanticcouncil.org/?p=699957 This article originally appeared in the 2023 edition of AARP’s The Journal. To attract, retain, and support a more diverse workforce, companies will need to be deliberate and equitable in creating inclusive working conditions and lifelong learning opportunities to maintain digital literacy. Digital technology is becoming increasingly integrated into everyday life, but aging populations have […]

The post Digital discrimination: Addressing ageism in design and use of new and emerging technologies appeared first on Atlantic Council.

]]>
This article originally appeared in the 2023 edition of AARP’s The Journal.

To attract, retain, and support a more diverse workforce, companies will need to be deliberate and equitable in creating inclusive working conditions and lifelong learning opportunities to maintain digital literacy.

Digital technology is becoming increasingly integrated into everyday life, but aging populations have not fully participated in this technology revolution or benefited fully from today’s connected and data-rich world—disparities characterized as the digital divide and data divide, respectively. According to research by FP Analytics (with support from AARP),1 although 60 percent of the world’s population is connected to the Internet, access to digital services is unevenly distributed, especially for older adults and people in low- and middle-income countries. Even within an advanced economy like the United States, 15 percent of adults age 50 or older do not have Internet access and 60 percent say the cost of high-speed Internet is a barrier to access.2 Lack of digital access kept about 40 percent of older US adults from getting much- needed online services at home during the COVID-19 pandemic. This divide is deeper for women, who in developed nations are 21 percent less likely to be online and in developing countries 52 percent less likely to be online than men.3 No or slow Internet access is just one of multiple barriers preventing many seniors from accessing or fully benefiting from digital services, which are rarely designed or provided with aging populations in mind or made accessible to people who may have limited physical and/or cognitive abilities.

The need to bridge the divides facing older individuals will only grow over time if patterns of digital discrimination1 are allowed to persist. Not only are digital services and data applications becoming more prevalent, but the proportion of older adults is increasing due to changing demographics. Globally, there will be 1.4 billion people age 60 years old or older by 2030.4 Within the United States, by 2034 the aging population is set to outpace its youth with a projected 77 million people age 65-plus compared with the projected 76.5 million people under 18.5 At the same time, the working-age population is shrinking and is projected to decrease from 60 percent in 2020 to 54 percent by 2080.6 As older populations grow, it is imperative that societies take steps to ensure that new and emerging technologies bring benefits to all people and do not deepen the digital divide: technology and data must be more accessible and digital fluency improved for everyone.

The Atlantic Council’s GeoTech Center is working to identify and communicate what is required so that emerging technologies can enter use widely across the globe for public benefit while also identifying and mitigating potential risks, including to the aging population and underserved communities, globally. The Center thereby is an essential bridge between technologists and national and international policy makers, bringing together subject matter experts, thought leaders, and decision makers through purposeful convenings to consider the broader societal, economic, and geopolitical implications of new and emerging technologies; leverage technology to solve global challenges; and develop actionable tech policy, partnerships, and programs.

As discussed in a recent report,7 the GeoTech Center shares AARP’s concerns about the growing digital and data divides. The data divide can be reduced only if there is optimization in data processing, monitoring, and evaluation of the policies and programs from major stakeholders and alignment of public–private partnerships for social good. Monitoring the growth of digital skills and access to data is especially critical for tracking progress, yet a 2021 study found that of the 150 most influential technology companies, only 12 published impact assessments.8 Key recommendations for stakeholders—including private-sector firms, governments, and civil society organizations—are the need to train a more inclusive generation of professionals; create new governance structures; and ensure equitable access, tracking, and control over data across society. These recommendations are especially important for aging adults and other demographic groups historically left offline and left behind in the rush to introduce new technologies and services into society.

As seniors become a larger component of the workforce and the importance of digital tools continues to grow, private-sector stakeholders who want to retain and benefit from the value such experienced workers can bring will need to double down on digital upskilling and reskilling for their employees. Moreover, as the proportion of the conventional working- age population declines, seniors and other underrepresented sectors of society will become an increasingly important segment of the workforce. To attract, retain, and support a more diverse workforce, companies will need to be deliberate and equitable in creating inclusive working conditions and lifelong learning opportunities to maintain digital literacy.9

It is also important to note that just offering digital literacy lessons is not enough; for the training sessions to be effective, older adults must be engaged and enjoy them. Digital training for older adults works best when they are delivered by institutions that seniors trust and have experience working with. These institutions can range from libraries to religious networks. Additionally, the learning programs and instructors themselves must be compatible with the needs of the users. Older adults tend to engage better with instructors who have shared their experiences or are seniors themselves. They also tend to learn better with one-on-one instruction, which can be more personalized than automated training sessions.3

Although a range of ongoing activities exist across the public and private sector to bridge the digital and data divides associated with current technology, all sectors need to proactively work together to ensure that future technologies benefit aging populations and do not deepen those divides. For example, as discussed in a 2019 White House report, various emerging technologies have significant potential to assist older adults with successfully aging in place.10 For these and other technologies to enter into use in ways that achieve that potential, the knowledge, skills, and abilities of seniors (and others historically left behind by technology) must be considered throughout the design process and product life cycle.

Among the many distinct needs and preferences to be considered are trust; privacy; and physical abilities including vision, hearing, and dexterity.

Finally, beyond simply considering consumer needs, technologists should include the aging population, caregivers, and others directly in the development process. Having a more inclusive, user-centered design process for a range of technologies should become common procedure—both for technologies used at home and for those essential for success in the future workplace. For technologies to support aging in place, it is important to include older individuals themselves and not just caregivers, recognizing that not all people will have access to caregivers or expensive care resources. Given that most technology is developed with younger customers in mind, achieving this vision of inclusive development will require additional public–private partnerships that can further bridge the gap between a more diverse set of users and developers. Bridging this gap would not only make technologies more effective but also provide increased economic opportunity. People with disabilities, many of whom are seniors, have a total spending power of approximately $6 trillion. Including this population in the design process could encourage them to become future consumers, therefore creating economic value for technology companies.3 The establishment of additional smart partnerships will be crucial in the next decade if we are to prevent age from being a barrier to benefiting from new and emerging technologies in society and the future of work.


1 Expanding Digital Inclusion for Aging Populations. 2022. FP Analytics and AARP. https://fpanalytics.foreignpolicy.com/wp-content/uploads/sites/5/2022/09/Expanding-Digital-Inclusion-Aging-Populations-AARP.pdf.

2 “AARP Urges Older Americans Struggling to Access and Afford High-Speed Internet to Enroll in New Emergency Broadband Benefit Program.” 2021. MediaRoom. https://press.aarp.org/2021-5-12-AARP-Urges-Older-Americans-Struggling-to-Access-and-Afford-High-Speed-Internet-to-Enroll-in-New-Emergency-Broadband-Benefit-Program#:~:text=According%20to%20the%20study%2C%2015.

3 Digital Inclusion for All: Ensuring Access for Older Adults in the Digital Age. 2023. FP Analytics and AARP. https://www.aarpinternational.org/file%20library/resources/2023-a-fpa-aarp-digital-inclusion-final.pdf.

4 WHO. 2022. “Ageing and Health.” World Health Organization. October 1, 2022. https://www.who.int/news-room/fact-sheets/detail/ageing-and-health.

5 Rogers, Luke, and Kristie Wilder. 2020. Shift in Working-Age Population Relative to Older and Younger Americans. United States Census Bureau , June. https://www.census.gov/library/stories/2020/06/working-age-population-not-keeping-pace-with-growth-in-older-americans.html.

6 Rogers, Luke, and Kristie Wilder. 2020. Shift in Working-Age Population Relative to Older and Younger Americans. United States Census Bureau , June. https://www.census.gov/library/stories/2020/06/working-age-population-not-keeping-pace-with-growth-in-older-americans.html.

7 Wise, Solomon, and Joseph T. Bonivel. 2022. The Data Divide: How Emerging Technology and Its Stakeholders Can Influence the Fourth Industrial Revolution. Atlantic Council . https://www.atlanticcouncil.org/in-depth-research-reports/report/the-data-divide-how-emerging-technology-and-its-stakeholders-can-influence-the-fourth-industrial-revolution/.

8 Digital Inclusion Benchmark. 2023. World Benchmarking Alliance. https://www.worldbenchmarkingalliance.org/publication/digital-inclusion/.

9 See, for example, a discussion of artificial intelligence in the context of building human capacity and preparing for labor market transitions in the age of automation at https://www.atlanticcouncil.org/programs/geotech-center/ai-connect/ai-connect-webinar-7/

10 Emerging Technologies to Support and Aging Population. 2019. The White House. https://trumpwhitehouse.archives.gov/wp-content/upload s/2019/03/Emerging-Tech-to-Support-Aging-2019.pdf.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Digital discrimination: Addressing ageism in design and use of new and emerging technologies appeared first on Atlantic Council.

]]>
Decoding artificial intelligence https://www.atlanticcouncil.org/commentary/video/decoding-artificial-intelligence/ Thu, 02 Nov 2023 13:56:58 +0000 https://www.atlanticcouncil.org/?p=698080 Watch Philip L. Frana, professor in the Interdisciplinary Liberal Studies and Independent Scholars programs at James Madison University, decode artificial intelligence.

The post Decoding artificial intelligence appeared first on Atlantic Council.

]]>

What is artificial intelligence (AI), and how does it work? Philip L. Frana, Professor of Interdisciplinary Liberal Studies and Independent Scholars at James Madison University, unravels the inner workings of AI. He explains the technology behind this hot-button policy issue and its many applications. Turns out it may be a help, and not a grave menace to humanity in the near future.


The post Decoding artificial intelligence appeared first on Atlantic Council.

]]>