GeoTech Cues - Atlantic Council https://www.atlanticcouncil.org/category/blogs/geotech-cues/ Shaping the global future together Thu, 15 Aug 2024 05:50:47 +0000 en-US hourly 1 https://wordpress.org/?v=6.5.5 https://www.atlanticcouncil.org/wp-content/uploads/2019/09/favicon-150x150.png GeoTech Cues - Atlantic Council https://www.atlanticcouncil.org/category/blogs/geotech-cues/ 32 32 The Great IT Outage of 2024 is a wake-up call about digital public infrastructure https://www.atlanticcouncil.org/blogs/new-atlanticist/the-great-it-outage-of-2024-is-a-wake-up-call-about-digital-public-infrastructure/ Tue, 06 Aug 2024 17:24:12 +0000 https://www.atlanticcouncil.org/?p=784093 The July 19 outage serves as a symbolic outcry for solution-oriented policies and accountability to stave off future disruptions.

The post The Great IT Outage of 2024 is a wake-up call about digital public infrastructure appeared first on Atlantic Council.

]]>
On July 19, the world experienced its largest global IT outage to date, affecting 8.5 million Microsoft Windows devices. Thousands of flights were grounded. Surgeries were canceled. Users of certain online banks could not access their accounts. Even operators of 911 lines could not respond to emergencies.

The cause? One mere faulty section of code in a software update.

The update came from CrowdStrike, a cybersecurity firm whose Falcon Sensor software many Windows users employ against cyber breaches. Instead of providing improvements, the update caused devices to shut down and enter an endless reboot cycle, driving a global outage. Reports suggest that insufficient testing at CrowdStrike was likely the cause.

However, this outage is not just a technology error. It also reveals a hidden world of digital public infrastructure (DPI) that deserves more attention from policymakers.

What is digital public infrastructure?

DPI, while an evolving concept, is broadly defined by the United Nations (UN) as a combination of “networked open technology standards built for public interest, [which] enables governance and [serves] a community of innovative and competitive market players working to drive innovation, especially across public programmes.” This definition refers to DPI as essential digital systems that support critical societal functions, like how physical infrastructure—including roads, bridges, and power grids—are essential for everyday activities.

Microsoft Windows, which runs CrowdStrike’s Falcon Sensor software, is a form of DPI. And other examples of DPI within the UN definition include digital health systems, payment systems, and e-governance portals.

As the world scrambles to fix their Windows systems, policymakers need to pay particular attention to the core DPI issues that underpin the outage.

The problem of invisibility

DPI, such as Microsoft Windows, is ubiquitous but also largely invisible, which is a significant challenge when it comes to managing risks associated with it. Unlike physical infrastructure, which is tangible and visible, DPI powers essential digital services without drawing public awareness. Consequently, the potential risks posed by DPI failures—whether stemming from software bugs or cybersecurity breaches—tend to be underappreciated and underestimated by the public.

The lack of a clear definition of DPI exacerbates the issue of its invisibility. Not all digital technologies are public infrastructure: Companies build technology to generate revenue, but many of them do not directly offer critical services for the public. For instance, Fitbit, a tech company that creates fitness and health tracking devices, is not a provider of DPI. Though it utilizes technology and data services to enhance user experience, it does not provide essential infrastructure such as internet services, cloud computing platforms, or large-scale data centers that support public and business digital needs. That said, Fitbit’s new owner, Google, known for its widely used browser, popular cloud computing services, and efforts to expand digital connectivity, can be considered a provider of DPI.

Other companies that do not start out as DPI may become integral to public infrastructure by dint of becoming indispensable. Facebook, for example, started out as a social network, but it and other social media platforms have become a crucial aspect of civil discourse surrounding many elections. Regulating social media platforms as a simple technology product could potentially ignore their role as public infrastructure, which often deserve extra scrutiny to mitigate potential detrimental effects on the public.

The recent Microsoft outage, from which airlines, hospitals, and other companies are still recovering, should now sharpen the focus on the company as a provider of DPI. However, the invisibility of DPI and the absence of appropriate policy guidelines for measuring and managing its risks result in two complications. First, most users who interact with DPI often do not recognize it as a form of DPI. Second, this invisibility leads to a misplaced trust in major technology companies, as users fail to recognize how high the collective stakes of a failure in this DPI might be. Market dominance and effective advertising have helped major technology companies publicize their systems as benchmarks of reliability and resiliency. As a result, the public often perceives these systems as infallible, assuming they are more secure than they are—until a failure occurs. At the same time, an overabundance of public trust and comfort with familiar systems can foster complacency within organizations, which can lead to inadequate internal scrutiny and security audits.

How to prevent future disruptions

The Great IT Outage of 2024 revealed just how essential DPI is to societies across the globe. In many ways, the outage serves as a symbolic outcry for solution-oriented policies and accountability to stave off future disruptions.

To address DPI invisibility and misplaced trust in technology companies, US policymakers should first define DPI clearly and holistically while accounting for its status as an evolving concept. It is equally crucial to distinguish which companies are currently providers of DPI, and to educate leaders, policymakers, and the public about what that means. Such an initiative should provide a clear definition of DPI, its technical characteristics, and its various forms, while highlighting how commonly used software such as Microsoft Windows is a form of DPI. A silver lining of the recent Microsoft/CrowdStrike outage is that it offers a practical, recent case study to present to the public as real-world context for understanding the risks when DPI fails.

Finally, Microsoft has outlined technical next steps to prevent another outage, including extensive testing frameworks and backup systems to prevent the same kind of outage from happening again. However, while industry-driven self-regulation is crucial, regulation that enforces and standardizes backup systems, not just with Microsoft, but also for other technology companies that may also become providers of DPI, is also necessary. Doing so will help prevent future outages, ensuring the reliability of infrastructure which, just like roads and bridges, props up the world.


Saba Weatherspoon is a young global professional with the Atlantic Council’s Geotech Center.

Zhenwei Gao is a young global professional with the Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs.

The post The Great IT Outage of 2024 is a wake-up call about digital public infrastructure appeared first on Atlantic Council.

]]>
A policymaker’s guide to ensuring that AI-powered health tech operates ethically https://www.atlanticcouncil.org/blogs/geotech-cues/a-policymakers-guide-to-ensuring-that-ai-powered-health-tech-operates-ethically/ Mon, 29 Jul 2024 20:00:57 +0000 https://www.atlanticcouncil.org/?p=782140 The private sector is moving quickly with the development of AI tools. The public sector will need to keep up with new strategies, standards, and regulations around the deployment and use of such tools in the healthcare sector.

The post A policymaker’s guide to ensuring that AI-powered health tech operates ethically appeared first on Atlantic Council.

]]>
The healthcare landscape is undergoing a profound transformation thanks to artificial intelligence (AI) and big data. However, with this transformation come complex challenges surrounding data collection, algorithmic decision-making, transparency, and workforce readiness.

That was a topic of a recent roundtable hosted by the GeoTech Center and Syntropy, a platform that works with healthcare, government, and other groups to collaborate on data in a single ecosystem geared toward informing healthcare research.

At the roundtable, experts from the public and private sectors discussed the complex challenges that arise with the transformation of the healthcare sector, arguing that these challenges lie not only in the development of the technology but also in the implementation and use of it.

As AI becomes more and more integrated with healthcare, policymakers must lay the groundwork for a future in which AI augments, rather than replaces, human expertise in the pursuit of better health outcomes for all. Below are the roundtable participants’ recommendations for policymakers, focusing on building strong data foundations, setting guidelines for algorithm testing and maintenance, fostering trust and transparency, and supporting a strong workforce.

1. Building strong data foundations

Data sets in the healthcare sector can be messy, small in scale, and lacking in diversity, leading to inherent biases that can skew the outcomes of AI-driven analyses—and decisions made following such analyses. Moreover, these biases are not always apparent and often require extensive work to identify. Thus, it is important at the outset to ensure the integrity, quality, and diversity of the data with which AI systems are trained.

The ability to do so will in part depend on the strength of the workforce and the infrastructure that collects and manages data. For example, hospitals—from large, well-funded facilities to smaller community-based hospitals with fewer resources—play an important role in collecting data.

A strong foundation for data is one that protects data. In an ideal world, all individuals (regardless of socioeconomic status or geographic location) can benefit from AI-driven healthcare technologies. With that come concerns about the protection of health data, particularly in countries with fragile democracies and low regulatory standards. The potential misuse of health data by governments around the world poses significant risks to individual privacy and autonomy, highlighting the need for robust legal and ethical frameworks to safeguard against such abuses.

To address such challenges with data collection and management, policymakers can begin by implementing the following:

  • Establishing a foundational data strategy for healthcare data that will improve patient equity by setting standards for inclusive data sets.
  • Allocating more resources and support for community hospitals to ensure that the data collected in such facilities is high quality and diverse.
  • Encouraging the development of robust data systems that allow for better data sharing, collaboration, and interoperability.
  • Optimizing patient benefits by providing transparency about not only the healthcare providers but also about anyone else participating in data sharing.

2. Establishing guidelines for algorithm testing and maintenance by healthcare-technology companies

While building an algorithm may be a complex process, understanding and testing its performance over time is even more challenging. The dynamic nature of the healthcare industry demands ongoing adaptation and refinement of algorithms to account for evolving patient needs, technological advancements, and regulatory requirements.

In addition to continuous testing, it’s important to recognize that the same algorithms may exhibit different risk profiles when deployed in different contexts. Factors such as patient demographics, disease prevalence, and healthcare infrastructure can all influence the performance and safety of AI algorithms. A one-size-fits-all approach to AI deployment in healthcare is neither practical nor advisable.

To ensure that algorithms are constantly tested and maintained, policymakers should consider the following:

  • Developing guidelines that inform developers, testers, data scientists, regulators, and clinicians about their shared responsibility of maintaining algorithms.
  • Instituting an oversight authority to continuously monitor the risks associated with decisions that have been made based on AI to ensure the algorithms remain accurate, reliable, and safe for clinical settings.

3. Fostering patient trust and transparency

As technology continues to impact the healthcare industry, and as patients often find themselves unaware of the integration of AI technologies into their care processes, it becomes more difficult for those patients to give informed consent. This lack of transparency undermines patient autonomy and raises profound ethical questions about patients’ right to be informed and participate in health-related decisions. A lack of awareness about the integration of AI technologies is just one layer to the problem; even if a patient knows that AI is playing a role in their care, they may not know about who sponsors such technologies. Sponsors pay for the testing and maintenance of these systems, and they may also have access to the patient’s data.

When AI technologies are involved in care processes, it is still important to achieve the right balance between human interaction and AI-driven solutions. While AI technologies hold great promise for improving efficiency and accuracy in clinical decision-making, they must be integrated seamlessly into existing workflows and complement (rather than replace) human expertise and judgment.

The willingness to accept AI in healthcare varies significantly among patients and healthcare professionals. To bridge this gap in acceptance and address other challenges with trust and transparency, policymakers should consider the following:

  • Providing transparent information about the capabilities, limitations, and ethical considerations of AI technologies.
  • Encouraging companies to use particular design methods that ensure that tools and practices align with privacy values and protect patient autonomy.
  • Producing guiding principles for hospitals to promote a deep understanding of the implications of AI and proactively addressing concerns related to workforce dynamics and patient care.
  • Developing strategies to strengthen institutional trust to encourage patients to share data, avoiding algorithms that develop in silos.
  • Awarding organizations with an integrity badge for transparency, responsible use, and testing.

4. Supporting a strong workforce

The integration of AI tools into healthcare workflows is challenging, particularly because of the changes in processes, job roles, patient-provider interactions, and organizational culture such implementation creates. It will be necessary to support the hospital workforce with strategies to manage this change and also with comprehensive education and training initiatives. While the focus here is on humans rather than technology, such support is just as integral to realizing the full potential of these innovations in improving patient outcomes and healthcare delivery.

Many hospitals lack the necessary capabilities to effectively leverage AI technologies to their fullest potential, but supporting technical assistance training and infrastructure could help in the successful deployment of AI technologies.

To navigate the changes that AI tools would bring to the workplace, policymakers should consider the following:

  • Releasing guidance to healthcare companies to anticipate change management, education, training, and governance.
  • Incentivizing private-sector technical assistance training and infrastructure to provide services to communities with fewer resources.
  • Creating training programs tailored to the specific needs of healthcare organizations so that stakeholders can ensure AI implementations are both effective and sustainable in the long run.

The private sector is moving quickly with the development of AI tools. The public sector will need to keep up with new strategies, standards, and regulations around the deployment and use of such tools in the healthcare sector.


Coley Felt is a program assistant at the GeoTech Center.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post A policymaker’s guide to ensuring that AI-powered health tech operates ethically appeared first on Atlantic Council.

]]>
The sovereignty trap https://www.atlanticcouncil.org/blogs/geotech-cues/the-sovereignty-trap/ Fri, 26 Jul 2024 19:11:47 +0000 https://www.atlanticcouncil.org/?p=781286 When sovereignty is invoked in digital contexts without an understanding of the broader political environment, several traps can be triggered.

The post The sovereignty trap appeared first on Atlantic Council.

]]>
This piece was originally published on DFRLab.org.

On February 28, 2024, a blog post entitled “What is Sovereign AI?” appeared on the website of NVIDIA, a chip designer and one of the world’s most valuable companies. The post defined the term as a country’s ability to produce artificial intelligence (AI) using its own “infrastructure, data, workforce and business networks.” Later, in its May 2024 earnings report, NVIDIA outlined how sovereign AI has become one of its “multibillion dollar” verticals, as it seeks to deliver AI chips and software to countries around the world.

On its face, “sovereign AI” as a concept is focused on enabling states to mitigate potential downsides of relying on foreign-made large AI models. Sovereign AI is NVIDIA’s attempt to turn this growing demand from governments into a new market, as the company seeks to offer governments computational resources that can aid them in ensuring that AI systems are tailored to local conditions. By invoking sovereignty, however, NVIDIA is weighing into a complex existing geopolitical context. The broader push from governments for AI sovereignty will have important consequences for the digital ecosystem on the whole and could undermine internet freedom. NVIDIA is seeking to respond to demand from countries that are eager for more indigenous options for developing compute capacity and AI systems. However, sovereign AI can create “sovereignty traps” that unintentionally grant momentum to authoritarian governments’ efforts to undermine multistakeholder governance of digital technologies. This piece outlines the broader geopolitical context behind digital sovereignty and identifies several potential sovereignty traps associated with sovereign AI.1

Background

Since its inception, the internet has been managed through a multistakeholder system that, while not without its flaws, sought to uphold a global, open, and interoperable internet. Maintaining this inherent interconnectedness is the foundation by which the multistakeholder community of technical experts, civil society organizations, and industry representatives have operated for years.

One of the early instantiations of digital sovereignty was introduced by China in its 2010 White Paper called “The State of China’s Internet.” In it, Beijing defined the internet as “key national infrastructure,” and as such it fell under the scope of the country’s sovereign jurisdiction. In the same breath, Chinese authorities also made explicit the centrality of internet security to digital sovereignty. In China’s case, the government aimed to address internet security risks related to the dissemination of information and data—including public opinion—that could pose a risk to the political security of the Chinese Communist Party (CCP). As a result, foreign social media platforms like X (formerly Twitter) and Facebook have been banned in China since around 2009. It is no coincidence that the remit of China’s main internet regulator, the Central Cyberspace Affairs Commission, has evolved from developing and enforcing censorship standards for online content to becoming a key policy body for regulating privacy, data security, and cybersecurity.

This emphasis on state control over the internet—now commonly referred to by China as “network sovereignty” or “cyber sovereignty” (网络主权), also characterizes China’s approach to the global digital ecosystem. Following the publication of its White Paper in 2010, in September of the following year, China, Russia, Tajikistan, and Uzbekistan jointly submitted an “International Code of Conduct for Information Security” to the United Nations General Assembly, which held that control over policies related to the governance of the internet is “the sovereign right of states”—and thus should reside squarely under the jurisdiction of the host country.

In line with this view, China has undertaken great efforts in recent years to move the center of gravity of internet governance from multistakeholder to multilateral fora. For example, Beijing has sought to leverage the platform of the Global Digital Compact under the United Nations to engage G-77 countries to support its vision. China has proposed language that would make the internet a more centralized, top-down network over which governments have sole authority, excluding the technical community and expert organizations that have helped shape community governance from the internet’s early days.

Adding to the confusion is the seeming interchangeability of the terms “cyber sovereignty,” used more frequently by China, and “digital sovereignty,” a term used most often by the European Union and its member states. While semantically similar, these terms have vastly different implications for digital policy due to the disparate social contexts in which they are embedded. For example, while the origin of the “cyber sovereignty” concept in China speaks to the CCP’s desire for internet security, some countries view cyber sovereignty as a potential pathway by which to gain more power over the development of their digital economies, thus enabling them to more efficiently deliver public goods to their citizens. There is real demand for this kind of autonomy, especially among Global Majority countries.

Democracies are now trying to find alternative concepts to capture the spirit of self-sufficiency in tech governance without lending credence to the more problematic implications of digital sovereignty. For example, in Denmark’s strategy for tech diplomacy, the government avoids reference to digital sovereignty, instead highlighting the importance of technology in promoting and preserving democratic values and human rights, while assisting in addressing global challenges. The United States’ analogous strategy invokes the concept of “digital solidarity” as a counterpoint, alluding to the importance of respecting fundamental rights in the digital world.

Thus, ideas of sovereignty, as applied to the digital, can have both a positive, rights-affirming connotation, as well as a negative one that leaves the definition of digital rights and duties to the state alone. This can lead to confusion and often obscures the legitimate concerns that Global Majority countries have about technological capacity-building and autonomy in digital governance.

NVIDIA’s addition of the concept of “sovereign AI” further complicates this terrain and may amplify the problems presented by authoritarian pushes for sovereignty in the digital domain. For example, national-level AI governance initiatives that emphasize sovereignty may undermine efforts for collective and collaborative governance of AI, reducing the efficacy of risk mitigations. Over-indexing on sovereignty in the context of technology often cedes important ground in ensuring that transformative technologies like AI are governed in an open, transparent, and rights-respecting manner. Without global governance, the full, uncritical embrace of sovereign AI may make the world less safe, prosperous, and democratic. Below we outline some of the “traps” that can be triggered when sovereignty is invoked in digital contexts without an understanding of the broader political contexts within which such terms are embedded.

Sovereignty trap 1: Sovereign systems are not collaborative

If there is one thing we have learned from the governance of the internet in the past twenty years, it is that collaboration sits at the core of how we should address the complexity and fast-paced nature of technology. AI is no different. It is an ecosystem that is both diverse and complex, which means that no single entity or person should be responsible for allocating its benefits and risks. Just like the internet, AI is full of “wicked problems,” whether regarding the ethics of autonomy or the effects that large language models could have on the climate, given the energy required to build large models. Wicked problems can only be solved through successful collaboration, not with each actor sticking its head in the sand.

Collaboration leads to more transparent governance, and transparency in how AI is governed is essential given the potential for AI systems to be weaponized and cause real-world harm. For example, many of the drones that are being used in the war in Ukraine have AI-enabled guidance or targeting systems, which has had a major impact on the war. Just as closed systems on the internet can be harmful for innovation and competition, as with operating systems or app stores built as “walled gardens,” AI systems that are created in silos and are not subject to a collaborative international governance framework will produce fewer benefits for society.

Legitimate concerns about the misappropriation of AI systems will only worsen if sovereign AI is achieved by imposing harsh restrictions on cross-border data flows. Just like in the case of the internet, data flows are crucial because they ensure access to information that is important for AI development. True collaboration can help level the playing field between stakeholders and address existing gaps, especially in regard to the need for human rights to underlie the creation, deployment, and use of AI systems.

Sovereignty trap 2: Sovereign systems make governments the sole guarantors of rights

Sovereign AI, like its antecedent “digital sovereignty,” means different things to different audiences. On one hand, it denotes reclaiming control of the future from dominant tech companies, usually based in the United States. It is important to note that rallying cries for digital sovereignty stem from real concerns about critical digital infrastructure, including AI infrastructure, being disrupted or shut down unilaterally by the United States. AI researchers have long said that actors in the Global Majority must avoid being relegated to the status of data suppliers and consumers of models, as AI systems that are built and tested in the contexts where they will actually be deployed will generate better outcomes for Global Majority users.

The other connotation of sovereign AI, however, is that the state has the sole authority to define, guarantee, or deny rights. This is particularly worrying in the context of generative AI, which is an inherently centralizing technology due to its lack of interpretability and the immense resources required to build large AI models. If governments choose to pursue sovereign AI by nationalizing data resources, such as by blocking cross-border transfer of datasets that could be used to train large AI models, this could have significant implications for human rights. For instance, governments might increase surveillance to better collect such data or to monitor cross-border transfers. At a more basic level, governments have a more essentialist understanding of national identity than civil society organizations, sociotechnical researchers, or other stakeholders who might curate national datasets, meaning government-backed data initiatives for sovereign AI are still likely to hurt marginalized populations.

Sovereignty trap 3: Sovereign systems can be weaponized

Assessing the risks of sovereign AI systems is critical, but governments lack the capacity and the incentives to do so. The bedrock of any AI system lies in the quality and quantity of the data used to build it. If the data is biased or incomplete, or if the values encoded in the data are nondemocratic or toxic, an AI system’s output will reflect these characteristics. This is akin to the old adage in computer science, “garbage in, garbage out,” emphasizing that the quality of output is determined by the quality of the input.

As countries increasingly rely on AI for digital sovereignty and national security, new challenges and potential risks emerge. Sovereign AI systems, designed to operate within a nation’s own infrastructure and data networks, might inadvertently or intentionally weaponize or exaggerate certain information based on their training data.

For instance, if a national AI system is trained on data that overwhelmingly endorses nondemocratic values or autocratic perspectives, the system may identify certain actions or entities as threats that would not be considered as such in a democratic context. These could include political opposition, civil society activism, or free press. This scenario echoes the concerns about China’s approach to “cyber sovereignty,” where the state exerts control over digital space in several ways to suppress information sources that may present views or information contradicting the official narrative of the Chinese government. This includes blocking access to foreign websites and social media platforms, filtering online content, and monitoring digital communications to prevent the dissemination of dissenting views or information deemed sensitive by the government. Such measures could potentially be reinforced through the use of sovereign AI systems.

Moreover, the legitimacy that comes with sovereign AI projects could be exploited by governments to ensure that state-backed language models endorse a specific ideology or narrative. This is already taking place in China, where the government has succeeded in censoring the outputs of homegrown large language models. This also aligns with China’s push to leverage the Global Digital Compact to reshape internet governance in favor of a more centralized approach. If sovereign AI is used to bolster the position of authoritarian governments, it could further undermine the multistakeholder model of internet and digital governance.

Conclusion

The history of digital sovereignty shows that sovereign AI comes with a number of pitfalls, even as its benefits remain largely untested. The push to wall off the development of AI and other emerging technologies with diminished external involvement and oversight is risky: lack of collaboration, governments as the sole guarantors of rights, and potential weaponization of AI systems are all major potential drawbacks of sovereign AI. The global community should focus on ensuring AI governance is open, collaborative, transparent, and aligned with core values of human rights and democracy. While sovereign AI will undoubtedly boost NVIDIA’s earnings, its impact on democracy is more ambiguous.

Addressing these potential threats is crucial for global stability and security. As AI’s impact on national security grows, it is essential to establish international norms and standards for the development and deployment of state-backed AI systems. This includes ensuring transparency in how these systems are built, maintained, released, and applied, as well as implementing measures to prevent misuse of AI applications. AI governance should seek to ensure that AI enhances security, fosters innovation, and promotes economic growth, rather than exacerbating national security threats or strengthening authoritarian governments. Our goal should be to advance the well-being of ordinary people, not sovereignty for sovereignty’s sake.


Konstantinos Komaitis is a nonresident fellow with the Democracy + Tech Initiative of the Atlantic Council’s Digital Forensic Research Lab.

Esteban Ponce de León is a research associate at the Atlantic Council’s Digital Forensic Research Lab based in Colombia.

Kenton Thibaut is a resident China fellow at the Atlantic Council’s Digital Forensic Research Lab.

Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.

Kevin Klyman is a visiting fellow at the Atlantic Council’s Digital Forensic Research Lab.

Further Reading

1    A note that countries could pursue sovereign AI in different ways, including by acquiring more AI chips and building more data centers to increase domestic capacity to train and run large AI models, training of fine-tuning national AI models with government support, building datasets of national languages (or images of people from the country) to enable the creation of more representative training datasets, or by blocking foreign firms and countries from accessing domestic resources that might otherwise be used to train their AI models (e.g., critical minerals, data laborers, datasets, or chips). This piece focuses on data, as it has been critical in discussions of digital sovereignty.

The post The sovereignty trap appeared first on Atlantic Council.

]]>
The sustainability questions policymakers should be asking about AI https://www.atlanticcouncil.org/blogs/geotech-cues/the-sustainability-questions-policymakers-should-be-asking-about-ai/ Fri, 21 Jun 2024 20:16:50 +0000 https://www.atlanticcouncil.org/?p=769177 Focusing on the sustainability of the AI industry offers an opportunity to steer entire industries toward contributing to a positive future.

The post The sustainability questions policymakers should be asking about AI appeared first on Atlantic Council.

]]>
Advances in artificial intelligence (AI) promise to achieve efficiency and progress for a variety of applications, including cutting-edge research, business, and whole industries. However, a major gap has opened: the need for transparency around the sustainability of AI initiatives throughout their whole lifecycle.

“Sustainability” is not just an environmental concern. In a broader sense, such as that employed by the United Nations Sustainable Development Goals (SDGs), sustainability requires improving human health, prosperity, and economic growth. And in discussing sustainability in AI, following a framing described by the Sustainable AI Lab at University of Bonn, it is important to discuss not only AI applications for sustainability, but also the sustainability of the AI industry itself.

The Organisation for Economic Co-operation and Development pointed out in November 2022 that it is important to consider both the direct sustainability impacts of computing as well as the indirect impacts of AI applications. However, the sustainability of computing seems to rarely be mentioned in current conversations about the governance of AI development and deployment or in new legislation or guidance such as the European Union (EU) AI Act, United Nations resolution A/78/L.49, Canada’s C27 bill, the Australian government’s interim response report, the White House executive order on AI and follow-on actions, or requirements in various US states. Instead, these and many other conversations around the world focus primarily on the also-critical topics of trustworthy AI, data privacy, alignment, and ethics.

If policymakers close this gap and focus today on the sustainability of the AI industry, they will have the opportunity to steer entire industries toward contributing to a positive future for both people and the planet.

To develop and leverage AI at the scale imagined by researchers, businesses, and governments, significant physical resources will be required for the design and deployment of the requisite computing hardware and software. While all AI approaches merit attention regarding their sustainability, generative AI is particularly resource-intensive: One such AI-powered chatbot is reportedly consuming the energy equivalent of 33,000 homes. (Note that while it is complicated to estimate such equivalences—given variations in operational timescales and details, home location, user numbers, etc.—various calculations have yielded estimated energy use equivalent to that of tens to hundreds of thousands of US homes.)

In addition, new data centers are being designed and built with high demand and at a fast pace, new AI-critical hardware components are being designed and fabricated, and organizations large and small are experiencing urgency in setting their short-term tactics and long-term strategies for AI. Demands on data centers will only continue to grow as AI-powered applications spread through industries and around the world. For example, a recent International Energy Agency report projected an increase in data center energy consumption in 2026 equivalent to the energy consumption of Japan.

Sustainability-focused regulation of AI, if deployed in a timely manner, can incentivize further improvements in the efficiency of data center operation and even the efficiency of software itself. Unfortunately, in the past, similar opportunities to promote the sustainable development of emerging technologies across industries have been missed. Failure to act during the rise of cryptocurrency mining has led to concerns today about the industry’s electricity and water use and to tension—internationally and domestically—around regulation and resource accessibility. For example, blockchain advocates filed a lawsuit against the US Department of Energy after the agency attempted to conduct an emergency survey of energy use by crypto miners, with the advocates arguing that it forced businesses to divulge sensitive information.

More broadly, global digitization and its associated technologies have spurred crises in e-waste, supply-chain fragility, and human rights, to name a few. Early consideration and prioritization of these issues could have prevented harmful patterns from becoming embedded in today’s systems and processes. Crucially, the projected demands on data centers in the coming years due to the rise of AI—in terms of hardware, power, cooling, land and water use, and access to physical infrastructure and network bandwidth (a particular concern in growing urban areas)—are likely to far outstrip demands associated with other technologies. The potential cumulative impacts of the AI revolution, including resource consumption and byproduct production, underscore the urgency of acting today.

Questions for a sustainable industry

In order for policymakers to introduce measures that encourage AI initiatives (and the entire AI industry) to be more sustainable—and to enable consumers to choose sustainable AI tools—there needs to be more transparency around the sustainability of developing, training (including storing data), and deploying AI models, and into the lifecycle of attendant hardware and other infrastructure. Policymakers should require that any new AI initiative, early in planning, complete sustainability reporting that helps estimate a proposed AI initiative’s physical impact on the planet and people, both now and in the future. This transparency is not only necessary for guiding future regulation and consumer choice; it is also a crucial part of fostering a culture that prioritizes developing and regulating technology with the future in mind.

The questions that policymakers should require organizations developing and deploying AI initiatives to answer should, to use a metaphor, address the entire “iceberg.” In other words, these questions should inquire about visible sustainability issues (such as the production of carbon dioxide) as well as less-visible issues below the “waterline” (such as whether the land underlying physical infrastructure could have been used for food production). These questions should cover three overarching categories:

  1. The consumption of readily detectable resources,
  2. The production of byproducts, and
  3. The achievement of broader sustainability goals.

In developing the questions for reporting, policymakers should gather insights from regulators, AI technologists, environmental scientists, businesses, communities near AI infrastructure, and end users. The questions should be useful (easily interpretable and insights from them point to potential areas of improvement), be extensible (applicable across current AI models and for future models), and result in reliable answers (roughly repeatable using distinct tools). Framing questions in a way that results in the reporting of concrete and preferably quantitative answers can set the stage for organizations to implement internal, dashboard-style approaches to sustainable AI development and deployment.

Beyond the wording of such questions, the timing of asking organizations matters as well. Answers to these questions should be reported in the earliest stages of an AI initiative’s planning, as they will help organizations conduct cost/benefit analyses and assess their return on investment. Real-time insights gathered during the operational lifetime of an AI initiative would enable not only monitoring of the project’s sustainability, but also execution of in silico experiments that could reveal novel operational, budgetary, and sustainability benefits. The questions should apply equally to all organizations in the public and private sectors using AI. Finally, policymakers should revisit the questions regularly as AI technologies continue to develop and be deployed—and as user needs and geopolitics change.

To capture these broad considerations in a concise set of questions, policymakers should look to the following key sustainability questions as a starting point.

What resources (inputs) are being consumed, directly and indirectly, throughout the lifecycle of an AI initiative?

  • How much energy is required? What are the sources of this energy? What percentage of this energy is renewable? What is the Power Usage Effectiveness for the initiative?
  • How much water is required, for example for cooling? What are the sources of this water and, for example, is it recycled water? How much of this water could have been suitable for human consumption or agricultural use? What is the Water Usage Effectiveness for the initiative?
  • How much land is required, for example for physical infrastructure? How close is each land parcel to human habitation? How much of this land is appropriate for food production or human habitation? How has local biodiversity been impacted by the use of this land for AI initiatives?
  • What rare metals are used and what are their sources? What are the sources of all metals required for hardware (such as graphics processing units, also known as GPUs)—land, ocean, or recycled? How are local communities and workers, in areas where these metals are procured, engaged or affected?

What byproducts (outputs) are being produced, directly and indirectly, throughout the lifecycle of an AI initiative?

  • How much greenhouse gas (embodied carbon) is produced, in metric tons of carbon dioxide equivalent?
  • What is the projected functional lifetime of each of the top five most abundant hardware components (such as central processing units—also known as CPUs—or GPUs)?
  • How much hardware waste is generated each year? How much of this waste is recycled effectively? How much of this waste will go to the landfill? How much waste pollutes the air and water? How much of this waste is toxic to human health and to the environment?
  • How much wastewater is produced, where does it go, and what can it be used for? Does it require further treatment? Can it be released back into the environment, and how would its release impact the environment (e.g., changing the water temperature of an ecosystem)? Is it used as gray water for other applications?

What broader sustainability opportunities are being harnessed through each AI initiative, using the United Nations’ SDGs as a framework?

  • How resilient is the associated physical infrastructure to earthquakes, floods, droughts, fires, storms, and other disasters? (SDGs 9 and 11)
  • How much of the broader labor force is local to the land and community being used for an AI initiative? How competitive are wages relative to the industry? (SDGs 1 and 8; broader questions around AI and labor disruption are critical but go beyond the scope of the current discussion)
  • How safe and healthy are working conditions for all contributing employees and contractors, both local and remote to the physical infrastructure of the initiative? (SDG 3)
  • How many educational opportunities are being produced by, and contributing to, the AI initiative? (SDG 4)
  • Regarding gender equality and broader inclusivity, what percentage of the workforce, both full-time and contract, identifies as a member of a marginalized group? Are efforts being made to reduce inequality within and between countries that provide AI workforce? (SDGs 5, 10, and 11)

Sticking the landing

Any organization working with AI—whether the organization is using in-house compute resources or external (cloud) service providers to develop and deploy AI models—should report their answers to the above sustainability questions yearly. Several tools and frameworks for reporting and answering some sustainability questions already exist; adopting new policies such as required reporting will spur the development of further tools.

For the time being, transparency obligations should fall on the organizations that are developing and deploying AI models—not on consumers who are only end users of AI models. That may change if large numbers of end users themselves end up training and developing their own models, causing a rapid expansion in AI-associated resource consumption and byproduct production. However, the question about where transparency obligations fall must be revisited regularly as AI technologies continue to develop rapidly and increasingly resource-intensive queries by users become possible. Crucially, hypothetical future affordances of AI must not be factored into the answers to these sustainability questions. For example, if the goal of an AI initiative is to help an end user reduce their carbon emissions, then that hypothetical future reduction must not be factored into the organization’s assessment of the carbon emissions of this AI initiative this year.

Policymakers should promote the monitoring and reporting of accurate information, rather than define “good” answers to these questions and penalize companies that do not meet those benchmarks. The EU’s Sustainable Finance Disclosure Regulation framework, with its emphasis on the power of transparency to shape and amplify market forces, can serve as a model for such an approach. If reported data were gathered in a single, open-access database (perhaps analogous to the European Single Access Point), then regulators, investors, technology companies, nonprofits, and the general public would be able to reward progress toward sustainability goals, over various time horizons, through a variety of mechanisms. It will be important to have external auditors to ensure the credibility of reported data, as they have done for sustainable finance.

Authority to penalize nonreporting should be assigned to a designated agency. For example, for the United States, while the Securities and Exchange Commission and environmental protection agencies at the federal and state levels could be logical candidates for this authority, this environment-centered approach overlooks the larger definitions of sustainability that could be encompassed by regulation. The Office of Science and Technology Policy at the White House may be more appropriate as a centralizing point, given this entity’s mandate to pursue “bold visions” and “unified plans” for US science and technology, as well as its ability to engage with external partners in industry, government, academia, and civil society. The agencies selected to carry out this responsibility should have direct lines of communication with their counterparts in other countries, enabling an agile and coordinated international response to rapid advances in AI.

Critically, international regulators, researchers, businesses, and other developers and users of AI should maintain a collaborative—rather than adversarial—relationship, as doing so could position sustainability as an investment in the future that delivers returns in the near to medium term. Subsidies from federal, state, or local governments could be used to assist small and medium-sized enterprises with the administrative and other financial burdens of this reporting, as mentioned by the EU’s AI Act. To ease the burden on organizations as they comply with potential future reporting and auditing requirements about the sustainability of their AI operations, policymakers should identify metrics and processes that can be used for parallel disclosures. For example, this can be done by requiring data that a single company could use to fulfill their transparency obligations for sustainable AI, sustainable finance, and sustainable corporate reporting such as the EU’s Corporate Sustainability Reporting Directive. Policymakers should also strive to maintain consistency internationally, perhaps following the EU’s lead in sustainability policy to date. Ultimately, the International Organization for Standardization should expand its current AI offerings to include standards for the transparency of AI sustainability (such as the questions suggested above), in alignment with its current standards addressing environmental management, energy management, social responsibility, and more.

A unique moment

The sustainability of AI is an urgent and pressing issue with long-lasting, global impacts. Today, the world still dedicates a great deal of attention to AI; the technology has not yet faded into the background or become ubiquitous and invisible, much like electricity has. However, the current moment—of unprecedented demand for the extraction and deployment of AI-enabling physical resources—is a crucial turning point.

Current and future generations depend on policymakers to steward the world’s resources sustainably, especially as a wave of global resource expenditure—with an anticipated long tail—approaches. In light of this impending growth, the opportunity for action is brief and the need is immediate. Although the scale of the challenge is daunting, international responses to ozone depletion and Antarctic geopolitical tension showcase the power of international collaboration for rapid and high-impact action.

With the framing of key sustainability questions, policymakers can gather the insights they need to adequately build a regulatory framework that encourages responsible resource expenditure and adapts to the inevitable shifts in a nascent industry. Transparency can empower consumers and investors to incentivize sustainable AI development. International cooperation on this effort can foster transparency and inspire collaborative action to build a future that is sustainable in many senses of the word.


Tiffany J. Vora is a nonresident senior fellow at the Atlantic Council’s GeoTech Center. She has a PhD in molecular biology from Princeton University.

Kathryn Thomas is the chief operating officer of Blue Lion. She has a PhD in water quality and monitoring from the University of Waterloo.

Anna Ferré-Mateu is a Ramón y Cajal fellow at the Instituto de Astrofísica de Canarias and an adjunct fellow at the Center of Astronomy and Supercomputing of the Swinburne University of Technology. She has a PhD in astrophysics from the Instituto de Astrofísica de Canarias.

Catherine Lopes is the chief data and AI strategist of Opsdo Analytics. She has a PhD in machine learning from Monash University.

Marissa Giustina is a research scientist and quantum electronics engineer. She has a PhD in physics from the University of Vienna. She conducted the research for this article outside of her employment with Google DeepMind and this article represents her own views and those of her coauthors.

The authors gratefully acknowledge David Rae of EY for fruitful discussions. The authors also acknowledge Homeward Bound Projects, which hosted the initial working session that led to the ideas in this article.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post The sustainability questions policymakers should be asking about AI appeared first on Atlantic Council.

]]>
One hundred years of energy transitions https://www.atlanticcouncil.org/blogs/geotech-cues/one-hundred-years-of-energy-transitions/ Wed, 08 May 2024 12:09:08 +0000 https://www.atlanticcouncil.org/?p=762424 Thousands of energy leaders, technology developers, and climate advocates gathered in Rotterdam, Netherlands from April 22-25 along the 26th World Energy Congress. Looking back at the first Congress, then called the World Power Congress, in London in 1924, global energy systems looked very different. In 1924, global oil production was around 2.8 million barrels per […]

The post One hundred years of energy transitions appeared first on Atlantic Council.

]]>
Thousands of energy leaders, technology developers, and climate advocates gathered in Rotterdam, Netherlands from April 22-25 along the 26th World Energy Congress. Looking back at the first Congress, then called the World Power Congress, in London in 1924, global energy systems looked very different. In 1924, global oil production was around 2.8 million barrels per day, compared to almost 102 million barrels per day last year. In other words, a country like South Korea or Canada consumes today the same amount of oil that the whole world needed one hundred years ago.

In 1924, there was no nuclear energy in the global energy mix and renewables were probably beyond the imagination of policy makers. By 1975, there were 200 nuclear power reactors in 19 different countries, but solar and wind power were still practically nonexistent. It was not until the early 2000s that solar and wind began to gain traction and break records year after year, contributing 18 percent of the world’s total energy consumption.

Unlike previous energy transitions, the current energy transition is not mainly driven by a more superior technology in terms of energy density and efficiency alone. The current energy transition is mainly responding to the climate impacts related to energy generation and consumption over the last century, or what economists call “externalities”. However, this transition towards more sustainable sources of energy is constrained by the increasing energy demand globally, especially in emerging and less developed economies. This makes the global energy transition measured against three main criteria, commonly referred to as the “energy trilemma”: energy security, energy affordability, and environmental sustainability.

To address the energy trilemma, four critical discussion themes emerged during the centenary World Energy Congress in Rotterdam: 1. Accelerating the deployment of existing solutions, 2. Scaling innovative technologies, 3. The interaction of energy and artificial intelligence (AI), and 4. Humanizing the energy transition.

Accelerating the deployment of existing solutions

There has been great progress on deployment of clean energy technologies over the last 20 years that made most of these technologies cost-competitive with fossil fuels in today’s market even without financial support. However, these technologies are not deployed fast enough to get us on track to meet our climate targets. For example, utility-scale wind and solar projects in the United States can take 4.5 years on average to obtain the necessary permits and navigate necessary environmental reviews for siting and construction.

There is a need for regulatory reforms that can strike the right balance between timely decisions on clean energy and infrastructure projects while preserving thorough environmental reviews. This balance would not leave project developers concerned about fluctuations in equipment costs while they are waiting on permits. The recent delays in offshore wind projects along US coastlines show how the combination of uncertainty, public acceptance, and affordability can impact the pace of the energy transition.

Scaling innovative technologies

In their 2023 World Energy Transitions Outlook, the International Renewable Energy Agency (IRENA) estimated that accelerating the deployment of renewables, energy efficiency, and electrification could achieve 69 percent of global emissions reductions needed to reach net zero by 2050. This would leave almost a third of the needed abatement to innovative, disruptive technologies (e.g., long-duration energy storage, hydrogen, e-fuels, carbon capture utilization and storage (CCUS), carbon dioxide removal) that have not been deployed at a scale large enough to meet our climate targets.

Renewables need long-duration energy storage at scale to ensure that this clean power is available anytime, day or night. Hydrogen and e-fuels will also be needed to support transportation and industrial applications that require liquid fuels, especially when their high heat demands can’t be met with renewables. Although there has been some progress on production of these cleaner fuels with major hydrogen production and infrastructure projects in the United States, Europe, Asia, and the Middle East, there are still gaps around how policies can create large demand signals to scale this market.

After all these mitigation efforts are exhausted, there will still be carbon emissions in the air that need to be subtracted from our limited carbon budget, a clear and direct role for carbon management technologies. CCUS can capture emissions from unabated industrial resources or remaining fossil-based power generation units. Some industries, even if they were completely powered by clean energy, would still emit carbon. For example, almost 60 percent of the emissions from cement production are unavoidable process emissions from the calcination process (i.e., the decomposition of calcium carbonate into calcium oxide and carbon dioxide) rather than energy-related emissions. As the Intergovernmental Panel on Climate Change indicated, climate mitigation efforts to date have been insufficient and there is a need for scaling carbon dioxide removal technologies to reduce the risks of climate overshoot and complement emissions reduction efforts, especially from hard-to-abate sectors.

The interaction of energy and artificial intelligence (AI)

Over the last few years, AI has emerged as an enabler for the energy transition. Generative AI can play an important role in modernization of the electric grid by enabling grid operators to make better, faster decisions and optimize loads. Also, AI can be one of the most effective abatement tools for fugitive methane emissions by using satellite and aerial measurements to quantify, map, and predict methane leaks. This approach can revolutionize fugitive emissions abatement by moving from preventive measures to predictive and even descriptive measures.

A successful emissions abatement strategy relies heavily on accurate measurements which can be challenging for companies with complex operations and supply chains. However, the automation capabilities of AI can drastically reduce the margin of error from manual inputs and provide accurate, real-time data to help companies identify where to focus their emissions reduction activities.

Additionally, AI can be used to improve the performance and increase the output of solar photovoltaic (PV) and concentrated solar power (CSP) systems by predicting solar output, reducing corrective maintenance costs, and providing a more accurate forecast of the capacity available to the grid from electric loads that can be turned on or off depending on the balance between electric demand and generation.

With the increase in electric vehicles (EV) adoption, AI-enabled energy demand forecasts will be critical in avoiding peak charges and reducing the burden on the grid. Although AI has many advantages to enable the energy transition, its huge energy footprint remains a challenge as countries plan for future energy needs. The International Energy Agency (IEA) estimated that electricity consumption from data centers could double by 2026 to be roughly equivalent to the electricity consumption in Japan.

Humanizing the energy transition

Since this is the first energy transition in history that is not driven solely by technology metrics, it is critical to ensure that local communities are involved in climate action plans. With disproportionate impacts of climate change around the world, especially in the global South where most countries have historically contributed far less to global emissions, energy equity and climate justice should be at the center of the energy transition. There is a dire need for bridging the climate finance gap and facilitating the flow of funds to the Global South by de-risking investments and regulatory reforms in developing economies. This would also require institutional reforms in the finance sector to move towards new financing mechanisms (e.g., concessional finance, credit guarantee, grants) rather than the unfair, high-interest loans from multilateral banks that have been the main vehicle for energy infrastructure and development projects for decades.

The world has gone through many energy transitions before, but what probably differentiates the current energy transition is that it encompasses multiple energy transitions happening at the same time. There are transformations across the globe in energy generation processes, infrastructure development, energy policy frameworks, environmental laws, and financing mechanisms. While these transformations do not need to happen at the same pace in every region, countries should ensure that the collective energy transition efforts are sufficient to meet our global climate targets. We have enough tools today to shape the next hundred years of energy.

A hundred years ago, participants at the World Energy Congress were probably not as concerned about energy-related climate impacts. However, we know better today about these impacts and how we can meet global energy needs without compromising environmental integrity. In his masterpiece, One Hundred Years of Solitude, Gabriel Garcia Marquez showed us how it took five generations to decipher the prophecy of the Buendia family and their town of Macondo. We have deciphered the energy trilemma, but global action is imperative to navigate the storm and tackle the climate crisis. If we learned anything from Marquez’s magical realism and the fate of the city of Macondo, we should work together to accelerate the deployment of energy and climate solutions that can shape a brighter future for people and planet.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post One hundred years of energy transitions appeared first on Atlantic Council.

]]>
EU AI Act sets the stage for global AI governance: Implications for US companies and policymakers https://www.atlanticcouncil.org/blogs/geotech-cues/eu-ai-act-sets-the-stage-for-global-ai-governance-implications-for-us-companies-and-policymakers/ Mon, 22 Apr 2024 15:51:29 +0000 https://www.atlanticcouncil.org/?p=757285 The European Union (EU) has made a significant step forward in shaping the future of Artificial Intelligence (AI) with the recent approval of the EU Artificial Intelligence Act (EU AI Act) by the European Parliament. This historic legislation, passed by an overwhelming margin of 523-46 on March 13, 2024, creates the world’s first comprehensive framework […]

The post EU AI Act sets the stage for global AI governance: Implications for US companies and policymakers appeared first on Atlantic Council.

]]>
The European Union (EU) has made a significant step forward in shaping the future of Artificial Intelligence (AI) with the recent approval of the EU Artificial Intelligence Act (EU AI Act) by the European Parliament. This historic legislation, passed by an overwhelming margin of 523-46 on March 13, 2024, creates the world’s first comprehensive framework for AI regulation. The EU will now roll out the new regulation in a phased approach through 2027. The bloc took a risk-based approach to AI governance, strictly prohibiting AI practices that are considered unacceptable, with some AI systems classified as high-risk, while encouraging responsible innovation.

The law is expected to enter into force between May and June after approval from the European Council; its impact is expected to extend far beyond the EU’s borders, reshaping the global AI landscape and establishing a new standard for AI governance around the world.

While reviewing the EU AI Act’s requirements for tech companies, it is critical to distinguish between core obligations that will have the greatest impact on AI development and deployment and those that are more peripheral.

Tech companies should prioritize transparency obligations such as disclosing AI system use, clearly indicating AI-generated content, maintaining detailed technical documentation, and reporting serious incidents or malfunctions. These transparency measures are critical for ensuring AI systems’ trustworthiness, accountability, and explainability, which are the Act’s primary goals.

More peripheral requirements exist, such as registering the classified high-risk AI systems in a public EU database or establishing specific compliance assessment procedures. Prioritizing these key obligations allows tech companies to demonstrate their commitment to responsible AI development while also ensuring compliance with the most important aspects of the EU AI Act.

The Act strictly prohibits certain high-risk AI practices that have been deemed unacceptable. These prohibited practices include using subliminal techniques or exploiting vulnerabilities to materially distort human behavior, which has the potential to cause physical or psychological harm, particularly to vulnerable groups such as children or the elderly. The Act prohibits social scoring systems, which rate individuals or groups based on social behavior and interactions. These systems can be harmful, discriminatory, and racially biased.

Certain AI systems are classified as high-risk under the EU AI Act due to their potential to have a significant or severe impact on people and society. These high-risk AI systems include those used in critical infrastructure like transportation, energy, and water supply, where failures endanger citizens’ lives and health. AI systems used in educational or vocational training that affect access to learning and professional development, such as those used to score exams or evaluate candidates, are also considered high-risk. The Act also classifies AI systems used as safety components in products, such as robot-assisted surgery or autonomous vehicles, as high-risk, as well as those used in employment, worker management, and access to self-employment, such as resume-sorting software for recruitment or employee performance monitoring and evaluation systems.

Furthermore, AI systems used in critical private and public services, such as credit scoring or determining access to public benefits, as well as those used in law enforcement, migration, asylum, border control management, and the administration of justice and democratic processes, are classified as high-risk under the EU AI Act.

The Act set stringent requirements for these systems include thorough risk assessments, high-quality datasets, traceability measures, detailed documentation, human oversight, and robustness standards. Companies running afoul of the new rules could face fines of up to 7 percent of global revenue or $38 million, whichever is higher.

The Act classifies all remote biometric identification systems as high-risk and generally prohibits their use in publicly accessible areas for law enforcement purposes, with only a few exceptions. The national security exemption in the Act has raised concerns among civil society and human rights groups because it creates a double standard between private tech companies and government agencies when it comes to AI systems used for national security, potentially allowing government agencies to use these same technologies without the same oversight and accountability.

The EU AI Act has far-reaching implications for US AI companies and policymakers. Companies developing or deploying AI systems in or for the EU market will have to navigate the Act’s strict requirements, which requires significant changes to their AI development and governance practices. This likely would involve investments to improve risk assessment and mitigation processes, ensure the quality and representativeness of training data, implement comprehensive policies and documentation procedures, and establish strong human oversight mechanisms. Besides significant penalties, noncompliance with the Act’s provisions may result in reputational damage which can be significant and long-lasting, resulting in a severe loss of trust and credibility, as well as widespread public backlash, negative media coverage, customer loss, partnerships, investment opportunities, and boycott calls.

The AI Act’s extraterritorial reach means that US companies will be impacted if their AI systems are used by EU customers. This emphasizes the importance for US AI companies to closely monitor and adapt to the changing regulatory landscape in the EU, regardless of their primary market focus.

As Thierry Breton, the European Commissioner for Internal Market, said on X (formerly Twitter), “Europe is NOW a global standard-setter in AI”. The EU AI Act will likely shape AI legislation in other countries by setting a high-risk-based regulation standard for AI governance. Many countries are already considering the EU AI Act as they formulate their AI policies. François-Philippe Champagne, Canada’s Minister of Innovation, Science, and Industry, has stated that the country is closely following the development of the EU AI Act as it works on its own AI legislation. A partnership that is already strong with the boost of their joint strategic digital partnership to address AI challenges by implementing the EU-Canada Digital Partnership.

Similarly, the Japanese government has expressed an interest in aligning its AI governance framework with the EU’s approach as Japan’s ruling party is expected to push for AI legislation within 2024. As more countries find inspiration in the EU AI Act, similar AI penal provisions are likely to become the de facto global standard for AI regulation.

The impact of the EU AI Act on the technology industry is expected to be significant, as companies developing and deploying AI systems will need to devote resources to compliance measures, which raise costs and slow innovation in the short term, especially for startups. However, the Act’s emphasis on responsible AI development and protecting fundamental rights is the region’s first attempt to set up guardrails and increase public trust in AI technologies, with the overall goal of promoting long-term growth and adoption.

Tech giants, like Bill Gates, Elon Musk, Mark Zuckerberg, and Sam Altman have repeatedly asked governments to regulate AI. Sundar Pichai, CEO of Google and Alphabet, stated last year that “AI is too important not to regulate”, and the EU AI Act is an important step toward ensuring that AI is developed and used in a way that benefits society at large.

As other countries look to the EU AI Act as a model for their own legislation, US policymakers should continue engaging in international dialogues to ensure consistent approaches to AI governance globally, helping to ease regulatory fragmentation.

The EU AI Act is a watershed moment in the global AI governance and regulatory landscape, with far-reaching implications for US AI companies and policymakers. As the Act approaches implementation, it is critical for US stakeholders to proactively engage with the changing regulatory environment, adapt their practices to ensure compliance and contribute to the development of responsible AI governance frameworks that balance innovation, competitiveness, and fundamental rights.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post EU AI Act sets the stage for global AI governance: Implications for US companies and policymakers appeared first on Atlantic Council.

]]>
AI governance on a global stage: Key themes from the biggest week in AI policy https://www.atlanticcouncil.org/blogs/geotech-cues/ai-governance-on-a-global-stage-key-themes-from-the-biggest-week-in-ai-policy/ Thu, 16 Nov 2023 14:09:05 +0000 https://www.atlanticcouncil.org/?p=703805 The week of October 30, 2023 was a monumental week for artificial intelligence (AI) policy globally. As a quick recap: In the United States, one of the longest Executive Orders (EO) in history was signed by President Biden, aimed at harnessing the opportunities of AI while also seeking to address potential risks that may be […]

The post AI governance on a global stage: Key themes from the biggest week in AI policy appeared first on Atlantic Council.

]]>
The week of October 30, 2023 was a monumental week for artificial intelligence (AI) policy globally. As a quick recap: In the United States, one of the longest Executive Orders (EO) in history was signed by President Biden, aimed at harnessing the opportunities of AI while also seeking to address potential risks that may be presented by future evolutions of the technology. In the United Kingdom, international stakeholders came together to discuss risks at the “frontier” of AI and how best to mitigate them. Twenty-nine countries signed on to the Bletchley Park Declaration (“Declaration”). In the midst of all of this, the Hiroshima AI Process launched by Japan under the Group of Seven (G7) released its International Guiding Principles for Organizations Developing Advanced AI Systems (“G7 Principles”) as well as a voluntary International Code of Conduct for Organizations Developing Advanced AI Systems.

In light of what was arguably one of the busiest (and perhaps the most impactful) weeks in AI policy since the public release of ChatGPT thrust AI into the spotlight almost a year ago, there’s a lot to unpack. Below are some key themes that emerged from the conversation and items that will be increasingly relevant to pay attention to as efforts to govern the technology progress globally.

A commitment to taking a risk-based approach to regulation of AI technology

Across all of the activities of last week, one of the themes that came through was the continued emphasis on a risk-based approach, as these authors highlighted in their piece on transatlantic cooperation.

While some efforts more directly called this out than others, it was a throughput that should rightfully remain top of mind for international policymakers moving forward. For example, the chapeau of the G7 Principles calls on organizations to follow the guidelines set forth in the Principles “in line with a risk-based approach,” and the theme is reiterated in several places throughout the rest of the document. In the Declaration, countries agreed to pursue “risk-based policies…to ensure safety in light of such risks.” The Executive Order was a bit less direct in its commitment to maintaining a risk-based approach, though it seems to suggest that this was its intent in laying out obligations for “dual-use foundation model providers” in Section 4.1. The application of requirements for this set of models appears to indicate that the Administration sees heightened risk associated with this sort of model, though moving forward a clear articulation of why these obligations are the most appropriate approach to managing risk will be critical.

In digesting all of the activities last week, a central theme to note is that the global conversation seems to be moving away from an approach focused solely on regulating uses of AI but is now also seeking to regulate the technology itself. Indeed, all of the major efforts last week discussed risks inherent to “frontier models” and/or “advanced AI systems,” suggesting that there are model-level risks that might require regulation, in addition to context-specific, use-case based governance.

What to look out for:

How the term “frontier models” is formally defined, including whether international counterparts are able to come to agreement on the technical parameters of a “frontier model”

  • The Declaration discusses ‘frontier models’ as “those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks—as well as relevant specific narrow AI that could exhibit capabilities that cause harm—which match or exceed the capabilities present in today’s most advanced models” while the Executive Order provides an initial definition of a “dual-use foundation model” as “(i) any model that was trained using a quantity of computing power greater than 1026 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 1023 integer or floating-point operations; and (ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 1020 integer or floating-point operations per second for training AI”. The G7 Principles merely discuss “advanced AI systems” as a concept, using “the most advanced foundation models” and “generative AI systems” as illustrative types of these systems.
  • With that being said, it will be interesting to see how definitions and technical parameters are established moving forward, particularly because using floating point operations per second seems to be the way the conversation is currently trending but is not a particularly future-proof metric.

Continued conversation about what the right approach is to govern risks related to “frontier” AI systems

  • With the introduction of both voluntary agreements (e.g., in the Declaration and in the G7 Code of Conduct) as well as specific obligations (e.g., in Section 4.2 and 4.3 of the Executive Order), there is sure to be additional discussion about what the right approach is to managing risk related to these models. In particular, keep an eye out for conversations about what the right regulatory approach might be, including how responsibilities are allocated between developers and deployers.

Whether specific risks related to these models are clearly articulated by policymakers moving forward

  • In some regard, it seems to be a foregone conclusion that “frontier” AI systems will need to be regulated because they present a unique or different set of risks than those AI systems that already exist. However, in setting out regulatory approaches, it is important to clearly define the risk that said regulation is seeking to address, demonstrating why that approach is the most appropriate one. While the EO seems to indicate that the US government has concerns about these AI models amplifying biosecurity and cybersecurity related risks, clearly explaining why the proposed obligations are the right one for the task is going to be critical. Also, there continues to be some tension between those who are focused on “existential” risks associated with these systems and those that are focused on addressing “short-term” risks.

A major focus on the role of red-teaming in AI risk management

Conversations over the last week focused on red-teaming as a key component of AI risk management. Of course, this was not the first time red-teaming has been highlighted as a method to manage AI risk, but it came through particularly clearly in the EO, the G7 Principles, and in the Declaration as a tool of choice to manage AI risk. To be sure, Section 4 of the AI EO directs the National Institute for Standards and Technology (NIST) to develop red-teaming guidelines and requires providers of “dual-use foundation models” to provide information, including results of red-teaming tests performed, to the US government. Principle 1 in the G7 Principles discusses the importance of managing risk throughout the AI lifecycle and references red-teaming as one of the methods to discover and mitigate identified risks and vulnerabilities. The Declaration doesn’t use the term “red-teaming” in particular but talks about the role of “safety testing” in mitigating risk (though it is not clear from the statement what exactly this testing will look like).

One of the interesting things to note is that in the context of AI systems, the term “red-teaming” seems to indicate a broader set of practices than just attacking and/or hacking a system in an attempt to gain access and involves testing for flaws and vulnerabilities of an AI system in general. This is a departure from how red-teaming is generally understood in the cybersecurity context, likely because there is an ongoing discussion around what tools are most appropriate to test for and mitigate a broader range of risks beyond those related to security and red-teaming presents a useful construct for such testing.

Despite red-teaming being a significant focus of conversations as of late, it will be critical for policymakers to avoid overemphasizing on red-teaming. Red-teaming is one way to mitigate risk but is not the only way. It should be undertaken in conjunction with other tools and techniques – like disclosures, impact assessments, and data input controls, to ensure a holistic and proportionate approach to AI risk management.

What to look out for:

If and how different jurisdictions define “red-teaming” for AI systems moving forward, and whether a common understanding can be reached. Will the definition remain expansive and encapsulate all types of testing and evaluation or will it be tailored to a more specific set of practices?

How red-teaming is incorporated into regulatory efforts moving forward

  • While the events of the last week made clear that policymakers are focused on red-teaming as a means by which to pressure test AI systems, the extent to which such requirements are incorporated into regulation remains to be seen. The Executive Order, with its requirement to share the results of red-teaming processes, is perhaps the toothiest obligation coming out of the events of the past week, but as other jurisdictions begin to contemplate their approaches, don’t be surprised if red-teaming takes on a larger role.

How the institutes announced during the UK Safety Summit (the US AI Safety Institute and the UK AI Safety Institute) will collaborate with each other

  • The United States announced the establishment of the AI Safety Institute, which will be charged with developing measurement and evaluation standards to advance trustworthy and responsible AI. As Section 4.1 tasks NIST with developing standards to underpin the red-teaming required by Section 4.2 of the Executive Order, this Institute, and its work with other similarly situated organizations around the world, will be key to implementation of the practices outlined in the EO and beyond.

An emphasis on the importance of relying upon and integrating international standards

A welcome theme that emerged is the essential role that international technical standards and international technical standards organizations play in advancing AI policy. Section 11 of the AI Executive Order, focused on advancing US leadership abroad, advocates for the United States to collaborate with its partners to develop and implement technical standards and specifically directs the Commerce Department to establish a global engagement plan for promoting and developing international standards. Principle 10 of the G7 Principles also emphasizes the importance of advancing and adopting international standards. The Declaration highlights the need to develop “evaluation metrics” and “tools for testing.”

International technical standards will be key to advancing interoperable approaches to AI, especially because we are seeing different jurisdictions contemplate different governance frameworks. They can help provide a consistent framework for developers and deployers to operate within, provide a common way to approach different AI risk management activities, and allow companies to build their products for a global marketplace, reducing the risk of fragmentation.

What to look out for:

Which standards efforts are prioritized by nations moving forward

  • As mentioned above, the United States and the United Kingdom both announced their respective Safety Institutes during last week’s Summit. The UK’s Institute is tasked with focusing on technical tools to bolster AI safety, while NIST is tasked with a wide-range of standards activities in the Executive Order, including developing guidelines for red-teaming, AI system evaluation and auditing, secure software development, and content authentication and provenance.
  • Given the plethora of standards that are needed to support the implementation of various risk management practices, which standards nations choose to prioritize is an indicator of how they are thinking about risks related to AI systems, their impact on society, and regulatory efforts more broadly. In general, nations appear to be coalescing around the need to advance standards to support the testing and evaluation of capabilities of advanced AI systems/frontier AI systems/dual-use foundation models.

How individual efforts are mapped to or otherwise brought to international standards development organizations

  • In addition to the activities taking place within national standards bodies, there are also standardization activities taking place at the international level. For example, International Standards Organization/International Electrotechnical Commission Joint Technical Committee 1 Subcommittee 42 has been hard at work on a variety of standards to help support testing of AI systems and recently completed ISO 42001. As such, mapping activities are helpful for fostering consistency and for allowing organizations to understand how one standard relates to another.
  • Participating in and/or bringing national standards, guidelines, and best practices to international standards bodies helps to create buy-in, facilitate interoperability, and allows for alignment. As individual nations continue to consider how best to approach implementation of various risk management practices, continuing to prioritize participation in these efforts will be crucial to a truly international approach.

The events of the last week helped to spotlight several areas that will remain relevant to the global AI policy conversation moving forward. In many ways, this is only the beginning of the conversation, and these efforts offer an initial look at how international collaboration might progress, and in what areas we may see additional discussion in the coming weeks and months.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post AI governance on a global stage: Key themes from the biggest week in AI policy appeared first on Atlantic Council.

]]>
Digital discrimination: Addressing ageism in design and use of new and emerging technologies https://www.atlanticcouncil.org/blogs/geotech-cues/digital-discrimination-addressing-ageism-in-design-and-use-of-new-and-emerging-technologies/ Tue, 07 Nov 2023 20:20:40 +0000 https://www.atlanticcouncil.org/?p=699957 This article originally appeared in the 2023 edition of AARP’s The Journal. To attract, retain, and support a more diverse workforce, companies will need to be deliberate and equitable in creating inclusive working conditions and lifelong learning opportunities to maintain digital literacy. Digital technology is becoming increasingly integrated into everyday life, but aging populations have […]

The post Digital discrimination: Addressing ageism in design and use of new and emerging technologies appeared first on Atlantic Council.

]]>
This article originally appeared in the 2023 edition of AARP’s The Journal.

To attract, retain, and support a more diverse workforce, companies will need to be deliberate and equitable in creating inclusive working conditions and lifelong learning opportunities to maintain digital literacy.

Digital technology is becoming increasingly integrated into everyday life, but aging populations have not fully participated in this technology revolution or benefited fully from today’s connected and data-rich world—disparities characterized as the digital divide and data divide, respectively. According to research by FP Analytics (with support from AARP),1 although 60 percent of the world’s population is connected to the Internet, access to digital services is unevenly distributed, especially for older adults and people in low- and middle-income countries. Even within an advanced economy like the United States, 15 percent of adults age 50 or older do not have Internet access and 60 percent say the cost of high-speed Internet is a barrier to access.2 Lack of digital access kept about 40 percent of older US adults from getting much- needed online services at home during the COVID-19 pandemic. This divide is deeper for women, who in developed nations are 21 percent less likely to be online and in developing countries 52 percent less likely to be online than men.3 No or slow Internet access is just one of multiple barriers preventing many seniors from accessing or fully benefiting from digital services, which are rarely designed or provided with aging populations in mind or made accessible to people who may have limited physical and/or cognitive abilities.

The need to bridge the divides facing older individuals will only grow over time if patterns of digital discrimination1 are allowed to persist. Not only are digital services and data applications becoming more prevalent, but the proportion of older adults is increasing due to changing demographics. Globally, there will be 1.4 billion people age 60 years old or older by 2030.4 Within the United States, by 2034 the aging population is set to outpace its youth with a projected 77 million people age 65-plus compared with the projected 76.5 million people under 18.5 At the same time, the working-age population is shrinking and is projected to decrease from 60 percent in 2020 to 54 percent by 2080.6 As older populations grow, it is imperative that societies take steps to ensure that new and emerging technologies bring benefits to all people and do not deepen the digital divide: technology and data must be more accessible and digital fluency improved for everyone.

The Atlantic Council’s GeoTech Center is working to identify and communicate what is required so that emerging technologies can enter use widely across the globe for public benefit while also identifying and mitigating potential risks, including to the aging population and underserved communities, globally. The Center thereby is an essential bridge between technologists and national and international policy makers, bringing together subject matter experts, thought leaders, and decision makers through purposeful convenings to consider the broader societal, economic, and geopolitical implications of new and emerging technologies; leverage technology to solve global challenges; and develop actionable tech policy, partnerships, and programs.

As discussed in a recent report,7 the GeoTech Center shares AARP’s concerns about the growing digital and data divides. The data divide can be reduced only if there is optimization in data processing, monitoring, and evaluation of the policies and programs from major stakeholders and alignment of public–private partnerships for social good. Monitoring the growth of digital skills and access to data is especially critical for tracking progress, yet a 2021 study found that of the 150 most influential technology companies, only 12 published impact assessments.8 Key recommendations for stakeholders—including private-sector firms, governments, and civil society organizations—are the need to train a more inclusive generation of professionals; create new governance structures; and ensure equitable access, tracking, and control over data across society. These recommendations are especially important for aging adults and other demographic groups historically left offline and left behind in the rush to introduce new technologies and services into society.

As seniors become a larger component of the workforce and the importance of digital tools continues to grow, private-sector stakeholders who want to retain and benefit from the value such experienced workers can bring will need to double down on digital upskilling and reskilling for their employees. Moreover, as the proportion of the conventional working- age population declines, seniors and other underrepresented sectors of society will become an increasingly important segment of the workforce. To attract, retain, and support a more diverse workforce, companies will need to be deliberate and equitable in creating inclusive working conditions and lifelong learning opportunities to maintain digital literacy.9

It is also important to note that just offering digital literacy lessons is not enough; for the training sessions to be effective, older adults must be engaged and enjoy them. Digital training for older adults works best when they are delivered by institutions that seniors trust and have experience working with. These institutions can range from libraries to religious networks. Additionally, the learning programs and instructors themselves must be compatible with the needs of the users. Older adults tend to engage better with instructors who have shared their experiences or are seniors themselves. They also tend to learn better with one-on-one instruction, which can be more personalized than automated training sessions.3

Although a range of ongoing activities exist across the public and private sector to bridge the digital and data divides associated with current technology, all sectors need to proactively work together to ensure that future technologies benefit aging populations and do not deepen those divides. For example, as discussed in a 2019 White House report, various emerging technologies have significant potential to assist older adults with successfully aging in place.10 For these and other technologies to enter into use in ways that achieve that potential, the knowledge, skills, and abilities of seniors (and others historically left behind by technology) must be considered throughout the design process and product life cycle.

Among the many distinct needs and preferences to be considered are trust; privacy; and physical abilities including vision, hearing, and dexterity.

Finally, beyond simply considering consumer needs, technologists should include the aging population, caregivers, and others directly in the development process. Having a more inclusive, user-centered design process for a range of technologies should become common procedure—both for technologies used at home and for those essential for success in the future workplace. For technologies to support aging in place, it is important to include older individuals themselves and not just caregivers, recognizing that not all people will have access to caregivers or expensive care resources. Given that most technology is developed with younger customers in mind, achieving this vision of inclusive development will require additional public–private partnerships that can further bridge the gap between a more diverse set of users and developers. Bridging this gap would not only make technologies more effective but also provide increased economic opportunity. People with disabilities, many of whom are seniors, have a total spending power of approximately $6 trillion. Including this population in the design process could encourage them to become future consumers, therefore creating economic value for technology companies.3 The establishment of additional smart partnerships will be crucial in the next decade if we are to prevent age from being a barrier to benefiting from new and emerging technologies in society and the future of work.


1 Expanding Digital Inclusion for Aging Populations. 2022. FP Analytics and AARP. https://fpanalytics.foreignpolicy.com/wp-content/uploads/sites/5/2022/09/Expanding-Digital-Inclusion-Aging-Populations-AARP.pdf.

2 “AARP Urges Older Americans Struggling to Access and Afford High-Speed Internet to Enroll in New Emergency Broadband Benefit Program.” 2021. MediaRoom. https://press.aarp.org/2021-5-12-AARP-Urges-Older-Americans-Struggling-to-Access-and-Afford-High-Speed-Internet-to-Enroll-in-New-Emergency-Broadband-Benefit-Program#:~:text=According%20to%20the%20study%2C%2015.

3 Digital Inclusion for All: Ensuring Access for Older Adults in the Digital Age. 2023. FP Analytics and AARP. https://www.aarpinternational.org/file%20library/resources/2023-a-fpa-aarp-digital-inclusion-final.pdf.

4 WHO. 2022. “Ageing and Health.” World Health Organization. October 1, 2022. https://www.who.int/news-room/fact-sheets/detail/ageing-and-health.

5 Rogers, Luke, and Kristie Wilder. 2020. Shift in Working-Age Population Relative to Older and Younger Americans. United States Census Bureau , June. https://www.census.gov/library/stories/2020/06/working-age-population-not-keeping-pace-with-growth-in-older-americans.html.

6 Rogers, Luke, and Kristie Wilder. 2020. Shift in Working-Age Population Relative to Older and Younger Americans. United States Census Bureau , June. https://www.census.gov/library/stories/2020/06/working-age-population-not-keeping-pace-with-growth-in-older-americans.html.

7 Wise, Solomon, and Joseph T. Bonivel. 2022. The Data Divide: How Emerging Technology and Its Stakeholders Can Influence the Fourth Industrial Revolution. Atlantic Council . https://www.atlanticcouncil.org/in-depth-research-reports/report/the-data-divide-how-emerging-technology-and-its-stakeholders-can-influence-the-fourth-industrial-revolution/.

8 Digital Inclusion Benchmark. 2023. World Benchmarking Alliance. https://www.worldbenchmarkingalliance.org/publication/digital-inclusion/.

9 See, for example, a discussion of artificial intelligence in the context of building human capacity and preparing for labor market transitions in the age of automation at https://www.atlanticcouncil.org/programs/geotech-center/ai-connect/ai-connect-webinar-7/

10 Emerging Technologies to Support and Aging Population. 2019. The White House. https://trumpwhitehouse.archives.gov/wp-content/upload s/2019/03/Emerging-Tech-to-Support-Aging-2019.pdf.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Digital discrimination: Addressing ageism in design and use of new and emerging technologies appeared first on Atlantic Council.

]]>
Mobilizing public science priorities through the American commercial space industry https://www.atlanticcouncil.org/blogs/geotech-cues/mobilizing-public-science-priorities-through-the-american-commercial-space-industry/ Fri, 13 Oct 2023 17:48:58 +0000 https://www.atlanticcouncil.org/?p=691540 The next ten years stand to be transformative for improving life on our planet while simultaneously achieving a sustainable and thriving human presence beyond Earth.

The post Mobilizing public science priorities through the American commercial space industry appeared first on Atlantic Council.

]]>
Every ten years, for a variety of disciplines, the National Academy of Sciences, Engineering, and Medicine (NASEM) is responsible for providing consensus recommendations, designated as “decadal surveys”, on behalf of the scientific community to shape national research programs. On September 12, 2023, NASEM released “Thriving in Space—Ensuring the Future of Biological and Physical Sciences Research: A Decadal Survey for 2023-2032”, the second such volume to guide the research priorities and respective programs of the Biological and Physical Sciences (BPS) Science Mission Directorate at the National Aeronautics and Space Administration (NASA).

With an energetic, but considerably demanding, national space policy led by the White House’s 2021 Space Priorities Framework, the United States is committed to supporting a robust scientific and technological-development ecosystem to advance scientific discovery, address the needs of human and robotic space exploration, and deliver terrestrial benefits. However, these goals cannot be realized by the public or private sectors without comprehensive guidance and strategic investment in these three research priorities. The new survey provides an opportunity to actualize these goals through parallel mobilization in national research programs and the commercial space industry.

Investing in the future through BPS research

The report poses eleven key scientific questions (KSQs) pertaining to three cross-cutting themes in basic BPS research: how biological and psychological mechanisms adapt to space; the scientific principles—such as chemical, physical, and biological processes in extraterrestrial environments—that must be considered as humans live and travel in space; and understanding phenomena hidden by gravity or terrestrial limitations that become accessible in space. In addition, the report lays out two ambitious research campaigns that, by 2033, can deliver transformative contributions to NASA’s Moon to Mars program and the growing space economy while simultaneously benefitting life on Earth (including for climate adaptation), should they be funded adequately.

Crucially, in order to “retire” these KSQs in the next decade, the committee recommended that NASA increase funding to the BPS program tenfold above current levels before 2030. The report cites a 2023 BPS budget of only $85 million (of the $100 million originally requested by NASA). During the Space Shuttle era, NASA’s allocated budget was regularly more than two percent of US spending; in 2023, it was 0.44 percent. The committee’s recommendation for an order of magnitude increase in NASA’s funding of BPS therefore seeks to close the gap between today’s spending and tomorrow’s ambitious goals. With a Congress likely reluctant to grant nearly a billion more dollars to the NASA budget, the private sector will be key to achieving the goals of the decadal survey.

Amplifying public efforts with commercial space

Growing commercial activity in and for outer space offers a timely opportunity for the government to leverage the capabilities of the private sector to not only conduct transformative research, but also to support the subsequent phases of technology development and engineering, to facilitate deployment to space missions (public or private), and to more strongly establish business cases for wider economic activity in space. Conversely, while the report is intended to inform NASA’s direction of public investment in specific research priorities, it should also serve to guide industry to strategically align its investment priorities with the national interest.

Since the 1950s, national space agencies have been the earliest—and often only—funders of the science and engineering of space exploration. Research activities supported by the Science Mission Directorate (including BPS) are crucial for applications that will be used by other NASA Mission Directorates, particularly for crewed missions. While a NASA-centric approach previously delivered strong returns on investment for scientific advances and benefits to the American economy, today’s innovators in space technology frequently seek alternatives to NASA’s often cumbersome and restrictive mechanisms for funding. To date, early public investment in small and medium enterprises have enabled entrepreneurial space start-ups to gain footing in a highly competitive, high barrier-to-entry market that the majority of risk-averse investors avoid. Continuing, and even increasing, such investment will accelerate progress toward closing the vast gaps in our basic understanding of space and the universe that have been highlighted by the latest decadal survey.

Notably, the current private American space industry is an asset with more capital than most national civil space programs. In the decade since the previous survey was released, the gross output of the US space economy grew from $180.6 billion to $211.6 billion (nominal dollars), per the Bureau of Economic Analysis. Supporting 360,000 private industry jobs and $51.1 billion of private industry compensation in 2021, the US commercial space sector has become a potent force in the international economy. Meanwhile, the United States spent 60 percent of all global government space spending in 2021, affirming the country’s commitment to civil space leadership. NASA’s role in pioneering the space industry should not be understated nor undervalued; however, NASA’s efforts must be complemented by the decentralized models of funding and project management of the private sector if the KSQs and overarching themes of the 2023 report are to be achieved by 2033.

Enabling transformative advances through the coming decades

The International Space Station National Laboratory has served as a stable research station in low-Earth orbit (LEO) for over two decades. With its decommission on the horizon, researchers need a new research platform in LEO to continue to conduct critical microgravity research. If private entities are to assume this responsibility through commercial LEO destinations (CLDs)—and profit thereby, thus supporting a vibrant space economy and feeding further research and development in the private space sector—then public-private coordination today is essential to ensure that orbital laboratories and other infrastructure have reliable customers in NASA and other public organizations. During these conversations, it is also important to engage international partners to ensure that the global scientific community is supported in future efforts for in-space research, including the sovereign research platforms of other countries. (The importance of harnessing allied space capabilities for exploration, security, and commerce was the subject of a recent three-part issue brief from The Atlantic Council.)

A much larger concern with this public-private integration, which involves CLDs along with all privately developed technology in space, involves the science-design requirements considered (or overlooked) by industry in their business models. While many companies conducting basic BPS research are contracted by NASA to do so, many companies are independently developing capabilities (including for research) that could concurrently enhance public efforts if opportunities for collaboration were prominently available from the start of the research lifecycle, as recommended by the decadal survey. Such engagement would also promote interoperability between public and private systems, which will become increasingly critical as the space economy expands. To support the national research ecosystem, American companies should identify potential areas of public collaboration early in order to ensure that business models are attractive to government customers and that research platforms maintain interoperability between public and private users.

Overall, the recommendations of the 2023 BPS decadal survey are a crucial component of the path to ensuring that the 21st century yields continuous decades of scientific advancement. In particular, the next ten years stand to be transformative for improving life on our planet while simultaneously achieving a sustainable and thriving human presence beyond Earth. By strengthening the links between the public and private sectors, the very best minds and a diversity of stakeholders can be drawn into taxpayer-funded space research through a steady cadence of funding opportunities, as well as into the broader space workforce. For that reason, the goals presented in the decadal survey should be used to anchor the research priorities and business opportunities of NASA, private and public researchers, and the private space industry, with appropriate and committed support from government.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Mobilizing public science priorities through the American commercial space industry appeared first on Atlantic Council.

]]>
Data strategies for an AI-powered government https://www.atlanticcouncil.org/blogs/geotech-cues/data-strategies-for-an-ai-powered-government/ Wed, 11 Oct 2023 17:50:55 +0000 https://www.atlanticcouncil.org/?p=688288 Recommendations for the federal enterprise from planning to piloting to procurement.

The post Data strategies for an AI-powered government appeared first on Atlantic Council.

]]>

The public sector’s increasing demand for tools that can apply artificial intelligence (AI) to government data poses significant challenges for federal chief information officers (CIOs), chief data officers (CDOs), and other information technology (IT) stakeholders in the data ecosystem. The technical applications of AI built on federal data are extensive, including hyper-personalization of information and service delivery, predictive analytics, autonomous systems, pattern and anomaly detection, and more.

This community must simultaneously manage growing data lakes (on premises and cloud-based), ensure they follow best practices in governing and stewarding their data, and address demand from both within and outside government for equitable and secure access to data, while maintaining strong privacy protections.

These demands require each data owner to have a data infrastructure appropriate for AI applications. However, many federal IT systems do not yet have that infrastructure to support such applications—or a strategy to establish one—and many stakeholders may not yet recognize what data infrastructure and resources are required or whom to ask for help developing strategies and plans to make AI and machine-learning (ML) applications possible. Moreover, the resources needed are regularly not controlled by the CIO/CDOs or are often undervalued and overlooked by those who set budgets. Finally, not all agencies have the workforce with the skills necessary to build, maintain, and apply an AI/ML-ready data mesh and data fabric.

In two private webinars, the GeoTech Center explored:

  • Maximizing the value of data through AI and how that capacity can be expanded.
  • The importance of infrastructure, resources, and workforce skills needed to create an AI/ML-ready data mesh and data fabric.
  • The challenges that agencies face to create these data infrastructures along with effective strategies, tactics, approaches, best practices, and lessons learned.

Key findings, to date, can be structured into four categories:

  1. Establishing human capital and an “AI-ready” culture
  2. Planning and developing data-centric AI applications
  3. Piloting data-centric AI applications
  4. Procuring and/or scaling data-centric AI applications

1. Establishing human capital and an “AI-ready” culture

Human capital and workforce challenges are foundational: it is critically important to integrate humans into the AI and data management process across the ecosystem and application lifecycles and obtain leadership buy-in on strategic approaches to leveraging data that balance other concerns such as security. Solutions include creating cross-functional task forces and working groups, embedding technology with operational users for immediate feedback, and rewarding (limited) risk-taking on AI projects.

There is a broad need to improve AI literacy across the enterprise, especially at the leadership level, to have meaningful conversations on how to move forward. With ML being at the forefront there is a tendency, especially out in the field, to confuse ML as the only form of AI that exists currently. To improve AI literacy, agencies need to focus on human and organizational behavior; for example, incentivizing actual uptake of a training course and making it part of everyone’s job description to learn about AI. It is also important to develop more acceptance of risk related to AI applications; users are not inherently accepting of automated systems with the potential to take on large significant aspects of their work. But they will find value in tools that augment their capabilities but do not take over their decision making.

For organizations that have not routinely leveraged data for analysis or policy insights (with or without AI), identifying and socializing mission-specific needs and insights that can be addressed helps establish an initial stakeholder community—for example, priority and/or long-standing personnel, financial, operational, or policy questions where existing or new data and AI might reveal actionable insights.

Agencies should consider:

  • Creating cross-functional task forces and working groups around getting data AI-ready–the solution is at least as much about organizational adaptation as it is about technological change. Such groups can also be tasked with identifying key questions where data and AI might reveal actionable insights.
  • Rewarding (limited) risk-taking on AI projects, balancing ‘misuse versus missed use’ and encouraging an approach of ‘yes, unless’ for data sharing.
  • Examining roadblocks within the organization to move the use-case forward and ensure the organization has an adequate workforce needed given the scale of each problem.
  • Sending clear demand signals and, explaining the value proposition and scalability of data-centric AI applications, making clear the return on investment and measures of effectiveness.
  • Working with service providers to understand how they use AI and how to use AI through their services.

2. Planning and developing data-centric AI applications

Federal agencies maintain and/or have access to an overwhelming quantity of data—structured and unstructured, qualitative and quantitative, inputs and outputs—that create unique data governance challenges. Data is often poorly structured and not organized in a way amenable to equity assessments or application/use by AI tools. Therefore, it is important to consider up front the data management pipeline, including how to efficiently obtain, clean, organize, and deploy data sets; i.e., getting the data “right” before using it in an AI application. Similarly, when possible, proactively consider what applications might arise from a data set before collection, which will improve the subsequent usability of that data and reduce ‘application drift’ (changes in use and scope beyond the original intention).

The pipeline includes not just the technical aspects of data management but also the need to treat data management as a business problem. Moreover, data is often siloed and generally inaccessible to those outside of the organization in which it was created, preventing its use in machine learning applications outside of this closed ecosystem. Data may also be separated between networks, locations, and classifications. These silos hamper the efficient use of information.

AI relies on data, but senior leaders tend to look at AI as a capability rather than a technology that can create a capability when applied to the right data and/or problem—if agencies don’t have an application in mind, they need to start thinking about getting their data AI-ready—including thinking about getting their infrastructure ready. Digital modernization across the US government is an ongoing challenge, so infrastructure is often not being built fast enough or is being outsourced to the private sector, creating additional challenges, including privacy and security.

It is important to consider the value of curated or specialized data and the tension between quantity and quality. The challenge lies in choosing between high-precision, function-specific applications and more generalized data that can be applied to a broader range of solutions.

The White House Office of Science and Technology Policy (OSTP) is working to help agencies turn data into action by collecting data purposefully in such a way that they can more easily parse it and achieve equitable outcomes. OSTP views equitable data as data that allows for the rigorous assessment of the extent to which government programs yield fair, just outcomes for all individuals.

Some agencies are finding value in AI-generated synthetic data, that can be higher quality and more representative than human-labeled data for selected ML applications while addressing concerns about protecting privacy associated with real data (even when anonymized). However, recursive use of synthetic data—i.e., using information generated from synthetic data in repeated cycles of training—should be avoided as it leads to spurious output.

In the health sector, a major challenge continues to be the need to convert images (such as faxes, which are still widely used) into structured data suitable for AI applications.

Agencies should consider:

  • Operationalizing data repositories into a data fabric, allowing for organization-wide access to data resources.
  • Establishing a dedicated point of contact within agencies for data repository requests.
  • Ensuring that customers know where their data is and who owns it.
  • Treating data as a product that requires trust and continually seek feedback on how the data is being made available and used.
  • Balancing data push (collecting data for an application) vs. data pull (using data for an application) by evaluating what applications can be done with existing data rather than collecting new data.
  • Proactively considering what applications might arise from new data before collecting that data.
  • Working across the interagency to create common tags for fair data and shared test data sets.
  • Integrating privacy principles from the start in projects, including through privacy impact assessments, using appropriate types of encryption everywhere it is required, along with appropriate access controls.
  • Stratifying applications based on risk, which will enable a graded approach, where lower risk applications can be pursued with relatively fewer restrictions, and higher risk applications would require rigorous testbed deployments and sufficient human oversight.

3. Piloting data-centric AI applications

As for the planning stage, managing and maintaining the data pipeline is key, from getting the data, cleaning and organizing the data, to deploying the data. Treating data as a business problem is just as important as treating it as a hardware/infrastructure problem. Ontology is very important to get data right and must evolve as the uses of the data evolve. Once there is a common ontology, the data can be released to model trainers and industry partners. The order of the workflow is as follows: getting data ‘right’…then deploying models utilizing that data. However, it is difficult to get program managers to think strategically about data up front, resulting in myriad challenges down the road. “Think about data first!

When it comes to more specialized or narrowly focused data sets, one must prioritize quality over quantity. There is a tension between solving a particular problem with high precision vs a general problem with many solutions. Quantity may be a quality all on its own that can be addressed separately. There may be pressure to “go big” or “not at all”.

During pilots it is important to integrate the application with human systems, getting it into the hands of users and continuously obtaining feedback, reexamining the data, and updating the software in real time.

Agencies should consider:

  • Embedding the technology with operational users as quickly as possible for immediate feedback and to identify unanticipated problems through extensive testing, including infrastructure and data challenges. To maximize this feedback loop, organizations may need to rethink where humans and machines interact and be willing to expose the user (or at least early users) to some level of complexity that may in the end be hidden.
  • Being flexible, agile, and forward leaning with people on the “forward tip of the spear” and embedding data professionals in projects who understand both the data and the mission.
  • Picking key anchor projects that have high leverage potential. Find an anchor tenant and build it so to quickly start interacting with data, managing access control, and optimizing the data platforms.
  • Identifying applications and pilots that could be expanded across sectors/organizations, for example, by adding additional data repositories into a data fabric, creating agency-wide models, and/or making data resources available across the enterprise.

4. Procuring and/or scaling data-centric AI applications

It is common in the US government to consider the commercial sector ahead of the government in adopting new technology, including AI. Although AI-enabled applications have matured enough to be readily adopted for US government applications, commercial providers require data of sufficient quality to engender trust in the insights or outputs from deployed applications. Partnerships with the private sector are needed to move the needle across the US government—the current attention and momentum in commercial and government sectors are data and AI are exciting.

A promising area to scale is leveraging large language models (LLMs) ability to write code and find bugs. ChatGPT and other LLMs can now be as effective as previous bespoke tools. (A common non-result for ChatGPT is a request for more information, which when available can lead to useful results.) These technologies will help produce tools to find and fix bugs quickly—even when only applied to “easier/shallower” bugs, this application would be a huge win.

A challenge to scaling LLMs/generative AI at this time is that hallucination rates can approach 30 percent—this rate needs to be brought down before widespread use. Although the capabilities of these systems will ultimately lead to valuable applications, getting hallucination rates down will be difficult. The promise is great, but we have not yet reached the full potential, technology-wise.

Generative AI also introduces new threats that must be acknowledged and rapidly addressed, especially for misinformation from sound/video/image production. Moreover, AI agents will be connected to the Internet—and therefore the physical world. In combination with reinforcement learning such agents could be capable of autonomously causing harm in the physical world.

Agencies should consider:

  • Involving downstream users from the start of the transition process and making sure they know what’s coming and can provide feedback.
  • Developing a culture and messaging strategy that makes clear that the agency is not deploying AI without considering broader applications and future scale, and strongly encourages partnerships while still maintaining focus on the priority projects.
  • Identifying solutions in the start-up community that can be shaped for different applications along with emerging capabilities that could be useful soon (and get the US government ready to adopt).
  • Moving to contract using industry best-of-breed design principles and flexible acquisition authorities when available.
  • Building transparency, testing and evaluation, privacy safeguards and other elements of responsible AI into the planning and procurement process.

Acknowledgements

These findings and recommendations were produced by the Atlantic Council GeoTech Center following private discussions with IT, data science, and AI leaders and experts in both the public and private sectors. This effort has been made possible through the generous support of Accenture Federal Services and Amazon Web Services.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Data strategies for an AI-powered government appeared first on Atlantic Council.

]]>
Consensus standards and measurement methods will be critical to mitigating climate change and fostering sustainability https://www.atlanticcouncil.org/blogs/geotech-cues/consensus-standards-and-measurement-methods-will-be-critical-to-mitigating-climate-change-and-fostering-sustainability/ Sun, 13 Aug 2023 16:23:58 +0000 https://www.atlanticcouncil.org/?p=672338 The green transition agenda—a shift toward clean energy and sustainable growth—is a top priority for many countries worldwide. Focus on this transition is increasing rapidly as new data presents a somber indication of how the world is being affected by the extent and pace of climate change, especially as emissions continue to increase. Last week, […]

The post Consensus standards and measurement methods will be critical to mitigating climate change and fostering sustainability appeared first on Atlantic Council.

]]>
The green transition agenda—a shift toward clean energy and sustainable growth—is a top priority for many countries worldwide. Focus on this transition is increasing rapidly as new data presents a somber indication of how the world is being affected by the extent and pace of climate change, especially as emissions continue to increase. Last week, the US Department of Energy announced it will spend up to $1.2 billion for the first large-scale facilities in the United States for carbon dioxide removal (CDR), “to address legacy carbon dioxide pollution and complement rapid emissions reductions.” Whether one is working across the transition to monitor emissions or quantify the effectiveness of mitigation measurements, a key and often overlooked issue is the need for global consensus standards and measurement methods.

In the United States, the Council on Environmental Quality has released updated guidance that calls for federal agencies to take a much broader look at the climate change impacts on major new infrastructure projects, government policies and federal decisions. In January, the Biden administration outlined a blueprint for using billions in public dollars to expand the use of electric vehicles and low-carbon fuels to help put the United States on a course to eliminate carbon emissions from the transportation sector by 2050. In Europe, the European Commission has adopted a set of proposals to make the climate, energy, transport, and taxation policies fit for reducing net greenhouse gas emissions by at least 55 percent by 2030, compared to 1990 levels.

Climate change-related policy discussions are also taking place at the World Trade Organization, at Group of Seven Ministers meetings, within the Indo-Pacific Economic Framework’s Clean Economy Pillar, in the Asia-Pacific Economic Cooperation, and elsewhere. A core question in these discussions is how to make real, measurable progress in addressing the effects of climate change.

The new investment in CDR highlights the importance, in particular, of reliably tracking greenhouse gases (GHGs) in the atmosphere, with atmospheric carbon dioxide being the primary human source of climate change. Atmospheric carbon dioxide concentrations are now higher than at any time in at least two million years. CDR and other “negative emissions” technologies are secondary to the main goal of reducing emissions and reaching net-zero as quickly as possible, which will require concomitant economic, social, and technological change.

What tools are in the policy “toolbox” to achieve shared climate goals at the depth and speed required by the rapidly changing climate? Achieving climate neutrality and energy independence will require the accelerated diffusion of existing technologies, further cost reductions, as well as innovation in new technologies—all of which will need to be supported by globally-adopted standards and measurements.

National measurement institutes, universities, and non-governmental organizations are working globally to amass the needed data to support accurate measurements and monitoring of greenhouse gas emissions. Some measurement methods currently in use have relatively low accuracy, resulting in both over- and under-reporting of emissions. There is a clear need for equitable access to high-quality greenhouse gas monitoring systems to standardize, aggregate, and expand the measurements—and, therefore, the data—that inform decision-making. It will also be important for both private sector and government stakeholders to agree on what data is needed, how that data is measured, and how that data is reported.

High quality, standardized data is crucial to demonstrate the effectiveness of the various carbon capture and carbon conversion and storage solutions that are being deployed across the globe by both governments and businesses. Validated tools, methods, and data dissemination will enable the identification of the most efficient and economically viable approaches for emissions reduction. Application of these tools before and after deployment of energy efficient or alternative energy solutions can authenticate their effectiveness. Their application also has the potential for improved monitoring tools to enable validation and verification of the impacts of measures and policies that have historically been difficult to measure.

In the standards space, the International Organization for Standardization (ISO) has embraced the United Nation’s Sustainable Development Goals (UN SDGs), which include taking urgent action to combat climate change and its impacts. ISO published guidance to provide standards developers with a systematic approach to addressing sustainability issues in a coherent and consistent manner related to the objective and scope of the standard being developed or revised. Many other standards developing organizations, including ASTM International and UL Standards and Engagement, have mapped their standardization projects to one or more of the UN SDGs related to climate change.

On a practical level, standards intended to address climate change effects and to foster sustainability must be comprehensive, technically robust, and cover the entire range of emission sources, manufacturers, and applications. Work ongoing in ISO Technical Committee 207 on environmental management includes standards for life cycle assessment, environmental auditing, and environmental labelling. ASTM’s portfolio includes standards for steel decarbonization as well as broader sustainability standards. New mechanisms to ensure compliance and a framework to assess life-cycle emissions will also be required.

There is a role for regulation to provide additional leverage to expand implementation of voluntary standards but also a role for the private sector and academia working through standards development organizations to fill identified needs. Challenges include defining the baseline for a GHG inventory, lack of agreement on environmental product declarations and life cycle assessments, and fragmented standards in some sectors, such as steel.

Assuring the right discussions are happening across government and the standards community is important given the cross-cutting nature of climate solutions. Opportunities for progress include agreed emissions intensity performance thresholds, broadly accepted product level standards on lifecycle assessment and carbon footprint, and greater collaboration among stakeholders on product category rules and environmental product declarations to facilitate transmitting information across supply chains and better meet market needs. Finally, discussions should ensure that those most impacted by climate change help shape both the underlying standards and related policy proposals.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Consensus standards and measurement methods will be critical to mitigating climate change and fostering sustainability appeared first on Atlantic Council.

]]>
The regulators are coming for your AI https://www.atlanticcouncil.org/blogs/geotech-cues/the-regulators-are-coming-for-your-ai/ Mon, 22 May 2023 21:06:10 +0000 https://www.atlanticcouncil.org/?p=648528 The Group of Seven (G7) has lobbed the latest of three notable salvos in signaling that governments around the globe are focused on regulating Generative Artificial Intelligence (AI). The G7 ministers have established the Hiroshima AI Process, an inclusive effort for governments to collaborate on AI governance, IP rights (including copyright), transparency, mis/disinformation, and responsible […]

The post The regulators are coming for your AI appeared first on Atlantic Council.

]]>
The Group of Seven (G7) has lobbed the latest of three notable salvos in signaling that governments around the globe are focused on regulating Generative Artificial Intelligence (AI). The G7 ministers have established the Hiroshima AI Process, an inclusive effort for governments to collaborate on AI governance, IP rights (including copyright), transparency, mis/disinformation, and responsible use. Earlier in the week, testimony in the United States highlighted the grave concerns governments have and why these discussions are necessary.

“Loss of jobs, invasion of personal privacy at a scale never seen before, manipulation of personal behavior, manipulation of personal opinions, and potentially the degradation of free elections in America.” These are the downsides, harms, and risks of Generative AI as Senator Josh Hawley (R-MO) recapped after the Senate Judiciary Committee hearing on May 16 saying, “this is quite a list.”

Just last week, the European Union (EU) AI Act moved forward, paving the way for a plenary vote in mid-June on its path to becoming law.

Make no mistake, regulation is coming.

While the EU is indexing their regulation from the risk associated with the activities AI is affecting, with ratings of low/minimal, limited, high, and unacceptable. In doing so, the EU is signaling that the higher the risk, the more regulation–—where those activities with unacceptable risk are banned outright (e.g., real-time biometric identification in public spaces, including for uses such as social credit scoring and certain aspects of predictive policing). Specifically responding to the latest developments in Generative AI, the EU is also looking to require organizations to be more responsible by assessing the environmental damage of training Large Language Models (LLMs), which are quite energy/compute-intensive, and forcing model makers to disclose “the use of training data protected under copyright law.” Another provision calls for the creation of a database to catalog where, when, and how models in the two mid-tiers of risk are being deployed in the EU. The devil is in the details…and the details haven’t been solidified yet.

At the May 16, 2023 Judiciary Committee hearing in the US, lawmakers sent strong signals of support for an entirely new agency to regulate the use of AI. Surfaced in testimony from Sam Altman (OpenAI), Christina Montgomery (IBM), and Gary Marcus (NYU), were calls for licensing systems that are capable of certain tasks (e.g., technology that can persuade, manipulate, or influence human behavior or beliefs, or create novel biological agents) or require a certain amount of compute/memory to train/operate; while this risk-based approach is similar to the current EU AI Act, it differs by suggesting a regulator could require pre-review and licensing in certain situations. This license could be taken away when compliance with yet-to-be-defined safety standards falls short (e.g., if models can self-replicate or self-exfiltrate into the wild). Commensurate with pre- and post-review of deployed AI systems, the testimony uniformly made calls for some form of impact assessments and/or audits.

Both governments have recognized the unique needs for competition and suggest that their regulatory regimes will be significantly less onerous on small innovators and startups, simultaneously encouraging innovation while stifling the ability of AI innovation at scale to cause harm to humanity.

Perhaps legislators have learned a lesson from the blanket protections Section 230 of the Communications Decency Act has provided to social media companies for decades, shielding them from liability for the content that people share on their services. These protections were recently sidestepped, and thus upheld, in a May 18, 2023 Supreme Court decision where the justices said they, “decline to address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief.” It’s clear the Court is calling on Congress to amend the laws, especially when taken in context of Justice Elena Kagan’s comments during oral arguments where she said, “every other industry must internalize the cost of its conduct. Why is it that the tech industry gets a pass? We’re a court, we really don’t know about these things. These are not like the nine greatest experts on the Internet.”

Given the interest in legislating for and regulating the tech industry, new sub-sectors within the tech industry should be paying attention. Over the last fifteen years, the demand for regulatory reform has been focused on social media companies that host user-generated content, but with Generative AI, the focus will quickly shift. With strong signals from European and US regulators, it won’t be long until social media companies are in the minority of all the tech companies staring down the barrel of regulation.

During the Judiciary Committee hearing, the spotlight was solely focused on Generative AI. Based on suggestions put forth in testimony for regulation, hyperscalers and infrastructure companies could see regulation sooner than social media companies. For example, if systems require a certain amount of compute to be licensed, then hyperscalers and infrastructure companies may have to provide this data to regulators and be subjected to audit and governance controls. The implications expand as the use cases for Generative AI continue to proliferate and the promise that these real-time technologies will yield real-world outcomes for humanity grows by the day.

Already, consumer use of Generative AI is growing at an order of magnitude faster pace than any consumer technology in history. For this growth to transfer to the enterprise and scale to augment global workforces, foundational models will need to be fine-tuned for specific domains and research and development funds invested to reduce the costs associated with training and executing generative models. The portfolio of solutions that emerges will mean that every company must become a Real-time AI company in order to compete and thrive. When time-sensitive, contextual, and low-latency response times are critical to business and consumer success, there will be no other option than Generative AI solutions delivered in real-time.

While professionals across industries are scrambling to understand how Generative AI can help their organizations to enter new markets and disrupt existing ones, their service providers–big and small–are likely to have an increasingly important role to play with regulatory compliance. Will your infrastructure as a service provider be a help or a hindrance to your organization’s ability to thrive in the era of widespread, and regulated real-time AI?

Steven Tiell is a nonresident senior fellow with the Atlantic Council’s GeoTech Center. He is a strategy executive with wide technology expertise and particular depth in data ethics and responsible innovation for artificial intelligence.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post The regulators are coming for your AI appeared first on Atlantic Council.

]]>
Why US technology multinationals are looking to Africa for AI and other emerging technologies: Scaling tropical-tolerant R&D innovations https://www.atlanticcouncil.org/blogs/geotech-cues/why-us-technology-multinationals-are-looking-to-africa-for-ai-and-other-emerging-technologies/ Thu, 27 Apr 2023 17:46:59 +0000 https://www.atlanticcouncil.org/?p=632366 The African continent is emerging as a crucial player in the drive for innovation as technology continues to transform every industry. Due to its potential as a center for ground-breaking research and development (R&D) in artificial intelligence (AI) and other emerging technologies, US technology corporations are increasingly focusing on Africa.

The post Why US technology multinationals are looking to Africa for AI and other emerging technologies: Scaling tropical-tolerant R&D innovations appeared first on Atlantic Council.

]]>
The African continent is emerging as a crucial player in the drive for innovation as technology continues to transform every industry. Due to its potential as a center for ground-breaking research and development (R&D) in artificial intelligence (AI) and other emerging technologies, US technology corporations are increasingly focusing on Africa. But beyond its tech talent, what draws these tech juggernauts to Africa? Is it the 2.5 billion consumers who will exist by 2050 or is it because by 2050, Africa will have the youngest population? The following will examine and describe the opportunities and challenges in AI and emerging technology, with a focus on how Africa’s distinctively diversified and tropical ecosystems offer an unrivaled potential for scaling up R&D breakthroughs that can resist extreme weather conditions.

The intersection of tropical-tolerant research and demographic growth

US technology multinationals are increasingly looking to Africa for AI and other emerging technologies for a number of reasons. First, technology corporations such as IBM, Google, and others have created R&D labs in Africa (often run by African diaspora professionals). Second, building a base in Africa gives technology corporations access to innovative ideas, cutting edge startups, AI researchers and more. Additionally, the opportunity to scale R&D innovations attuned to the needs of the region is particularly important since it has a fast-growing youth population. One in five people on the planet will be African in thirty years, so if a business wants to be first to market, a local African presence is imperative.

One key reason is the opportunity to scale R&D innovations attuned to the needs of the region. Google opened its first African AI lab in Ghana. Why did Google do this? A few technology corporations, like IBM, Google, and others, have created R&D labs in Africa as they begin to understand the landscape of African research and innovation. Due to their knowledge and experience on both the local and global scales, some have hired African diaspora professionals to run these labs. IBM maintains research facilities in Kenya and South Africa, while Google operates an AI facility in Ghana. Why did they decide to reside there? They undoubtedly want to find out what fresh ideas startups, AI researchers, and other organizations are working on, as well as new trends that can be made into products. This kind of foresight is wise for business, but it will be necessary moving forward. One in five people on the planet will be African in thirty years, so if a business wants to be first to market, a local Africa presence will be essential to cater to this growing demographic market.

Utilizing AI and emerging technologies in healthcare and medicine

The tropical regions of Africa are a hotbed for many emerging diseases that pose a threat to global health. These regions also have a high incidence of poverty, which limits access to quality healthcare. As a result, there is a great need for new medical technologies that can be used to prevent, diagnose, and treat existing and upcoming diseases. AI and other emerging technologies have the potential to transform healthcare in Africa by providing early detection of disease outbreaks, developing more effective treatments, and improving access to quality care. Additionally, these technologies can help reduce the cost of healthcare delivery by automating tasks and improving efficiency.

US technology multinationals are investing in AI and other emerging technologies because they recognize the potential impact these technologies can have on global health. By commercializing and scaling R&D innovations from Africa, the private sector, technology multinationals, and academic institutions are partnering to improve the lives of millions of people across the continent and other frontier markets. Zindi, a start-up based in Cape Town, South Africa, called upon African data scientists to develop solutions to address the COVID-19 crisis when it was at its peak. Similarly, Christian Happi—a Professor of Molecular Biology and Genomics, as well as director of the ACEGID at Redeemer’s University in Nigeria—has assembled a team of data scientists who are utilizing AI and other advanced technologies to sequence the SARS-CoV-2 virus.

The opportunity for US technology multinationals in Africa

Due to the continent’s variety of entrepreneurial ecosystems, particularly tech incubators, accelerators, and co-working spaces, US technology multinationals are turning to Africa and other frontier markets as a proving ground to test AI and other emerging technologies solutions. These entrepreneurial ecosystems are starting to serve as a testing ground for new ideas that, for a variety of reasons, such as a lack of base-load power or the high cost of Internet data, would not take off in a developed market. The continent also has a growing population of young people who are embracing digital technologies, which presents a significant opportunity for companies that are looking to expand their customer base and broaden their workforce.

African critical minerals used in emerging technologies

Several critical minerals, such as cobalt, lithium, graphite, platinum metals, etc., serve as essential materials for everyday tech such as consumer electronic products—thus far an untapped market for powering the global emerging technology ecosystem. Additionally, Africa’s natural resources also offer the opportunity to make the African continent a leading player in the global green transition to electric vehicles that run on batteries. This is a market that China has largely cornered, however, due to African governments realizing the potential to generate more revenue from these critical minerals, African countries have recently started to ban unprocessed raw commodities being used in emerging technologies. For instance, Zimbabwe recently instituted a raw lithium ban and other African countries are starting to realize the opportunity to navigate geopolitical competition between the world’s industrial powers to capitalize on their critical minerals for their own development–foreign direct investment, value-addition, and increased job creation.

The challenge of scaling R&D innovations in Africa

The challenge of scaling R&D innovations is that they require significant investments of time and money to bring to market, and these investments are often riskier than traditional businesses. For US technology multinationals, the opportunity to scale their R&D investments in Africa is an attractive proposition. The continent has a vast population with a growing middle class, and its resources are largely untapped. However, doing business in Africa comes with its own set of challenges, including infrastructure constraints and political instability. Nevertheless, for companies willing to invest in the continent, the rewards could be significant.

There are significant challenges associated with scaling R&D innovations in Africa, including:

Infrastructure: Many African countries do not have the basic infrastructure required to support large-scale R&D operations. Challenges include unreliable base-load power, telecommunications, and transportation.
Skilled labor: In many African countries, education levels are low and there is a lack of trained personnel who can work in R&D facilities.
Political instability: There are political risks associated with doing business in Africa. These risks include instability, corruption, and government interference.

The benefits of an Afro-centric R&D innovation strategy

There are many benefits to pursuing an Afro-centric R&D innovation strategy, including the ability to scale innovations more effectively across frontier and emerging markets. By focusing on developing technologies that can be adapted to work in tropical climates, US technology multinationals can gain a first mover advantage in the African market and tap into a vast untapped customer base. Additionally, this strategy can help build long-term relationships with local partners and suppliers, which is essential for successful business operations in Africa. The natural environment in Africa, which includes semi-arid temperatures, deserts, and tropical climates, can also be a suitable testing ground for innovations that could thrive in developed markets. Moreover, US companies can position themselves as global leaders in the race to develop impactful innovations for Africa by investing in R&D of technologies relevant to the continent’s needs. Finally, by 2030 African youth are expected to constitute forty two percent of the global youth population. This is an enormous demographic who will be tech savvy, ambitious, and hungry for economic opportunity.

Conclusion

US technology multinationals have recognized the potential for scaling up Afro-centric R&D innovations in Africa. With access to a large and growing digitized population, as well as an abundance of data resources untapped for AI, this continent offers enormous opportunities for responsibly advancing AI and other emerging technologies. By leveraging local knowledge and expertise, US technology companies can develop new products and services designed specifically for African markets while also contributing to the development of innovative solutions applicable globally in emerging and developed markets.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Why US technology multinationals are looking to Africa for AI and other emerging technologies: Scaling tropical-tolerant R&D innovations appeared first on Atlantic Council.

]]>
AI generates new policy risk https://www.atlanticcouncil.org/blogs/geotech-cues/ai-generates-new-policy-risk/ Wed, 25 Jan 2023 15:44:20 +0000 https://www.atlanticcouncil.org/?p=605119 New AI tools will have a huge impact on how people work and live and can support innovation and productivity across sectors. But it is also important to be mindful of its potential for misuse. Governments, policymakers, and other stakeholders must be proactive to ensure that these tools are not exploited to cause harm.

The post AI generates new policy risk appeared first on Atlantic Council.

]]>
Over the last few months, there has been a surge of interest in artificial intelligence (AI) as a plethora of new tools have been released. Social media feeds have been awash with generative art produced by DALL.E 2 and Shakespearean sonnets written by ChatGPT. But these tools were not developed overnight. 

In 2015, Elon Musk, Sam Altman, and other investors including Reid Hoffman and Peter Thiel created OpenAI – a research company that would make its AI research open to the public. A major paper was published in 2018 which led to the first version of Generative Pre-trained Transformer (GPT) software. These language models are based around text prediction – a prompt is inserted, and the algorithm generates what it thinks should come next having been trained on a massive data set. GPT-1 was released in 2018, GPT-2 in 2019, and GPT-3 in 2020. GPT-4 is due to be released later this year and is expected to be an even bigger leap forward. OpenAI is reportedly in talks with investors that would value the company at $29bn, making it one of the most valuable startups in the world.

There are good reasons for the excitement. GPT-3 and similar models (Google, Facebook, and others all have teams working on similar projects) are incredibly powerful and are being used in increasingly creative ways. 

For instance, ChatGPT was released by OpenAI in November 2022 and enables users to generate paragraphs of text or code in a conversational style. The tool went viral with rapid user adoption: it took Netflix 3.5 years to reach 1m users, Facebook 10 months, the iPhone 74 days, and ChatGPT just 5 days. This advance has also led to the creation of new word processors like Lex, which integrates this software and generates text based on what has been written previously, as well as tools like Feather AI, which sends summaries of podcasts or videos to your inbox. 

This ability to extrapolate text is encroaching into the search market. It has been reported that Microsoft (an investor in OpenAI) is embedding ChatGPT into Bing, its search engine, which has put Google on red alert. But there are also more bespoke search engines being produced – for example, Metaphor is a general search engine designed to create links rather than text; Elicit is designed for academic research and provides summaries of research papers; and PubMed has been developed for biomedicine. 

Beyond text, tools like DALL.E 2, another application from OpenAI, as well as Stable Diffusion, Hugging Face, and Runway are focussed on generating both images and video. 

The applications that these tools are enabling are also of interest. For instance, automating email replies; creating presentations from text prompts; or writing or debugging code. Those building blocks are enabling even more creative outputs, like computer games, animations, and music, while companies like Cradle Bio are already exploring how this technology can be leveraged to improve scientific research, in their case with respect to proteins. 

Some of these tools are also inevitably being used in ways that are more problematic. Like generating clickbait New York Times-style articles from Twitter posts – ‘GPT Times’, creating synthetic reality, and for cybercrime

All these applications have already been, or are in the process of being, built with existing technology. GPT-4, the next iteration of OpenAI’s model, is expected to be released later this year, with a similar step change in functionality. But even from where we are now, it’s easy to start extrapolating some implications.

For one thing, creative jobs are going to look very different, while these AI tools are going to augment most of what we do online – an ‘autocomplete for everything’. But it will also become far more difficult to differentiate between what is authentic online and what’s not, and tools will be used for nefarious ends, including imitation, scams, and hacks. 

The second order implications are more difficult to predict but will impact how we work, how politics and campaigning operate, how our institutions function, and what issues and resources are fought over by nation states. And beyond core issues around ‘AI safety’, these are the sorts of issues that policymakers are going to have to grapple with, and in some cases, try to regulate. To take a few examples:

  1. If it is possible to replicate the voice and face of someone in real time, what does that mean for security, or the tools built to do Know Your Customer (KYC) checks using biometric data?
  2. How is copyright going to work? There are already issues with the models being used to train these AI models drawing on artists’ work, without them being compensated. The lawsuits are already starting. But what happens when it is possible to ‘create’ a song in the style of Taylor Swift recorded in Abbey Road Studios in less than a minute?
  3. Who is going to control the rents from these new and potentially vast markets, and what are the implications for inequality, as well as competition/anti-trust policy? 
  4. How will AI tools disrupt education systems beyond just automated essay writing – how can they be harnessed for delivering more tailored teaching, and how will the sort of education we need change as a result of an economy with these tools embedded? 
  5. Content moderation and misinformation are going to become even more complicated. While tools like ChatGPT return answers to prompts that appear as if they are truthful, in practice, and at present, they are largely not (see this paper for details). And they have also been found to include gender and race biases too. 
  6. Our security systems are going to be challenged. If it is going to be possible to ask GPT-4 to find people that work in a particular building that might be open to manipulation, then it is going to present profound challenges for the security services. 
  7. What systems should be put in place to ensure that the models themselves are robust and resistant to cyber-attacks? It will be important to ensure that there is confidence in the robustness of a model that is being deployed as the autocomplete for everything, perhaps leveraging tools like Advai
  8. Political campaigning will also become more of a science with rapid automated testing of arguments and narratives, and customized messaging based on individual characteristics. How will this application be regulated and managed to avoid abuse? 
  9. AI is already an increasingly important part of warfare. Companies like Anduril, Modern Intelligence, Shield AI, Helsing, and SAAB are all building in this space, including next-generation autonomous weapons, while companies like Palantir and Recorded Future are supporting Ukraine on the front line.
  10. The battle for control of semiconductor chips will only intensify, as more of the world becomes dependent on these models (and the chips that enable the models to run). Future control of the internet, and the people that spend increasing amounts of time on it, will depend on compute power and the hardware that enables it.

Policymakers, to their credit, have been thinking about these issues for many years. But the viral nature of the latest tools, and the potential power of GPT-4 and its successors, have added new urgency to these challenges. Existing government reports and strategies are a good starting point, including the UK’s National AI strategy; the EU’s AI Act (handy slide summary here); the US White House’s Blueprint for an AI Bill of Rights, and even NATO’s AI strategy. But these documents are just catching up with the status quo or setting out principles upon which future work can build, as opposed to being hard coded legislation (the EU’s plans are the most serious to date).

There is a very difficult balance to strike. These new AI tools will have a huge impact on how people work and live, and it is surely right to embrace this technology as a powerful primitive that will support innovation and productivity in a wide variety of sectors. But it is also going to be important to be mindful of its potential for misuse. Governments, policymakers, and other stakeholders must be proactive to ensure that these tools are not exploited for the wrong reasons. 

Whatever one’s views on the technology, the genie is out of the bottle. Everyone should prepare to spend far more of their time thinking about AI in the future. 

Jonno Evans OBE was Private Secretary to two Prime Ministers in 10 Downing Street as well as a British diplomat in Washington DC. He now advises technology companies at Epsilon Advisory Partners.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post AI generates new policy risk appeared first on Atlantic Council.

]]>
Beyond CHIPS: Prioritizing standardization is critical for US competitiveness https://www.atlanticcouncil.org/blogs/geotech-cues/beyond-chips-prioritizing-standardization-is-critical-for-u-s-competitiveness/ Mon, 22 Aug 2022 11:00:00 +0000 https://www.atlanticcouncil.org/?p=558019 The CHIPS and Science Act, signed into law by President Biden on August 9, 2022, sends a strong message in support of a market-led standards system to bolster domestic technology innovation and competitiveness. In addition to nearly $53 billion in funding to encourage domestic manufacture of semiconductor chips, the CHIPS Act includes some $11 billion for […]

The post Beyond CHIPS: Prioritizing standardization is critical for US competitiveness appeared first on Atlantic Council.

]]>
The CHIPS and Science Act, signed into law by President Biden on August 9, 2022, sends a strong message in support of a market-led standards system to bolster domestic technology innovation and competitiveness. In addition to nearly $53 billion in funding to encourage domestic manufacture of semiconductor chips, the CHIPS Act includes some $11 billion for scientific research to maintain the United States’ technological edge in the global economy. Section 10245, “International Standards Development,” aims to boost the US’ edge by focusing on US engagement in international standards development and is especially relevant given the geopolitical complexities that have recently arisen around technology standards. In recognizing that global standards enable faster commercialization of emerging technologies and enhance the ability of US innovators to access global markets, the CHIPS Act takes a key step towards boosting US standards infrastructure.

When China released its China Standards 2035 strategy in 2021, many diplomats in the United States and the West sounded alarm bells, citing concerns that the plan signaled an imminent Chinese effort to bias standards development organizations (SDOs) in their favor at the expense of existing leaders in international standardization (notably, the US and Germany). Though the China Standards 2035 plan has not destabilized the international standards ecosystem as some feared, it has sparked renewed interest in the standards space.

The most recent example of this trend can be seen in the European Standardization Strategy, which lays out a “new approach to enable global leadership of EU standards.” While the Strategy clearly demonstrates that standards are a priority for the EU, it also states a preference for European standards and foresees limiting foreign participation in the development of these standards. Many standards experts are concerned that this strategy will lead to the exclusion of representatives of non-EU-headquartered companies in global standards development, particularly in areas where standards work moves from the European Committee for Electrotechnical Standardization (CEN/CENELEC) to the International Standards Organization (ISO) and International Electrotechnical Commission (IEC), or where joint work takes place.

Government-sponsored efforts to limit foreign participation in standards development activities may lead to the balkanization of standards and far-reaching economic impacts on multinational technology companies. Given these threats and the fact that international standards-setting is increasingly an arena for geopolitical competition, the United States risks losing ground with respect to technology leadership by delaying action on policies that support domestic innovation and US participation in standards activities.

Section 10245 of the CHIPS Act will help to bolster the United States’ position in the international standards-setting arena. The legislation highlights the importance of the Department of Commerce, National Institute of Standards and Technology (NIST)’s leadership role in coordinating federal participation in standards related to critical technologies, and it will support NIST in partnering with the private sector to enhance US standards leadership and capacity to participate effectively in the development of standards. The bill has been received positively by the SDO world and by private stakeholders eager to take advantage of the new support for private-sector innovation.

It is important to note, however, that the Act will not have immediate effect on standards-setting—it still has to cross the authorization-to-appropriation funding hurdle. While the Act authorizes funding for the full range of NIST research and standards priorities, NIST’s ability to deliver on these priorities will depend to a great extent on the timely appropriation of funds during the upcoming fiscal year 2023, which begins October 1, 2022.

In addition, although the new bill contains important provisions in support of standardization, there is still significant room for improvement in overall US standards policy. Improving federal coordination and engagement should be a top priority, along with expanded training and education programs to support effective participation in standards activities. Both of these activities should be undertaken in partnership with the private sector and should leverage both public- and private-sector resources. In addition, support for small business and other stakeholder participation in standards activities is valuable, recognizing that in some new technology areas, companies—both large and small—may be more focused on advancing the technology and protecting their innovative ideas than on standards work early in the technology life cycle. Federal experts may need to assume a greater portion of the workload early in the cycle, streamlining the inclusion of US technology in standards.

The CHIPS Act constitutes a necessary first step in reinforcing the American private-sector-led standards system amid rising geopolitical tensions, but opportunities still abound for the federal government to support and engage with standards-setting stakeholders. In order to maintain the competitiveness of the US technology sector, policymakers will need to continue to develop new standards policies that support innovation, build public-private relationships, and strengthen SDOs.

Mary Saunders is Vice President, Government Relations and Public Policy at the American National Standards Institute (ANSI), where she serves as a liaison between ANSI and federal, state, and local government agencies and Congressional staff. She is also a Nonresident Senior Fellow with the Atlantic Council GeoTech Center.

Giulia Neaher is an Assistant Director at the Atlantic Council GeoTech Center, where she contributes to analyses and convenings related to technology standards, artificial intelligence, and data policy.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Beyond CHIPS: Prioritizing standardization is critical for US competitiveness appeared first on Atlantic Council.

]]>
Health challenges are intimately linked to climate change. How will we prepare? https://www.atlanticcouncil.org/blogs/geotech-cues/health-challenges-are-intimately-linked-to-climate-change-how-will-we-prepare/ Fri, 22 Jul 2022 22:31:06 +0000 https://www.atlanticcouncil.org/?p=548665 Introduction In 2018, a report from the Lancet Countdown firmly established that rising temperatures and extreme weather events are accelerating health risks all over the world; the 2021 report from the same group described the situation as “code red.” Fortunately, accelerating innovation in technology is delivering the opportunity to radically transform the future of health—while […]

The post Health challenges are intimately linked to climate change. How will we prepare? appeared first on Atlantic Council.

]]>

Introduction

In 2018, a report from the Lancet Countdown firmly established that rising temperatures and extreme weather events are accelerating health risks all over the world; the 2021 report from the same group described the situation as “code red.” Fortunately, accelerating innovation in technology is delivering the opportunity to radically transform the future of health—while addressing environmental sustainability, inequities, and more. As emphasized by the Lancet Countdown, “the form and pace of the world’s response to climate change will shape the health of nations for centuries to come.”

Digitizing the health impacts of climate change

Health-impacting events arising from climate change will take a variety of forms, will affect species beyond humans, and will change in frequency, severity, and location over time. It is therefore critical to support technological innovation focused on digitization and decentralization, so that individuals and local communities—no matter how well or poorly resourced—are empowered to monitor, test, treat, contain, and prevent a dynamic spectrum of health threats.

To identify meaningful digital biomarkers of climate-driven health impacts, strong digital infrastructure should rest on a mindset shift toward “connectedness” that seeks to harness the variety of information embedded in the environment, humans, and animals. Health “connectedness” is an acknowledgment of the dense and interconnected networks by which materials and signals are exchanged between the human body and the world around it.

One well-established connectedness perspective is the One Health framework, a collaborative, interdisciplinary, and intersectional set of approaches that have been in development for 200+ years. One Health spans the fields of basic research, human public health, veterinary work, agricultural health, and environmental monitoring and mitigation. Infectious diseases (including zoonoses, which are diseases that have made the evolutionary jump from an animal species to humans), antimicrobial resistance, and food security—all of which are influenced by climate change—are also encompassed by the One Health umbrella. Crucially, One Health integrates multistakeholder knowledge for local, national, and global levels of policy.

For example, a connectedness perspective would have predicted that intensive use of antibiotics in food animals would lead to the rise of antimicrobial resistance genes in the bacteria living in those animals. Those bacteria and genes would enter wastewater and be flushed into soil and other bodies of water, spreading the antimicrobial resistance to other microbes, other plant and animal hosts, and eventually to humans through our food and environment.

Each point in this network contains signals about the presence and dynamics of the spread of antimicrobial resistance. These signals can be digitized through clinical data in electronic health records as well as through biotechnologies such as DNA sequencing and other -omics technologies. If monitoring approaches focus on antimicrobial resistance in human patients alone, then potentially informative signals in places like barns, food processing plants, and sewers are overlooked. An even broader perspective would capture indirect signals with health relevance, such as changes in rainfall and temperature that would indicate a forthcoming geographical redistribution of agricultural sites and therefore new routes of transmission of antimicrobial resistance.

Machine learning and artificial intelligence are delivering deeper insights for epidemiology and public health from complex data sources. These computing approaches will also be increasingly useful for forecasting changes in public health due to climate change and modeling the impacts of potential interventions in silico before their deployment. Publicly available and interoperable data—across a broad range of human and environmental signals—will be key to both public and private efforts to harness artificial intelligence and machine learning for this nascent innovation space. To achieve these goals, it will also be important to bridge the data divides between sectors and societies.

Overall, in the coming years, innovators, policymakers, and funders should be pursuing digitized and decentralized approaches to combat the health effects of extreme heat/cold, storms, drought/flooding, pollution, fire, malnutrition, infectious diseases, and more. This strategy crucially empowers citizens and organizations to act as on-the-ground “sensors”, delivering key insights into the “jobs to be done” as well as potential solutions suitable for the local context.

Dangers affiliated with shifting climate zones

Adequate digitization of environmental, human, and non-human signals (as has been proposed for COVID-19)—plus subsequent monitoring, data integration, and evaluation—could underlie critical early warning systems for global and local health. Shifting climate zones will have large impacts on food and water security, the livability of our cities, infectious diseases, pollution, and other crucial elements of life. Fortunately, many of these elements are being targeted by climate adaptation strategies. Nonetheless, much more work is needed and many opportunities are still being missed due to a lack of resources, focus, and political and popular will.

Infectious disease is an important example of the impact of shifting climate zones. The global rise of temperatures is leading to expanding tropical zones, habitats in which disease-carrying mosquitos thrive. Such mosquitoes often spread diseases like malaria, zika, dengue, and Chikungunya, all of which are dangerous and in some cases fatal to humans. Other disease vectors are also expanding their ranges. Regions that have never faced the threats associated with tropical climates will begin to; other regions may eventually become too hot for these vectors and pathogens.

Climate change will also increase the danger of zoonotic events by shifting wildlife-livestock-human interfaces, where people come into close contact with wild and domesticated animals. These interfaces occur where cities encroach on former wild spaces, in areas where rural agriculture is dominant, and where humans are interacting with and/or eating non-domesticated species. The importance of these interfaces to global human and economic health has been illustrated all too effectively by the COVID-19 pandemic. Development also causes these interfaces to shift over time; climate change is likely to become a major accelerant of human migration and construction. Therefore, the local and geopolitical decisions that we make today will critically impact the future health of citizens around the world.

Even seemingly mild temperature increases can combine with other environmental stressors such as agricultural pollution to drive events like harmful algal blooms (HABs), which can directly endanger the health of humans, pets, livestock, and other animals and plants. Although the number of recorded HABs has increased in recent decades, it is difficult to disentangle the effects of climate change from the effects of increased ecological monitoring. Nonetheless, the danger of paralytic or even fatal poisoning by toxins in shellfish harvested from a HAB persists for wild fishing as well as aquaculture, with potential negative socioeconomic effects and mass plant and/or animal mortality.

Encouragingly, diverse signals have been evaluated and, increasingly, integrated to monitor HABs and their impact on health. In the United States, the CDC has established a One Health framework for voluntary reporting of HABs and associated cases of human and animal illness. An open-access database of global marine biodiversity (including DNA and ecological data) and a UNESCO database of harmful algal events provide further signals. Additional alarms were sounded after the catastrophic death of 1,100 Florida manatees in 2021 (out of an estimated population of 6,000), many of which appear to have starved due to the devastation of Florida’s seagrass, their primary food source, by HABs, as revealed by aerial imagery. Nonetheless, legislation intended to act on subsequent recommendations of Florida’s Blue-Green Algae Task Force died in appropriations.

This case study of HABs illustrates the integration of a diversity of data types and sources to surface links between changing climate and changing health threats. It reminds us that data are necessary but not sufficient to prompt action. This case study also highlights the urgency of expanding the amount and types of publicly available and interoperable data that can prepare health systems for emerging threats and inform policymakers about effective interventions.

Overall, local and central governments should be preparing today for shifting disease burdens over short-, medium-, and long-term horizons. Regulation, monitoring (particularly through digital markers), testing, treatment, containment, and vaccination are all weapons in our arsenal, but they require financing, R&D, manufacturing, and distribution. Perhaps most importantly, they require political commitment—from policymakers and citizens—to gathering a diversity of data types, making them available, and acting on insights from those data.

Looking Forward

Evolution is a relentless and merciless experimenter—and infectious disease is certainly not the only threat to human health arising from climate change. If we draw a laser focus on coronaviruses today, we are almost certainly ignoring the next, potentially preventable human-health disaster to come. In contrast, by committing to investment in people-centered, forward-looking technologies, systems, and mindsets, we have the chance to safeguard our health and our planet against tomorrow’s challenges, be they natural or man-made. Critically, on a global level, the IPCC predicts that water security and food security will be stressed—perhaps even compromised—as well, further exacerbating human health challenges.

The “connected” mindset of the One Health framework—the ability to recognize the value of signals that are not obviously immediately connected to human health—is a crucial element of this investment. How can citizens be empowered to act as detectors, innovators, and agents of change on the front lines of climate change and health? Through reliable digital infrastructure, rigorous data analysis and transparency, public storytelling, and education. Curricula from primary school through higher and professional education (for example in medical and veterinary schools) could serve as common touchpoints for this mindset shift across society.

Platform approaches to innovative solutions are especially powerful because they can be repurposed to a variety of challenges; note that work began on the basic research underlying mRNA vaccines decades before COVID-19. Similarly, developing and resourcing approaches to supply chain resilience, decentralized and nimble production, and physical and digital monitoring will empower citizens and governments to build better for the future of health in the face of climate change—for infectious disease and beyond.

Opinions expressed by Non Resident (Senior) Fellows do not necessarily reflect the opinion of the Atlantic Council GeoTech Center.

The post Health challenges are intimately linked to climate change. How will we prepare? appeared first on Atlantic Council.

]]>
The next phase of US-China economic and technological decoupling https://www.atlanticcouncil.org/blogs/geotech-cues/the-next-phase-of-us-china-economic-and-tech-decoupling/ Fri, 17 Jun 2022 19:04:46 +0000 https://www.atlanticcouncil.org/?p=538666 The Rebuttable Presumption: President Joe Biden signed the UFLPA into law in December 2021, and enforcement of the Act begins on June 21st, 2022. The Act bans the import of goods or commodities from China produced with forced labor through a “rebuttable presumption,” which states that all goods produced in Xinjiang and/or supply chains connected […]

The post The next phase of US-China economic and technological decoupling appeared first on Atlantic Council.

]]>

The Rebuttable Presumption:

President Joe Biden signed the UFLPA into law in December 2021, and enforcement of the Act begins on June 21st, 2022. The Act bans the import of goods or commodities from China produced with forced labor through a “rebuttable presumption,” which states that all goods produced in Xinjiang and/or supply chains connected to Xinjiang are presumed to have used forced labor and are therefore banned from importation into the United States. In other words, the Act’s rebuttable presumption assumes that all US importers with supply chains connected to Xinjiang in any way ultimately used forced labor unless the importer can prove their supply chains are free from forced labor via a complicated, costly public review process

The impact of global supply chains:

Billions of dollars’ worth of raw materials, minerals, and products are exported from Xinjiang each year, including 40% of the global production of polysilicon (a critical material for solar energy production), 20% of the world’s cotton, 20% of calcium carbide, and 5% of global aluminum production. By banning all of these items from import into the US, the Act will further decouple the American and Chinese economies by forcing multinational companies operating in the United States to source the same materials from other countries, likely at higher prices. This will pose significant challenges to already fragile global supply chains for green energy products, rare earth minerals, food items, and pharmaceutical precursors. 

A new tool for economic statecraft:

Policymakers and legislators may consider broader utilization of the rebuttable presumption for national security authorities and legislation. For example, the rebuttable presumption fits a missing regulatory and enforcement gap in existing authorities for export controls. Currently, the US government struggles to enforce export control regulations at scale due to the “knowledge requirement,” which states that companies must “know” they are exporting controlled technology to military end-users in China, Russia, and other high-risk jurisdictions. Without such “knowledge,” companies may avoid civil and criminal penalties by simply pleading ignorance. 

The UFLPA’s rebuttable presumption offers a unique tool for policymakers and legislators to close this export control loophole by simplifying the regulation to state any entity in China is presumed to support military modernization if they meet a specific requirement (e.g., state-owned enterprises, etc.). If appropriately implemented, a rebuttable presumption for export controls could simplify an archaic and complicated export control system, lower compliance costs for industry, and further protect critical dual-use and emerging US technology. 

Opinions expressed by non-resident (senior) fellows do not necessarily reflect the opinion of the Atlantic Council GeoTech Center.

The post The next phase of US-China economic and technological decoupling appeared first on Atlantic Council.

]]>
At the nexus of technology and security: Biometrics at the border https://www.atlanticcouncil.org/blogs/geotech-cues/at-the-nexus-of-technology-and-security-biometrics-at-the-border/ Tue, 15 Feb 2022 20:39:38 +0000 https://www.atlanticcouncil.org/?p=485572 In November 2020, Customs and Border Protection (CBP) published a proposed rule to expand biometric processing to all non-US citizens and remove port limitations on the use of biometrics in the exit environment. The proposal has drawn a flurry of comments, both positive and negative with multiple privacy and immigrant-advocacy organizations raising objections to the continuation of CBP’s use of facial biometrics.

The post At the nexus of technology and security: Biometrics at the border appeared first on Atlantic Council.

]]>
This page provides an excerpt from Forward Defense’s latest issue brief, sponsored by SAIC, which provides an overview of the opportunities and challenges to employing biometric technology at US ports of entry. To read the full paper, please visit here.

Introduction

Under the traditional [travel] system, travelers boarding a plane departing the United States show their passports to airline personnel, who then look at them and electronically scan the documents before allowing the travelers onboard. TVS [Traveler Verification Service] automates that process by, instead, taking a digital photo of the traveler before boarding and using a high-performing facial-recognition algorithm to instantaneously compare it to a database of existing passport or visa photos of all travelers on that flight’s manifest. In some airports (those where airlines employ an “e-gate”), if a traveler’s photo matches, the boarding gate opens automatically. In others, the traveler gets the green light from a totem camera, which signals the traveler to walk onto the plane. Either way, the identity comparison and verification are automated and instantaneous—and some airlines have chosen to expedite things even further by using the TVS process not only to obviate the need for manual passport checks, but to do the same for boarding passes. If the photo does not match, however, things revert to the old system, with CBP or airline personnel performing a manual identity check of the traveler’s passport. For US citizens, the TVS process is entirely voluntary; they can always choose to have their passports reviewed manually, in the old-school style

For travelers entering the United States, CBP [Customs and Border Protection] utilizes “Simplified Arrival,” a primary processing application that leverages the TVS facial-comparison system.

Concerns Raised about the Use of Biometrics at the Border

In November 2020, CBP published a proposed rule to expand biometric processing to all non-US citizens and remove port limitations on the use of biometrics in the exit environment.[1] The proposal has drawn a flurry of comments, both pro and con, and the Joseph Biden administration—after extending the comment period to March 2021—is still considering whether to issue a final rule.[2] A number of privacy and immigrant-advocacy organizations—including the Electronic Privacy Information Center (EPIC), the Center for Democracy and Technology (CDT), the American Civil Liberties Union (ACLU), and others—have raised objections to the continuation of CBP’s use of facial biometrics. [3]

1.      Fear of a Surveillance State

The broadest objection is that facial recognition is an “inherently dangerous technology,” and that CBP’s use of it could be the beginning of a slippery slope that could lead to more generalized tracking of both Americans and non-US persons, not only at the borders, but also within the United States—raising the specter of a Minority Report-style surveillance state.[4] Some also say that the use of such technology at the border dangerously singles out immigrants, given that most non-US persons cannot opt out of CBP using it to process their entry.[5] Photos of in-scope non-US travelers are enrolled and retained in IDENT for up to seventy-five years. (CBP deletes its copies of all photos, within twelve hours for US citizens and fourteen days for all others.)  Objectors express the fear that such images might be shared with US or foreign law enforcement.

These are serious concerns, but CBP is utilizing this technology in relation to crossings of the US border, where the US Supreme Court has consistently recognized that “the Government’s interest in preventing the entry of unwanted persons and effects is at its zenith,” that the government has “plenary power to make rules for the admission of aliens,” and that CBP has broad authority under the Fourth Amendment to search and question all seeking admission or return to the United States.[6] Moreover, DHS has been collecting biometrics—both fingerprints and photographs—from non-US persons for many years through the US-VISIT system. The State Department already issues passports to US citizens and machine-readable visas for non-US citizens, both of which now include biometric photographs. And, all federal law-enforcement agencies, including CBP, regularly cooperate with foreign, state, local, and tribal authorities by sharing biographic and biometric data on individuals—including photos—where there is good cause and it is permitted by law. Fundamentally, the use of TVS does not change or add much to the information already possessed by the government. It takes one additional photo and compares it to information that already exists in government databases, all pursuant to a long-standing congressional mandate and consistent with broad border authorities recognized by the Supreme Court for more than a century.

The question of whether facial comparison is an “inherently dangerous” technology is a debatable one—especially given its ubiquity (look at your iPhone or Android). But, its use by repressive, authoritarian regimes demonstrates the risks, so careful safeguards governing how CBP uses facial-comparison technology or shares images are clearly appropriate—and many already exist. As required by law, CBP has published a Privacy Impact Assessment discussing the program in great detail, and it has provided notice of how it shares data in the various System of Records Notices (SORNs) it also publishes, as well as in the proposed rule.[7] Additionally, CBP provides notice to travelers through message boards or signs, as well as verbal announcements in some cases, to inform the public that CBP or a stakeholder will be taking photos for identity-verification purposes. In addition to CBP’s own internal oversight and officer-training protocols, DHS also provides oversight through its Offices of Civil Rights and Civil Liberties (CRCL) and Privacy, as does the Privacy and Civil Liberties Oversight Board (PCLOB). That said, more safeguards could and should be put in place. In 2020, the Biometrics Subcommittee of the Homeland Security Advisory Council (HSAC) issued a report analyzing DHS biometrics programs and recommending the creation of a DHS Biometrics Oversight and Coordination Council (BOCC), chaired by the DHS deputy secretary, as well as empowering the DHS Office of Strategy, Policy and Plans to lead the development of DHS-wide policies on biometrics, including on such issues as retention and sharing.[8] The HSAC’s recommendations regarding additional oversight structures are sensible, and should be implemented.

Ideally, current limits on CBP broadening use of the technology or sharing facial-biometric data should not be waivable by executive action alone. This is an area in which congressional action can provide additional checks against the misuse of data or technology.

2.     Data Protection

Others have argued that CBP’s use of facial biometrics should be terminated because CBP will be unable to protect the biometric data from cyber hacks—citing the 2015 example of the Office of Personnel Management being unable to protect its information from Chinese exfiltration.[9] But, for the most part, this argument is not specific to CBP. Instead, it argues that the federal government should not collect personal data at all because it cannot protect it with certainty. Protecting databases from cyber hacks requires adequate resources, oversight, accountability, and expertise, but it is not an impossible task—and restricting government agencies (or private-sector entities) from collecting personal data required to perform their functions is an obvious non-starter. Strong governance and oversight are a more sensible position, and the HSAC report’s recommendation of a DHS BOCC providing strong, senior-level oversight for TVS and other DHS biometrics programs is a good one, as is the HSAC’s additional recommendation that the Cybersecurity and Infrastructure Security Agency (CISA) play a key role in the protection of data. Furthermore, nothing comes for free, so Congress needs to ensure that federal agencies have the cybersecurity resources, personnel, and authorities to do the job.

3.     Accuracy and Bias

Finally, some assert that the 98–99-percent accuracy rate for CBP’s Biometric Facial Comparison Technology is not good enough, and that—given the huge volume of travelers—many people will suffer from erroneous “no-match” determinations. But, the obvious answer to this is that, at an airport, the consequence of a no-match decision is simply that a CBP officer or airline official will need to perform an old-school manual check of the traveler’s passport or visa. This may cause a minute’s inconvenience, but automated checks that work 98 to 99 percent of the time will significantly reduce the number of travelers whose documents need a manual check. Moreover, CBP has processed more than one hundred and thirty million people through the system since 2017 and, thus far, mistaken “no-match” incidents have not arisen as a major issue. On the contrary, facial-comparison technology has proven more accurate than manual document checks, as evidenced by the more than two thousand imposters the system has enabled CBP to catch.

A related objection is that the use of some facial-recognition technology algorithms has resulted in bias against persons of color. But, the National Institute of Standards and Technology (NIST), which performed the much-reported study indicating that some facial-recognition algorithms can produce biased results, actually found that the facial-recognition algorithm specifically used by CBP for facial comparison in TVS—NEC-3 (developed by NEC Corporation)—is highly accurate.[10] As noted in the NIST study, some facial-recognition algorithms are better than others, and the bad ones are indeed more likely to produce demographically biased results.[11] But, the best ones—like the NEC-3 algorithm used by CBP—are highly accurate and do not “display a significant demographic bias.”[12] CBP officials have also told Congress that “CBP’s operational data demonstrates that there is virtually no measurable differential performance in matching based on demographic factors.”[13] CBP continues to conduct analysis, as well as monitor algorithm performance and technology enhancements, to ensure a high biometric performance.

Nevertheless, CBP is focused on this issue, as it must be, and careful oversight by existing bodies like the Office of Civil Rights and Civil Liberties, and by the new DHS BOCC recommended by the HSAC Biometrics Subcommittee, is vital here, as is full transparency to these institutions, Congress, and the general public.[14] This should provide a measure of confidence that CBP’s algorithms will continue to improve, avoid any appearance of unfair bias, and will become even more accurate over time.

Excerpt of Recommendations:

Recommendation #3: DHS should carefully consider and adopt most of the recommendations of the HSAC Biometrics Subcommittee, particularly the creation of the DHS BOCC, which should be chaired by the DHS deputy secretary. DHS should empower the DHS Office of Strategy, Policy, and Plans to lead the development of DHS-wide policies on biometrics, including on such issues as retention, sharing, and—in conjunction with the DHS Office of Civil Rights and Civil Liberties—the avoidance of unfair bias against communities of color and others.

This page provides an excerpt from Forward Defense’s latest issue brief, sponsored by SAIC, which provides an overview of the opportunities and challenges to employing biometric technology at US ports of entry. To read the full paper, please visit here.

The post At the nexus of technology and security: Biometrics at the border appeared first on Atlantic Council.

]]>
The ecosystemization of Russia’s Big Tech https://www.atlanticcouncil.org/blogs/geotech-cues/the-ecosystemization-of-russias-big-tech/ Wed, 09 Feb 2022 13:00:00 +0000 https://www.atlanticcouncil.org/?p=484125 There is an increasingly visible phenomenon within Russia's Big Tech scene: the pursuit of horizontal monopolization of the internet, or 'ecosystemization.'

The post The ecosystemization of Russia’s Big Tech appeared first on Atlantic Council.

]]>
What do Russia’s largest bank and largest internet company have in common? Sberbank, Russia’s largest bank, is a goliath state-owned enterprise that possesses a third of the country’s total banking assets. Approximately 60 percent of Russians have Sberbank accounts. On the other hand, Russia’s largest internet company, Yandex, blends Russia’s tradition of hard computer science with Silicon Valley’s cool factor. The company is privately owned and originated as a Russian language search engine. At first glance, Sberbank and Yandex occupy two separate universes. Yet they both reflect an increasingly visible phenomenon within Russia’s Big Tech scene: the pursuit of horizontal monopolization of the internet, or ‘ecosystemization.’ This ecosystemization also stunts the country’s startup scene.

Sberbank is Russia’s largest bank and has historically prioritized traditional finance. However, during the tenure of CEO and Putin ally Herman Gref, the bank’s focus changed to include much more technology and data work. Since 2016, the bank has upped the number of ‘big data initiatives,’ such as the use of mass data in automation and digitalization, from 10 to 575. In 2017, Sberbank opened the largest data processing center in Russia at Skolkovo, the country’s flagship tech incubation hub outside Moscow. The company has also trained over 35,000 employees in AI skills and competencies at its Data Academy, a department in the bank’s corporate university. Most recently, in November 2021, Sberbank unveiled its second supercomputer, called Christofari Neo.

Sberbank’s highest-profile change occurred in 2020. In a major unveiling, Mr. Gref announced a top-to-bottom restructuring: Sberbank would transform into Sber and append various nouns signifying its expanding portfolio. This included SberMarket (e-commerce), SberCloud (cloud storage), SberAuto (ride-sharing), SberPrime (entertainment subscription), SberHealth (e-health), SberLogistics (logistics) and SberFood (food delivery). Mr Gref also announced the rollout of ‘Salut,’ akin to Apple’s Siri. Salut uses vision recognition software to recognize user gestures, such as a thumbs up to indicate liking a song; Sberbank will sell the underlying AI software to other firms. The firm also plans to deploy one hundred ‘smart ATMs’ with facial and voice recognition technology across Russia’s capital.

Yandex’ expansion within the tech world was less surprising than Sberbank’s since it was founded in 2000 as a search engine. Since then, the firm has grown to encompass much more, including ride sharing, cloud storage, mail service, maps, e-commerce, food delivery, IT education, fashion and music entertainment. Yandex is now a leader in AI: Alice, the company’s Siri equivalent, controls over 77 percent of the Russian voice assistant market. Its autonomous car project, active in Moscow and the Republic of Tatarstan, is estimated at $7bn by Morgan Stanley. In November 2021, Yandex announced a joint partnership with supermarket operator Majid Al Futtaim to launch robotic delivery services in Dubai. The firm also has an open-source machine learning library, which is used by the European Organization for Nuclear Research.

As these developments show, Sberbank and Yandex have long expanded beyond their original business models to capture more segments of the internet economy; collectively, Sberbank and Yandex are present in fourteen internet submarkets. Both firms integrate big data, machine learning and artificial intelligence into their operations. Sberbank and Yandex showcase a wider phenomenon in which Russian blue chips create all-encompassing horizontal digital ecosystems; users can perform virtually every need on a single mega-platform.

While these tech giants stretch across the digital landscape, Russia’s startup scene experiences relative hardship. As of 2021, Russia had only 274 AI startups, much fewer than China’s 1,226 and the US’s 8,161. More broadly, Russia ranks tenth for share of billion-dollar startup unicorns versus the first-ranked US and second-ranked China. Moscow is the 21st largest region for tech unicorns, right behind Jacksonville, Florida and trailing Silicon Valley, New York, Beijing and London.

This startup scarcity is surprising given Russia’s strong human capital in IT and long history of hard science expertise. In 2017, the country had more job openings for STEM graduates than the OECD average, and Russia’s market openness in digitally deliverable services trade is also higher than the OECD average. While not as plentiful as in the US or China, technology incubators in the country do exist; the most successful is Skolkovo, which invested almost $190m in tech projects in 2019.

Analysts have several explanations for this phenomenon, but two are most common. First, experts point to a ‘brain drain’ out of Russia and into America, the United Kingdom, and Israel. A 2018 survey showed that almost 60 percent of Russian youth under 30 want to work abroad. Potential talent is also attracted abroad because of the global demand for computer scientists, high pay overseas, and the adverse political climate in Russia. Second, startup funding is low. Though cash from venture capital (VC) and angel investors constitute the largest source of funding for Russian startups, VCs invested only $80m annually between 2017 and 2020, compared with the UK’s $30bn in 2021 alone. While state investment has stepped up over the years, it still is relatively insignificant; a 2020 survey revealed that over 50 percent of Russian startups surveyed have not used or do not plan to use government investment funding. In addition, most private and government funders target mature or later stage startups over new companies, further limiting new growth. Another 2020 survey showed that almost two thirds of founders flagged lack of investment or other forms of support as the most salient barriers to growth.

Russian Big Tech also plays a significant but often overlooked role in shaping market conditions for startups. Firstly, Russian Big Tech has expanded into so many fields that there is limited room for startups to gain a foothold – Sberbank and Yandex are either market players or leaders in at least fourteen internet industries. While they may not necessarily fully satisfy consumer demand, they do leave less market share for startups. Secondly, Russian Big Tech companies are more resistant to destabilization from disruptive technologies since they have largely already adopted emerging technologies. Sberbank, for instance, has used big data analytics and machine learning for over half a decade. This means that potential startups cannot rely on new technologies to gain a comparative advantage, differentiate themselves or become more efficient versus entrenched leaders. As a result, startups are less likely to succeed by ‘shaking up’ Russia’s internet marketplace.

Taken together, these dynamics highlight Russia’s unique internet landscape. In it, a handful of Big Tech players populate most corners of the internet. Meanwhile, startups face numerous challenges to success. This reality should inform policymakers and business analysts about Russia’s internet landscape. For the Kremlin, it should flash officials back to a digitized version of Russia’s oligarchic nineties, though Kremlin apparatchiks may collectively breathe a sigh of relief over state-owned Sberbank’s dominant role. In other words, stakeholders across business and policy will need to factor this reality in as Russia’s internet continues to develop.

Maxwell Kushnir is a second year Master of Science in Foreign Service student at Georgetown University and former Young Global Professional with the GeoTech Center. He is interested in emerging technology, the post-Soviet space and political strategy.

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post The ecosystemization of Russia’s Big Tech appeared first on Atlantic Council.

]]>
Cybersecurity in service delivery https://www.atlanticcouncil.org/blogs/geotech-cues/cybersecurity-in-service-delivery/ Fri, 28 Jan 2022 20:35:19 +0000 https://www.atlanticcouncil.org/?p=480773 As in any era of exponential growth, the speed at which benefits are created for society is closely followed by potential threats that must be guarded against. Cyber risks pose a threat to the efficient delivery of key services and to the personal information of individual citizens. Service delivery is rapidly becoming more digital on the infrastructure front, but the range of electronic government solutions that have been deployed is even broader.

The post Cybersecurity in service delivery appeared first on Atlantic Council.

]]>

As in any era of exponential growth, the speed at which benefits are created for society is closely followed by potential threats that must be guarded against. Thus, in the Fourth Industrial Revolution, it is not counterproductive to promote eternal pessimism when it comes to progress. Rather it is a call to action that allows society to be equally innovative in mitigating risks. 

Few issues expose the natural risk and reward relationship of technology as the increasing cyber vulnerability of key infrastructure as smart city and smart service delivery solutions become more widespread in the developed and developing world. The benefits for citizens are straightforward and powerful, essentially improving quality of life through more efficient services, freeing up resources for personal and collective growth. The potential threats have taken more time to come to light, and the consequences of not taking them seriously are only recently being understood. 

A clear and timely example is the Colonial Pipeline ransomware attack. Without diving deep into the details of how it happened, it is clear that even the most advanced infrastructure, which delivers one of the most important resources for any economy, is only as reliable as its weakest component. The fact that the attack was initiated in a relatively simple way – through a compromised password of a remote access account – makes the risk all the more evident. 

Up to two decades ago, the primary concern of service delivery companies and governments was securing physical infrastructure with physical countermeasures. Now, with the widespread use of advanced digital tools, which make services more efficient and safer, the digital space is where vulnerabilities abound. Colonial Pipeline is one of the largest and most recent publicly reported cases, but one relatively small warning was the Rye Brook Dam (New York) cyberattack in 2013. Another example can be seen in 2015 when several power outages occurred in Ukraine that left hundreds of thousands without service for hours,  and it was later determined that the cause was a cyberattack through a relatively simple method of email phishing. 

The relative simplicity of these attacks not only emphasizes the need to increase readiness at all levels of service delivery, but also underlines how the potential threat points are multiplying at breakneck speed. It’s estimated that the number of IoT device connections worldwide in 2019 was approximately 10 billion, then reaching approximately 14 billion in 2021, and the number of IoT connections should increase to around 30 billion by 2025.  

Enabling internet connectivity in a wide array of items is one of the cornerstones of community-centered service delivery. As mentioned previously, it’s precisely the access to highly specialized and specific data from each community that will allow for unprecedented deployment of customized solutions, at costs that are accessible to almost all cities. The key challenge is that if these systems are not designed with cybersecurity at their core, each one of those IoT connections is a potential risk to the stability of essential services for millions of people. 

Last, but certainly not least, these fundamental cyber risks not only pose a threat to the efficient delivery of key services, but also to personal information of individual citizens. Service delivery is rapidly becoming more digital on the infrastructure front, but the range of electronic government solutions that have been deployed is even broader. Everything from tax payments to medical information is exchanged with regional and local authorities on a daily basis, and as smart city solutions become more interconnected, so will databases containing citizen data. 

Privacy

External threats to personal information are on the more aggressive side of the risk spectrum, but for many citizens, the implementation of highly digital solutions in public service delivery immediately brings up privacy concerns related to how legitimate authorities handle it. The increasing presence of IoT connected devices and machine learning algorithms at the core of key services opens the door to collecting personal information in very small increments and has already reached a point with which citizens are not comfortable. Progressively it’s less about filling out and sending a digital form with several key pieces of personal information at once, and more about thousands of sensors – and the algorithms behind them – perceiving and analyzing every single interaction, in many cases without the person knowing about it in real time. 

The risks and challenges of privacy in cutting-edge service delivery do not reside only with governments and authorities. A smart city ecosystem is composed of a very broad spectrum of public, private, and public/private providers, all of which commonly share some or all their data. The check and balances dynamic that must exist for privacy to be preserved has to evolve on two fronts: first, through proactive and modern regulations developed in higher levels of government, in order to guarantee a stable playing field across specific cities and regions; second, robust citizen engagement in the development of regulations, and more importantly, in reporting privacy concerns.

But what exactly are the concerns that citizens are bringing up? The main source of unease, because it has become a relatively common occurrence, is data leaks. This can include: passwords, account numbers, tax documents, medical information, addresses, among others. These elements come to light when external, unauthorized actors seize information. Almost equally concerning is the case of information crossing. This happens when data is collected for a specific purpose, by a specific institution, and shared, simultaneously or at a later date, with other institutions that will use it for a completely different purpose. It’s not uncommon for this to happen when the institutions come under the umbrella of a broader organization, such as an entire city, and while it might have been included in the terms and conditions, many times these agreements are not proactively explained. 

The concerns mentioned above are in the realm of data collection, location, and sharing. But there is another equally important area that causes acute unease for many citizens: surveillance. The issue even has geopolitical implications, because it is known that some countries have taken a very proactive stance on collecting information about its citizens simply to know what they are doing at all times, without any relation to service delivery. However, the connection is very real and direct, because in many cases the same technologies that are being used for improving key services can generate data that facilitates surveillance.

On the positive side, there are several measures authorities and companies can implement to mitigate these risks. First and foremost, service delivery technology must prioritize informing customers and/or citizens about when and why their information is being collected. This goes hand in hand with broader and more ambitious digital identification efforts, where citizens have a centralized hub to interact with authorities, and can monitor in which databases their information lives. Also, without revealing any sensitive information about the solution, organizations should inform end-users and regulators how the information is used to generate the final result. Any and all forms of transparency with citizens will generate important levels of goodwill towards a specific technology, and towards city managers in general. 

After ensuring maximum transparency in the collection of data, city managers should put in place detailed and strict policies of data aggregation and anonymization. Any details that could directly or indirectly allow employees of the city, or third parties, to identify a specific citizen should be erased or cloaked. In addition to this, data collection technologies should follow specific guidelines that limit their scope to only the information that is necessary to complete their designated process within the ecosystem. Extra data can not only make processes inefficient but also represents genuine risk. 

On a final note, regarding privacy protection, while there are no perfect solutions and all come with a certain amount of risk, one practice that can add a level of security is only implementing IoT that has the capacity to process the raw data locally, so the transmission to cloud servers does not put personal information at risk. 

Ethics and Machine Learning

One of the most important areas of focus that needs attention in the future is developing frameworks and policies to ensure the highest degree of objectivity and ethics in the decisions that are made using machine learning, which potentially affect millions of citizens at a time. This challenge is not unique to government service delivery – in fact, it is probably one of the top areas of study in all fields of technological advancement nowadays, and will only increase over the next decades. Algorithms are evolving in complexity and range, leading to efficiencies that were considered impossible only a few years ago, but with the side effect of becoming less transparent. 

The advances in complex algorithms for next-generation service delivery are deeply related to another area of growth: big data in urban environments. As mentioned above, the exponential increase in IoT enabled devices that feed information to city managers has created immense databases with information about an unprecedented quantity of citizens, public assets, and processes. The days of narrow samples from where larger conclusions can be extrapolated are rapidly coming to an end. If a city authority wants to measure the amount of solid waste being generated on a monthly basis, it no longer monitors only certain collection points and then calculates a reasonable average. It can place cost-efficient sensors in every collection point and know them in real time. 

Big data allows for urban knowledge on a massive scale. But the way the data is converted into knowledge – and eventually into decisions and policy – is through algorithms that are able to process an almost infinite amount of information, in a short period of time, and represent with metrics that are simple to digest for the people who are responsible. These algorithms are changing on a daily basis, sometimes through improvements that humans include, but most of all through real-time self-improvement as they come in contact with more and more real information. The second case is the one that requires the most attention. Machine learning makes processes more efficient almost on a real-time basis, but the way it’s happening is not necessarily understandable by the officials who are ultimately responsible for the outcomes. This represents an accountability challenge in two ways: first, in a more straightforward sense, it’s difficult to measure the efficiency and impact that a particular city management team is having if the process which they followed is not fully understood; second, without understanding why an algorithm is making certain adjustments, it’s impossible to measure and control potential bias. In essence, the design and implementation of AI have advanced exponentially, and solutions for monitoring it have to catch up.

While transparency and accountability are values that are important across all industries and organizations, it’s safe to assume that there is consensus on their fundamental nature in relation to governance. Few subjects garner more global attention – although it varies from region to region, and culture to culture – than the checks and balances that should exist within government, and in its relationship with the populations they serve. These checks and balances are essentially processes that guarantee that every action of a government must have a reasonable explanation, and that negative actions will be corrected. Combine this essential element of society with the rapidly evolving nature of AI, and the challenge for stakeholders becomes evident: how can we monitor and correct processes which we increasingly don’t understand? Is AI making cities more efficient at the cost of making them less inclusive

As we grapple with all these challenges and questions on how community-centric service delivery is evolving, it’s important to ground the analysis with specific cases and trends that are being used in cities around the world. Some projects and policies are based on true and tried methodologies that are now being augmented by access to big data and machine learning. For example, in Medellin, there is an ecosystem of control centers that have been coexisting and supporting each other for more than a decade. The first is the Integrated Metropolitan Emergency and Security System, from where 10 government agencies respond to emergencies in the city. The second is the Early Warning System, which integrates information from over 100 sensors of different types, to assess in real-time risks of hydrometeorological and air quality. Last, they have a Mobility Control Center that focuses on intelligent transport systems, logistics, and citizen engagement. Although centralized control infrastructure has been in place for decades, they have only recently begun being deployed more broadly, taking advantage of a much wider supply of technology, with lower price points. Also, the way these control centers operate is changing rapidly, going from solutions that focus primarily on maintaining agencies coordinated to solutions that emphasize data analysis and real-time decision making. 

The state of play is not limited to traditional government agencies and investment in technological infrastructure. Private actors are getting involved in public service delivery solutions at an exponential pace, while exploring innovative ways to “interpret” cities as a whole. One such case is Citibeats, an ethical AI company that is capable of analyzing social media interactions – numbered in the hundreds of thousands – to understand the topics of concern for citizens in real time and lead to more robust decision making by authorities, with a clear focus on inclusion. Their most recent deployment – in Panama during the Covid-19 pandemic – allowed them to identify key trends of challenges that the population considered top priorities for the government to resolve, including but not limited to economic reactivation policies, support for more digital transformation in workplaces, and growth programs for SME’s. 


Additionally, great advances are being made not only in the amount of IoT enabled devices that are available and deployed, but especially in the capabilities that each one of these devices has, in order to deliver high-quality and above all, useful data for data driven-governance. Here is where the concept of edge computing becomes a key factor. The concept refers to all the processing that occurs in the device itself, versus what needs to be sent to centralized servers for analysis. Edge computing not only has an effect on strictly technical capabilities like speed and reliability of processing, but also incorporates advantages regarding some of the challenges these networks face on cybersecurity and data privacy. All this together means that government institutions will have almost an infinite amount of data to work with, at a speed that until only a few years ago was considered unreachable. 

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Cybersecurity in service delivery appeared first on Atlantic Council.

]]>
How DNA-reading technologies promise to boost social and economic trust https://www.atlanticcouncil.org/content-series/economy-of-trust-content-series/how-dna-reading-technologies-promise-to-boost-social-and-economic-trust/ Wed, 22 Dec 2021 10:19:00 +0000 https://www.atlanticcouncil.org/?p=472945 The expansion of non-medical uses of DNA-reading technologies promises to unleash the immense benefits of bio-technologies in our societies, while expanding the public’s trust in its capabilities.

The post How DNA-reading technologies promise to boost social and economic trust appeared first on Atlantic Council.

]]>

Editorial

Our societies may have found the ‘Next Big Thing’ – but rather than under, it is in our noses. This is DNA-reading technology and its current most famous application is the COVID-19 PCR test. COVID-19 PCR tests detect the virus by magnifying a speck of a patient’s DNA acquired through a nasal or throat swab. Since 2020, these PCR tests have provided society a proof of concept over DNA-reading’s utility. Now, non-biomedical uses of the technology are coming online – by inserting a speck of DNA onto a product, a machine can identify the origins and characteristics of that product. A 2020 Harvard study showcased this. Scientists attached inactive DNA of a location-specific bacterium onto a product. Even after wind, rain, and vacuuming, scientists could still detect this bacterium with a PCR test.

The expansion of non-medical uses of DNA-reading technologies will boost trust in society and the economy. Consumers can know whether the products they buy come from where they claim to. For example, the UK fish and chip industry has enjoyed success using this technology for a number of years. The Marine Stewardship Council (MSC) certification uses DNA-reading technology to authenticate the origin and type of fish at supermarkets and chippie shops. This has cut the rate of mislabeling to half the global average, boosting British consumer confidence in their fish products. DNA reading technology also promises to benefit developing countries, where consumer product regulations are often weaker. Making DNA-based authentication widespread could raise the quality of consumer products and increase confidence in private sectors.

Another potentially pivotal biotechnology is brought by scientific improvements in synthetic biology. A recent New York Times article argued that synthetic biotechnology “holds the promise of reprogramming biology to be more powerful and then mass-producing turbocharged cells to increase food production, fight disease, generate energy, purify water, and devour carbon dioxide from the atmosphere.” The optimism behind synthetic biology and its underlying technologies (gene sequencing and DNA synthesis) assumes that biology can now largely follow the trajectory of computing, where progress was made possible by the continuous improvement in microchips, with performance doubling and price dropping in half every year or two for decades.

While synthetic biology and DNA-reading technologies have some way to go before widespread use, other emerging technologies are easing the way. Today, storing a megabyte of data should cost $100; however, it currently stands at $1000. A recent blog post by The Economy of Trust Foundation shows how advances in efficient data storage technology promise to make DNA-reading technology more commercially viable, while storing DNA-read materials on the blockchain can reduce the risk of data tampering. What is more, it argues that experimentation with DNA in space has provided an excellent testing ground to improve DNA sample durability against corrosive radiation. Ultimately, developing emerging tech promises to unleash the immense benefits of bio-technologies in our societies, while expanding the public’s trust in its capabilities.

Sincerely,

Pascal Marmier
Economy of Trust Foundation / SICPA
Stephanie Wander
Atlantic Council GeoTech Center
Borja Prado & Maxwell Kushnir
Editors

Get the Economy of Trust newsletter

Sign up to learn about advances in technology and data activities that, through trust and more transparent frameworks, improve nations and sectors alike.

2021 Report Rewind

In this last edition of the year, we flag the GeoTech Center’s top 2021 reports – covering global tech competition and standards-setting, data ethics, and cyber risks among other topics. These reports explore the future of emerging technologies and their impact on geopolitics. They also analyze the state of those technologies, and the public policies needed to address current challenges and potential opportunities.

Latest Reseach & Analysis

The post How DNA-reading technologies promise to boost social and economic trust appeared first on Atlantic Council.

]]>
The next step in community-centric service delivery https://www.atlanticcouncil.org/blogs/geotech-cues/the-next-step-in-community-centric-service-delivery/ Fri, 17 Dec 2021 18:38:35 +0000 https://www.atlanticcouncil.org/?p=466491 Communities are evolving; as is the data they generate, so government service delivery must evolve as well. Richer data comes with a wide array of opportunities and a proportionate number of risks. Therefore, the future of digital government and centric service delivery requires a comprehensive roadmap that takes into account each area’s starting point, resources, and objectives.

The post The next step in community-centric service delivery appeared first on Atlantic Council.

]]>

Humanity is entering an era of unprecedented access to data on every facet of society. Two fundamental forces propel this change: (1) a remarkable increase in the volume and specificity of data, fueled by massive numbers of IoT-enabled devices coming online and high saturation of smartphone use; and (2) increasingly complex and sophisticated machine learning algorithms that allow swifter data analysis than ever before. These changes can be leveraged in two equally important ways. First, from a broad, generalist perspective, new data sources can help develop knowledge on and responses to issues that affect the future of humanity as a whole. This is the case for climate change, global trade, and, more evident now than ever, global health readiness. Second, more sophisticated, frequent, and granular data collection will allow governments to implement policies that adapt accurately to local cultures and geographies. These changes are not just in magnitude but rather the manner in which data is processed has fundamentally changed. Previously, technology was a tool to augment human capacity and efficiency without losing sight of what exactly the technology was doing. Now, humans will be increasingly separated from the intricate analysis process and mainly focus on obtaining new, improved data sets. This difference is challenging governments to completely rethink the way services are delivered to specific populations, not only from a practical perspective but from a legal and ethical one as well. Citizens interact with government-provided services on a daily basis in many areas where data-driven policy can significantly improve delivery: healthcare, education, waste management, transportation, utilities, land administration, and citizen security, for example.

The creation of strategies that tailor service delivery to a target group is often desired but prohibitively expensive and time-consuming. Only very large or very wealthy cities and countries have the necessary resources. Still, with recent advances in data collection and machine learning, significantly improving the portion of a population included in government service delivery is a short-term possibility. These advances will not only enhance the targeting of service delivery but will also improve the processes themselves, creating all sorts of efficiencies within government agencies and freeing up resources for reinvestment and new projects.

The only way to guarantee sustainable modernization of service delivery is to emphasize bottom-up approaches that take into account the different physical circumstances of each population center as well as their particular level of technological education, their openness to data collection and sharing, and their specific policy priorities. Sustainability comes from the disciplined deployment of new technology that boosts data collection and analysis at a local level. If these new tools remain high-end experiments only applied in select situations, then the gap in service quality between communities, even within a country, will continue to be an issue. Federal focus should be less on the specifics of deployment and more on governance, accountability, education, and funding. The future of digital government won’t arrive with the flip of a switch, but through a comprehensive roadmap that takes into account each area’s starting point, resources, and objectives.

The potential benefits of improved data collection and analysis are straightforward and powerful. While all public utilities tend to be essential, a useful example is access to clean drinking water, which is a challenge in developing countries and even in some regions of advanced economies. Community-specific data in real-time would allow better measurement of communities with poor potable water coverage and service history, more accurate supervision of maintenance and repair contractors, real-time analysis of budget inefficiencies, better engagement with concerned citizens, enhanced procurement processes to access more innovative solutions, and refined contingency plans, all based on precise local data. Service delivery at a community level has usually relied heavily on broader service models that are not finely tailored to each instance, and even those data sets require too much human-intensive activity to keep up to date.

The benefits of using targeted data to improve service delivery come with a fair degree of complexity and risk. Any given population generates data in a disproportionate way. For example, children, the elderly, and people living in poverty tend to interact less with the technology that collects data for decision-making today. In the case of clean drinking water, that asymmetry would mean basing policies for expanding coverage and improving existing service on potentially inaccurate data, or, even worse, data that excludes important sectors of the population from one of the most basic standards of living. Just as in the service delivery itself, these new forms of interaction with citizens come with a healthy number of legal and ethical responsibilities. Governments must design frameworks that allow for enhanced quality of service and accountability. At the same time, citizens should also be held accountable for their interactions so as to avoid incentives for efforts that seek to discredit governments without proper process and credibility. These new frameworks should also cover data rights and digital identification.  

Several jurisdictions around the world have experimented with crowdsourcing service delivery data by measuring very specific problems. One example is easy-to-use mobile apps to identify and report road issues. It’s a straightforward, useful way to identify priorities for city managers. However, problems have come up precisely because of asymmetrical reports across neighborhoods. For example, wealthier areas with younger populations have more citizens with cars and smartphones, so they create more data more frequently. The algorithm then generates maintenance orders for the appropriate department based on that data, leading to faster repairs in wealthy neighborhoods than low-income ones. The complexity that city employees face is threefold: the lopsided maintenance efficiency itself; the reaction of citizens in areas with subpar results; and the question of modifying collected data to improve the accuracy of service delivery. The discussion of this last issue is not only practical but ethical. Is the crowdsourced data really objective if it’s modified? Is it objective if it is unmodified? Who decides how to modify it, and what criteria will guide them? How does one guarantee that modifications are fair?

Refined data collection and the use of machine learning algorithms will not only allow exponential improvement in the quality of government services, but also in citizen engagement. Just as services can benefit from being more targeted and personalized, so too can the interactions between local governments and the citizens they serve. In this, the potential benefits of more granular data use fall into two categories. On one hand, it allows improvements in how citizens participate by customizing their service requests, accessing information to make better day-to-day decisions, and having access to a broader range of transaction options. On the other hand, it creates the potential for greater government accountability through citizen action with more accessible data that allows for the discovery of fraud and corruption.

Data-driven, community-centric service delivery will require the following:

  • Efficient, community-centric data collection that takes into account the specific characteristics of a given population;
  • Adequate processes to evaluate the data being collected, decide whether modifications are necessary, and under what criteria they will be made;
  • Targeted, responsive citizen engagement processes allowing for improved quality of service and increased accountability; and
  • Legal and ethical frameworks that protect the citizen from misuse of their data and the government from artificial campaigns of discredit.

Communities are evolving; as is the data they generate, so government service delivery must evolve as well. Richer data comes with a wide array of opportunities and a proportionate number of risks, and finding the right balance could lead to a period of unprecedented range and quality of public services.


The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post The next step in community-centric service delivery appeared first on Atlantic Council.

]]>
Postpandemic letdown and western disarray https://www.atlanticcouncil.org/blogs/geotech-cues/postpandemic-letdown-and-western-disarray/ Wed, 08 Dec 2021 10:30:00 +0000 https://www.atlanticcouncil.org/?p=465909 After a spurt of inclusive growth, in which most segments saw gains, all the prepandemic structural problems resurfaced, particularly the inequalities that had grown worse under the pandemic.

The post Postpandemic letdown and western disarray appeared first on Atlantic Council.

]]>
This page is only an excerpt of a technology foresight report in order to give readers an introduction to the topic and the opportunity to browse through alternative futures. To access all content, please download a digital copy of the paper or return to the main report page.

Hopes were high in the middle of 2021 that the West would pull out of the pandemic and see accelerated growth and a return to relative normalcy after a year of deep recession. Yet after a spurt of inclusive growth, in which most segments saw gains, all the prepandemic structural problems resurfaced, particularly the inequalities that had grown worse under the pandemic.

Believing it is best not to depend too much on the vagaries of human employment, employers raced to automate as much of their business as possible. For the unskilled and semiskilled, whom everyone depended on for basic services during the pandemic, it was a double whammy. Initially, their wages had grown as employers had no choice but to hike pay to attract any workers. Then, without the necessary tech skills, they soon learned they were expendable when firms began to automate their operations. Despite central banks’ monetary-easing efforts, there was no return to prepandemic full employment. Worker participation rates dropped in the advanced economies as many of the low-skilled workers grew frustrated in the search for good-paying jobs. Over time, many of the unskilled and semiskilled dropped out of the workforce or retired early.

The more tech-savvy workers had largely done well and saw their wages improve in the aftermath of the pandemic. That initial improvement was not, however, replicated year after year. Automation was now also impacting the more complex work processes that formerly required skilled humans to operate. Although not all their jobs were made redundant, there was enough disruption that even retained skilled workers felt the pervasive, growing sense of job insecurity. The prepandemic pattern of capital being remunerated much more than labor resumed. Business leaders made the case that productivity gains from automation had boosted GDP in advanced economies above prepandemic levels and government revenues as well, which helped with increased social welfare demands.

Moreover, automation was helping firms deal with China, which was increasingly unfriendly to Western businesses. After being the other large economy that didn’t suffer a severe recession during the pandemic, China’s growth sputtered in the years following the initial outbreak of the coronavirus. Continuing outbreaks from different variants, such as delta or omicron, crippled parts of Chinese industry. Xi Jinping’s data security reforms also hit China’s tech firms hard. Beijing’s efforts to de-Americanize China’s supply chains— part of the Made in China plan—caused more disruption. With tensions increasing, US and Chinese firms sought to avoid any dealings. European businesses were caught in the crosshairs, and some bowed out of the Chinese market for fear of US secondary sanc- tions while others concentrated on doing business with China and sold off their US interests. With the contraction of global supply chains, US and European firms saw an opportunity to eliminate jobs through advanced automation technologies. Chinese businesses were more constrained in investing in automation technologies as the government was worried about higher unemployment. Robotics and 3D printing also took off in the labor-saving effort by Western businesses.

Workers’ Rights and Reforms

Smaller countries fared better than larger ones in stanching the growing societal divisions that grew out of rapid technology changes. To begin with, the income disparities were not as high in the many smaller European countries that had invested in expensive social welfare efforts. There was an understanding that automation could not be stopped—and shouldn’t be for the sake of improved efficiencies and all-around productivity. After all, automation was a godsend for Western societies with low birth rates and rapidly aging societies. Instead, the unskilled should be incentivized to learn new skills. Indeed, the educational systems would have to be completely remade. Everyone had a right to periodic sabbaticals for months of learning new skills. Just as there was a right to healthcare and retirement, all workers had opportunities for lifelong learning. Businesses could see the benefits.

Larger European countries had a harder time coming around to revamping the whole educational system, despite the benefits these smaller countries were achieving. There was pushback by businesses against another set of enhanced worker rights which the private sector would have to shoulder. In these bigger societies, reform had been more difficult for some time, adding to the challenge undertaking these reforms. In France, for example, where the reelected Macron government had been trying to lessen the burdens on employers, there was worry that enhancing the existing training programs and relatively generous social welfare would be too costly. Critics cited the low educational standards in job-deprived and socioeconomically disadvantaged areas as the real culprit for workers not being able to easily upgrade their skills.

In the United States, deep political partisanship combined with a decentralized educational system slowed any reforms. Americans had seen sagging educational standards for some time, which federal government officials felt increasingly powerless to reverse given much of the authority for the educational sector rested with local and state officials. Conservatives decried the growing role of government in the economy and saw the new proposed training-voucher scheme as pushing the country toward socialism and higher taxes. The growing numbers of college and high-school dropouts fueled populism at both ends of the political spectrum—left and right—leading to a political crisis. When the unemployed staged a million-person march on Washington, the National Guard was called out to protect the protesters from armed right-wing militant groups. As it was, the battles between protesters and the radicals resulted in several hundred dead and much of downtown Washington vandalized. Similar riots broke out across the country. At the congressional midterm elections, lawmakers calling for increased training programs and a top-to-bottom reform of the US education system were elected. Businesses also saw that they had gone too far with automation and promised to retrain existing workers for new jobs instead of just firing them.

New Social Model Evolving

Aided by the lessening of fears of a super-competitive China, Western leaders felt they had some maneuvering room to develop a new social model countering what was the fragmenting effect of the new technologies. Just as World War II had been important for spurring a new social peace buttressed with healthcare and pension benefits for all, the postpandemic era ended up redefining social welfare. Educational excellence would no longer be reserved for the privileged who could pay for it. Everyone had a right to having their abilities fully developed with no one being left behind. For decades, teachers in many Western societies had been poorly paid.

That changed along with the importance of providing a good education to everyone. Several big corporate CEOs took the lead in trying to regain the trust of their employees by offering more social benefits—paying for educational and retraining programs—and promising new employment to those whose jobs were eliminated through automation.

With personal dignity being so connected with employment, the concept of work was expanded. Volunteerism was honored and treated as equivalent to paid work. Moreover, with the rapid expansion of the educational sector, many jobs were created that did not exist before. Small and medium-size businesses—not just the big ones—became more adept at retraining and finding new opportunities for their workers. Where young workers once planned to spend only a few years with an employer, they now found the advantages of staying and benefiting from retraining so enticing that many ended up, like their grandparents, staying with one firm for their whole careers.

At times it had looked like some Western societies would be pulled apart and there was no hope of finding a solution to inequalities. Yet there was a deep, popular well of support for inclusiveness. The pandemic had been an eye-opener for many of the deep divisions in society. For the more tech-savvy, younger, and coming-of-age generation, it was intolerable that the unskilled and semiskilled should be “losers” in the latest technological revolution. Older generations—increasingly victims of automation—also began seeing the benefits of a better social safety net. Over time, the fears fueling populism dissipated and centrist politics came back with the maintenance of a social consensus, a broad-based popular expectation for political leaders.

This page is only an excerpt of a technology foresight report in order to give readers an introduction to the topic and the opportunity to browse through alternative futures. To access all content, please download a digital copy of the paper or return to the main report page.

The post Postpandemic letdown and western disarray appeared first on Atlantic Council.

]]>
Europe in a bipolar tech world https://www.atlanticcouncil.org/blogs/geotech-cues/europe-in-a-bipolar-tech-world/ Wed, 08 Dec 2021 10:30:00 +0000 https://www.atlanticcouncil.org/?p=465922 With no sign of Beijing backing down, the US administration lays out a strategy for restructuring NATO to be targeted on Russia and China, combining its allies from Asia and Europe into an enlarged, redefined alliance.

The post Europe in a bipolar tech world appeared first on Atlantic Council.

]]>
This page is only an excerpt of a technology foresight report in order to give readers an introduction to the topic and the opportunity to browse through alternative futures. To access all content, please download a digital copy of the paper or return to the main report page.

In the run-up to the 2020 presidential election, Biden promised to turn the clock back on Trump’s policy changes. When it came to China, however, Biden piled onto Trump’s hostility toward Beijing. US tariffs on Chinese imports have stayed in place despite Beijing’s call for them to be reduced. The Biden administration, in coordination with the EU, has sanctioned China for its ruthless repression of Uighurs in Xinjiang and taken additional measures to punish the country for cyber hacking. Sino-US tensions continued to build in the South China Sea and over Taiwan. With no sign of Beijing backing down, the US administration lays out a strategy for restructuring NATO to be targeted on Russia and China, combining its allies from Asia and Europe into an enlarged, redefined alliance. Neither European nor Asian allies are keen on these US ideas, but temper their criticism to avoid offending the still predominant superpower.

Squeezed by Sino-US escalating tensions

With both Asians and Europeans less than enthusiastic, Washington puts the enlarged NATO idea on the back burner. Yet Europeans are less able to fend off Washington’s idea of resurrecting the Cold War-era Coordinating Committee for Multilateral Export Controls (CoCom), which was used to embargo exports of sensitive materials to communist countries. The US administration believes the competition over emerging technologies is at the heart of the conflict with China. Many in Washington subscribe to the belief that the Asian country has only become the leading tech competitor through its theft of US intellectual property. Besides export controls of cutting-edge tech, decision makers seek to wean Europe off China’s tech exports. Denying the country’s tech giants market access to Europe and the United States would, American strategists believe, curb Chinese innovation.

Increased US extraterritorial measures mean that the EU finds it hard to proceed with its goal of “strategic autonomy” and finding a “third way” without European businesses incurring restrictions on access to US markets. The US administration says it will offset any harsh anti-Chinese measures by offering greater support to the Europeans against Russia. Northern European export-dependent economies are likely to be conflicted and divided in their reactions to such an anti-Chinese push by Washington. The Baltic states, ever mindful of the Russian threat, are an exception and welcome the increased US commitment. At the same time, the Baltic states have been part of the 16+1 format with China, a platform initiated by Beijing to foster cooperation; although they lack deep ties with China, most of them have been hoping (like other Eastern Europeans) for more Chinese investment and trade. Under pressure from Washington, the countries of the region sign on to the US offer, sacrificing the possibility of strong economic ties with the Asian giant.

By contrast, the Scandinavian nations and Germany find the increased hostility toward China under Biden or any other subsequent US president very unwelcome. Berlin’s most important trading partner is China; Finland is the biggest EU investor in China in proportion to the size of its economy, and China is Sweden’s largest trading partner in Asia. Overall, the EU has become the country’s biggest trading partner and the two sides—EU and China—recently signed an upgraded trade deal, expanding the one that was signed and then halted in 2021. Squeezed between the United States and China, the Europeans—particularly Nordic nations and Germany— would pay a stiff economic price for going along with any US strictures against China and would use their diplomatic power to argue for a course change in US foreign policy.

Other EU countries are less economically dependent on China, but resent US interference and push back against US extraterritorial measures while professing their commitment to strong transatlantic ties. The EU tries to walk a fine line and neither offend Beijing nor Washington, finding it increasingly hard to defy American decision-makers on sanctions and tariffs against China without endangering US/NATO security guarantees.

All European governments on edge

At home, the European social model is under increasing pressure. Like the United States, many EU member states instituted new taxes on the wealthy to cover budget shortfalls. While subsiding during the first waves of the coronavirus, populism is on the upsurge again. After the initial economic surge, European economies slow, giving populism a new lease on life. The EU and immigrants are targets for the renewed surges, and nationalists are gaining election victories in multiple member states. There is a growing sentiment in favor of protectionism and the establishment of more border controls. Eastern Europeans even begin refusing entry to European citizens with immigrant backgrounds.

European split on a single foreign policy

Despite initial efforts to find a united middle ground, Europe splits and wavers in the face of US pressure. France and the Baltic, one or two of the Nordic, and several East European states try to temper growing US antagonism, but share Washington’s worries about a “hyperpuissance” in the East. Since Brexit, the United Kingdom has been trying to open new markets in Asia, including in China, but sees no real alternative to the United Sates remaining its closest ally. London remains the first to always accede to US pressure.

The Baltic and East European governments worry that Russia will take advantage of Western weakness and intervene in their countries. Moscow’s strong ties with China are seen as giving Putin more self-confidence despite Russia playing a junior role to Beijing. Germany and some of the Nordic states become even more adamant in their belief that China is their economic lifeline. With Western markets slowing, Asia looks to be the only outlet. Italy and some of the Eastern European states like Hungary are also eager for new Chinese investments, and hedge their bets.

Out with strategic autonomy, in with hedging

The growing split and mutual attacks by the two internal camps paralyze the EU. The initial rescue package that many observers saw as a step toward greater integration is never repeated. The idea of strategic autonomy is forgotten. Enlargement is at a standstill despite renewed calls from Ukraine, Georgia, and others seeking entry. China’s deteriorating human rights record and saber-rattling against Taiwan angers many European publics, sparking a growing popular movement throughout Europe opposed to China. Germany seeks to mediate, going along with some punitive measures against Beijing and Moscow, but diluting others. Berlin and Paris publicly object to US interference in EU affairs.

Europeans in both camps secretly welcome Chinese efforts to invest in developing countries, hoping the economic assistance can help stimulate economic activity and tamper migration even though they fear the Chinese efforts will end up bolstering authoritarianism throughout the world. Yet European countries don’t have the means to engage even in their traditional backyards. Paris has given up its fight against terrorism in the Sahel. Europe watches as Russia and China increasingly call the shots in Africa and the Middle East. Focused on battling China in East Asia, the US administration puts the blame on Europe for these failures, without wanting to intervene itself. The only united effort that all member states can still agree on is beefing up maritime patrols in the Mediterranean to close the EU’s external southern border.

In Washington, there is finger-pointing over who lost Europe. There’s a growing realization that the United States overreached despite its initial effort to rally the West. While in Europe, there is a worry about the future of the European project. Both the United States and the EU seek to paper over differences, but for China, the transatlantic split is further evidence of Western decline, feeding the hardliners’ appetite for more aggressive actions to expand Chinese influence in the region and beyond.

This page is only an excerpt of a technology foresight report in order to give readers an introduction to the topic and the opportunity to browse through alternative futures. To access all content, please download a digital copy of the paper or return to the main report page.

The post Europe in a bipolar tech world appeared first on Atlantic Council.

]]>
Counting the costs of technonationalism and the balkanization of cyberspace https://www.atlanticcouncil.org/blogs/geotech-cues/counting-the-costs-of-technonationalism-and-the-balkanization-of-cyberspace/ Wed, 08 Dec 2021 10:30:00 +0000 https://www.atlanticcouncil.org/?p=465926 While it started as a well-meaning effort to prevent disinformation and propagation of violent extremism, the increasing regulation began to fracture the Internet into at least three largely separate regimes, reinforcing the forces of technonationalism and protectionism.

The post Counting the costs of technonationalism and the balkanization of cyberspace appeared first on Atlantic Council.

]]>
This page is only an excerpt of a technology foresight report in order to give readers an introduction to the topic and the opportunity to browse through alternative futures. To access all content, please download a digital copy of the paper or return to the main report page.

Two trends come together: digital sovereignty and fighting disinformation. At one time, Western democracies were committed to an open, free Internet with minimal government involvement. That was, however, before the social media channels became the arena for hatred and disinformation. The Europeans got angry when the big US tech giants did such a poor job policing it. In the United States, Republican politicians accused the tech companies of being biased, banning Trump and other conservatives from Twitter as well as other outlets. At the same time, many moderate politicians, like their European counterparts, thought Facebook, Google, and others could do a better job eliminating hate speech. Worldwide, “Internet sovereignty” was catching on. Already in 2019, thirty-three governments shut down the Internet 213 times, up from the previous year. Whereas “Internet sovereignty” was once associated just with China’s “Great Firewall” of censorship, it became popular with other governments, such as India, Russia, Turkey, and Indonesia, too.

While there were varying degrees of government control over the Internet, the trend line became clearer and darker as democracies moved in the direction of authoritarianism, believing that liberal markets were no model for the digital age. While they still decried China’s growing repression and use of social media to target dissidents, the Internet was seen as a threat to democracy, too, rather than a bulwark—the way it was originally portrayed. For Western elites, the unregulated digital space was a conveyor belt of disinformation, making it virtually impossible to govern. The French post-pandemic presidential election, for instance, was marred by widespread disinformation campaigns both by domestic as well as international foes of President Macron. The newly elected president blamed his near-defeat (it was only on the recount that he emerged victorious) on the disinformation coming from right-wing extremists. Anti-immigrant groups throughout Europe were active in trying to defeat him and other liberal forces.

The right-wing, Trump-supported attack on the Capitol on January 6, 2021, had been pivotal in persuading lawmakers that there had to be more oversight of social media. For many progressives in the Democratic Party, the tech companies were too big and monopolistic anyway and should be broken up. It was only a half step for them to call for more regulation of the companies to prevent the spread of domestic radicalism. The United States also instituted curbs on Chinese technology, including their apps. The government in Beijing moved to tightly regulate China’s tech companies’ operations abroad, convincing US regulators that those companies could not be trusted with data gathered in the United States. Over time, US tech companies saw their market share dwindle in China and Asia, as more and more US government regulatory curbs encouraged Chinese tech companies to leave the US market, too.

While it started as a well-meaning effort to prevent disinformation and propagation of violent extremism, the increasing regulation began to fracture the Internet into at least three largely separate regimes, reinforcing the forces of technonationalism and protectionism. Because of security fears, the United States and China became highly protected tech markets; Europe has less of a choice, not having tech champions of its own, so both US and Chinese tech companies operated there, but under EU regulatory control. The economic costs of such a fractionalizing of the Internet were staggering. Before all the new regulation, a report by Japan’s Ministry of Economy, Trade, and Industry (METI) had estimated that at least half of all trade in services is ICT-enabled (between 50 and 56 percent); digital commerce would account for 25 percent of global trade by 2025; and that this percentage would likely accelerate by an order of magnitude over the coming decade.

Efforts to negotiate globally agreed standards governing the use of software codes, data sharing, and/or commercialization of private content and storage of data, as well as minimally accepted standards on privacy—vital for the continuing flow of data—broke down or became too complex in view of the proliferation of national requirements. Digital commerce depends on open commercial, scientific, and academic data flows. Without such flows, joint research efforts also ceased to exist. Increasingly, scientists were only working with counterparts in their own country, not those outside. In particular, the number of Chinese students and researchers in the United States began to dwindle significantly.

The medical and other supply chain shocks from COVID-19, combined with the growing US distrust of China, lent support to the increasing protectionism and breakdown in flows of information and people. In addition, the United States sought to export its standards. Even before the recent regulatory-driven breakup, the American decision-makers had tried to mobilize support for anti-China “clean networks” banning Huawei infrastructure. It wasn’t always successful, however. China offered too many economic enticements for countries even in the United States’ own backyard—Latin America— for all countries in the region to fall in line with Washington’s dictates.

Europeans decide to fight back

Europeans began to worry about their own ability to trade—not just with China but other countries in China’s orbit—and stayed out of the US clean networks program themselves, even though they followed many of the guidelines for their domestic systems due to worries, for example, about the security of data running over Huawei-built infrastructure. Brussels therefore began efforts to counterbalance the fractionalizing of cyberspace, calling on Washington and Beijing to support an international effort to map the future of the world’s climate, using the latest breakthroughs in quantum computing. Taking a leaf out of its own history, EU leaders thought cooperation on climate—a pressing interest for all, like the establishment of the European Coal and Steel Community after WWII—could decrease the centrifugal dynamics of technonationalism.

At first, Washington was wary, but when it saw Brussels sign an agreement with Beijing for a joint research effort, it wanted in. The EU said there would be no proprietary information. The detailed output—a mapping of likely effects of climate change over the next hundred years—would be a free good for countries participating in the project. Such data would be the basis for policy decided by the next UN Climate Conference, which the Europeans were scheduled to host. Any country not participating would be at a disadvantage. The fruits of an international brain trust using the latest quantum computing could demonstrate how cooperation was much more powerful than competition and conflict, curbing for a time at least the growing US-China hostility. Without more international cooperation on climate change, decision makers risked incalculable harm to everyone’s future. Were Americans really ready to balkanize the Internet if it meant undermining prospects for global innovation that could help save the planet? Moreover, EU leaders were confident that young people everywhere would side with them, putting pressure on Washington and Beijing to limit their competition and explore avenues for an era of great power cooperation.

This page is only an excerpt of a technology foresight report in order to give readers an introduction to the topic and the opportunity to browse through alternative futures. To access all content, please download a digital copy of the paper or return to the main report page.

The post Counting the costs of technonationalism and the balkanization of cyberspace appeared first on Atlantic Council.

]]>
How public trust survives in the era of automation https://www.atlanticcouncil.org/blogs/geotech-cues/how-public-trust-survives-in-the-era-of-automation/ Tue, 26 Oct 2021 19:00:21 +0000 https://www.atlanticcouncil.org/?p=448872 Automation has the potential to displace millions of jobs, while creating new ones. Drastic shifts in the labor market should offer both hope and caution; they will impact each nation’s economy significantly, and alter the demand for skills in employees, but may also stir social structures and affect citizens’ trust in their respective governments, public institutions, and the private sector. How should global leaders react?

The post How public trust survives in the era of automation appeared first on Atlantic Council.

]]>

Editorial

By 2025, automation has the potential to displace 85 million jobs, according to the World Economic Forum (WEF)’s latest “Future of Jobs” report. On a more hopeful note, the authors also argue that the robot revolution is expected to create 97 million new jobs at the same time. The resulting job balance may be positive, but these drastic shifts in the labor market should offer both hope and caution; they will impact each nation’s economy significantly, and alter the demand for skills in employees, but may also stir social structures and affect citizens’ trust in their respective governments, public institutions, and the private sector.

In essence, respective governments, nations, and markets that equip their citizens for the upcoming skills-transition will be most successful. Those nations and companies that fail to plan ahead and adapt their education plans will risk falling behind. For example, low-wage workers may need to shift to occupations in higher wage brackets and require different skills to remain employed – with analytical thinking, creativity, and flexibility being among the most sought-after skills of the future. In this vein, the Swiss apprenticeship model becomes a model for others. Switzerland’s dual system combines learning on the job – and being paid a learning wage – with one to two days of theory at school. With 230 vocational professions to choose from, ranging from catering to high-tech industries, around two-thirds of Swiss school leavers opt for an apprenticeship.

Across countries and supply chains, research has evidenced rising demand for employment, particularly in nonroutine analytics jobs accompanied by significant automation of routine manual jobs. As economies and job markets evolve, new roles will emerge across the care economy in technology fields (such as artificial intelligence) and in content creation careers. In response to these substantial changes and the rapid scale-up of digital platforms, the World Bank invites nations to (1) “ramp up investment in human capital and lifelong learning for workers,” (2) “strengthen social protection to facilitate work transition and reduce disincentives to the creation of formal jobs,” (3) “ensure affordable access to the internet while adapting regulations to confront the challenges posed by digital platforms,” and (4) “upgrade taxation systems to address tax avoidance and create fiscal space for universal social protection and human capital development.”

Technological progress will create multiple opportunities, but the process towards them can be disruptive – how well countries cope with the demand for changing job skills will also depend on how quickly the supply of skills shifts. Early planning by governments and companies alike can help avoid the escalation of unemployment rates and social unrest. Automation is no “side issue” —applied the right way, it has the potential to bring substantial benefits to both economic modernization and social well-being. These challenges will need to be approached in a humane and socially inclusive manner —offering education that keeps pace with the times, facilitating the creation of new livelihoods at all levels of society, or making the joy of invention accessible to all. It is not too late for nations and industries to adapt their learning and labor structures, increase collaboration between the public and private sectors, and build trust in the community and its leaders’ decision-making.

Sincerely,

Pascal Marmier
Economy of Trust Foundation / SICPA
Stephanie Wander
Atlantic Council GeoTech Center
Borja Prado
Editor

Get the Economy of Trust newsletter

Sign up to learn about advances in technology and data activities that, through trust and more transparent frameworks, improve nations and sectors alike.

2021 Report Rewind

Latest Reseach & Analysis

Getting from commitment to content in AI and data ethics: Justice and explainability

There is widespread awareness that the use of artificial intelligence and big data raises challenges involving justice, privacy, autonomy, transparency, and accountability. However, articulating values, ethical concepts, and general principles is only the first step

The post How public trust survives in the era of automation appeared first on Atlantic Council.

]]>
Building smarter military bases for climate resilient communities https://www.atlanticcouncil.org/blogs/geotech-cues/building-smarter-military-bases-for-climate-resilient-communities/ Fri, 01 Oct 2021 15:58:36 +0000 https://www.atlanticcouncil.org/?p=437963 To properly cope with climate-related dangers, the military must be able to future-proof its installations to defend themselves against twenty-first-century threats, specifically by capitalizing on the use of smart technologies.

The post Building smarter military bases for climate resilient communities appeared first on Atlantic Council.

]]>

For years, climate change has been considered one of the most serious threats to U.S. national security and humanity at large. Thus, there is little doubt that a rapidly shifting environment will continue to shape the geopolitical landscape. In fact, according to the latest version of the Intergovernmental Panel on Climate Change (IPCC) report, even under the lowest emissions scenarios, a 1.5-degree warming (compared to pre-industrial temperatures) or worse over the next two decades is “more likely than not” before the planet can begin to recover. Some environmental changes courtesy of warming temperatures are already irreversible within the current generation’s lifetime. This means that over the next two decades, humans are bound to see an increase in direct environmental disasters, and an increase in social and political fallout after these natural disasters. Essentially, humanity is at the brink of catastrophe and must stop pushing environmental boundaries or risk the humanitarian and fiscal costs of inaction.

Unlike previous U.S presidential administrations, the Biden administration has been relatively quick to recognize the dangers posed by climate change and has issued an executive order (EO) positioning it as a critical domestic and foreign policy crisis. The EO directs national security and foreign policy agencies including the Department of Defense (DOD), to incorporate climate change into their missions. The DOD in particular is increasingly struggling to cope with climate change, especially due to threats to its installations and bases.

A 2019 report released by the DOD analyzed 79 mission assurance priority installations based on their operational roles. It documented current and potential vulnerabilities to each of these installations over the next two decades and showed current or possible climate-related threats (e.g. recurrent flooding, drought, desertification, wildfires, and thawing permafrost). Unchecked climate change, along with sharp rises in severe weather disasters, will exacerbate global instability, resource deprivation, forced migration, and even violence. Without proper infrastructural adaptation and policy-based mitigation, the increasing frequency of extreme climate disasters will negatively affect U.S. military missions (including humanitarian aid missions), strategic positioning, and readiness.

Biden’s executive order is a good first step. However, meaningful action will not be possible without cooperation and bold actions from Congress and DOD. To properly cope with climate-related dangers, the military must be able to future-proof its installations to defend themselves against twenty-first-century threats, specifically by capitalizing on the use of smart technologies.

Domestic communities and military bases already affected by climate change

One of many areas near the southeast side of Offutt Air Force Base affected by flood waters is seen in Nebraska, U.S., March 16, 2019. Picture taken March 16, 2019. Courtesy Rachelle Blake/U.S. Air Force/Handout via REUTERS

In 2019, Offutt Air Force Base, headquarters of U.S. Strategic Command, which oversees the Pentagon’s nuclear strategic deterrence and global strike capabilities, experienced extreme flooding after a bomb cyclone storm flooded the Missouri River. Floodwaters reached up to 7 feet high and forced one-third of the base to relocate its offices. The base’s personnel had to scramble to save sensitive equipment, munitions, and dozens of aircraft. Col. David Norton, commander of Offutt, revealed the extent of flooding saying, “In the end, obviously the waters were just too much. It took up everything we put up.” He added, “[t]he speed at which it came in was shocking.” 

Although the extent of the flood may have taken some by surprise, the risks to Offutt were long known to the U.S. military. In 2011, floodwaters from another storm crept up 50 feet of the base’s runway. Despite knowing that this base was vulnerable to flooding, relevant agencies acted slowly. The risks exposed by the 2011 flood were only formally recognized in 2015, and construction was not approved quickly enough to reinforce an earthwork levee system that could protect against flooding. In the end, the approval only came in 2018, and the base flooded before construction even began. Estimates indicate the disaster will likely cost much more to repair than it would have cost to prevent; preventative action would have cost only $22.7 million, but instead, Congress had to approve approximately $650 million for a four-year rebuilding effort.

The Air Force is not the only branch of the military that has suffered severe climate-related disasters. Marine Corps Base Camp Lejeune is the main East Coast infantry base for the Marines. Unfortunately, Lejune was not built to withstand strengthening climate disasters such as Hurricane Florence, which caused an estimated $3.6 billion in damages to the base in 2018. Notably, the buildings at Lejeune that were constructed recently and at higher climate standards suffered little to no damage, while older buildings, including many key headquarters, were unable to withstand the high winds and flooding. According to Navy Captain Miguel Dieguez, Camp Lejeune’s facilities director, “Hurricane Florence… exposed the soft underbelly of our infrastructure here.”

The Marines must now rebuild to incorporate climate measures and resiliency at Camp Lejeune. In this case, such action is reactive, considering that earlier reports predicted such destruction and recommended preventive measures. Shana Udvardy of the Union of Concerned Scientists co-authored a report in 2016 highlighting the threat that climate change, especially sea-level rise, and flooding, poses to several bases, including Lejeune. Additionally, a Center for Climate and Security report was issued only months before Hurricane Florence hit Camp Lejeune. The report discussed the risks to the base and recommended significant upgrades to the base’s utilities to make them less vulnerable to storms and flooding.

However, because little action was taken to heed these warnings, Camp Lejune, like the multiple military installations that have faced climate disasters in the past decade, will take years to recover. Former Defense Secretary Mark Esper noted, ​​“You guys [Lejune] are competing with just one of many disasters.” Between just 2018 and 2019, contracted labor teams have had to travel across the country to fix multiple storm-ravaged bases, such as Tyndall Air Force Base, in Florida; Offutt Air Force Base, in Nebraska; and Marine Corps Recruit Depot Parris Island, in South Carolina.

Although such disasters have led to few human casualties, the increasing frequency and strength of these disasters suggest that climate change may soon lead to grave consequences for the armed forces.  Since the Gulf War, the United States has lost more F-22s to climate change than enemy combatants. And when Hurricane Michael decimated Tyndall Air Force Base in 2018, it crippled seventeen of the US’ F-22s (10% of the total inventory) and caused an estimated $4.7 billion in damages.

The truly tragic aspect of these losses stems from the fact that top leaders knew that this could be a possibility long before Hurricane Michael. Retired Gen. Gilmary “Mike” Hostage noted that an insufficient number of Tyndall’s hangars were built “to withstand the strength of the hurricane that hit [Tyndall], even though they had hurricanes like that back in the day.” More importantly, he was quick to note that other bases that faced “traditional” threats (e.g. missiles, enemy fighters, etc.) have long had the infrastructure to protect against storms because hangers needed to withstand enemy missiles could just as effectively shield the base against high winds. The fact that Tyndall did not have the proper infrastructure was not because it would have been financially imprudent or logistically impossible. It was simply the result of leadership failing to consider that climate change could be as destructive as an adversary.

Damage caused by Hurricane Michael is seen on Tyndall Air Force Base, Florida, U.S., October 16, 2018. REUTERS/Terray Sylvester

While Tyndall’s aircraft were not called to any operational missions during repairs, the changing geopolitical landscape may soon levy heftier demands and leave little room for error when it comes to military readiness.  In fact, Tyndall itself will soon house the F-35 fighter jets making it even more critical to focus on its design and protection.

Smart military bases translate to more resilient communities

Fortunately, much of the armed forces have already recognized this imperative and have begun to pilot initiatives designed to integrate climate resilience via smarter technologies and greener practices into “bases of the future.” While initial plans appear promising, they are being delayed by significant bureaucratic hurdles. It is imperative that the United States not wait till the next catastrophe but rather boldly expand these initiatives as most of the military’s strategic bases, while not yet damaged, are under critical threat.

Naval Station Norfolk, the largest base in the world and the US’s most important naval installation, is a prime example of an at-risk base. It is the fifth most vulnerable US base to climate change according to a report from the American Security Project.

The Navy has known about the risks of climate inaction for years and has pushed for greener practices long before Biden’s executive order. In 2009 for example, it launched the Great Green Fleet, an ambitious effort that sought to transition the Navy to partially run on biofuels. However, Congressional responses to the initiative were varied. The House Armed Services Committee constantly threatened to undercut the project by prohibiting the Navy from enacting the Great Green Fleet. Maj. Gen. John Pletcher, the Air Force’s deputy assistant budget secretary, highlighted the difficulties such political deadlock could have for military readiness, “I can’t submit a request to Congress for an unknown weather event,” Pletcher said. “What [the military is] always doing is chasing … the disasters that occur. If I submit to [Congress] a wedge that says, ‘I want to have this money in case something happens,’ it’ll probably be the first place that they have to go to source other requirements.”

Such Congressional inaction would be short-sighted and foolish, not just from a defense perspective, but also from an economic one. Climate disasters will become more frequent and destructive, and short of miraculous global climate action, they will continue to affect U.S. military installations and their surrounding areas. Proactive investment will allow these installations to better withstand climate threats while minimizing costs.

Military bases, especially those in the Continental United States (CONUS) have historically served as the lifeblood for hundreds of communities across the United States. They are often cultural hubs and the leading source of employment for communities that may otherwise not exist. One study found that in the state of Arizona alone, military bases contributed over $9.1 billion to the state’s annual revenue and accommodated over 96,300 jobs.  Tyndall Air Force Base (AFB), according to some estimates, has a “$2.5 billion-a-year economic impact” and accounts for 20,000 jobs. In short, investing in the climate resilience and efficiency of bases directly translates to investing in community infrastructure– a staple of Biden’s Build Back Better Initiative.

It is high time that Congress breaks its historic aversion to climate renovation and proactively invest in the climate resiliency of U.S. military bases. Note the numerous case studies in which Congress took prudent, preemptive actions in order to adapt to changing strategic dynamics. For example, in 1954, Congress authorized the research, design, and construction of a nuclear navy. The first ships, the USS Nautilus and USS Enterprise (CVN-65), which cost over $58 million and approximately $451.3 million (or the equivalent in $3.3 billion in 2010) respectively, would serve as the foundation for America’s nuclear navy for over 60 years. Today, all U.S. supercarriers and submarines are nuclear-powered. There is no reason why Congress could not find similar solutions for military bases. 

As mentioned in the National Defense Strategy, even the most protected military bases are no longer sanctuaries from threats. Unfortunately, it failed to mention that climate change could pose an equally grave threat to the homeland as traditional threats. Climate change is and will continue to be a strategic and existential threat to the armed forces. Changing environmental conditions could soon force the military to perform a vast array of functions ranging from traditional operations to humanitarian aid missions, power generation, and even water purification. To properly address this challenge, Congress, the military, and private partners must work together and utilize emerging technologies to build back better and build smarter bases.

Key recommendations

  • Partner with private-sector companies working to incorporate advanced technologies into “smart cities.” According to Deloitte, “smart military bases are the logical extension of existing smart cities.” The key benefit to a smart base would be that the inclusion of smart technologies would allow for flexibility and adaptiveness necessary to carry out the base’s primary mission irrespective of dynamic threats like a changing climate. Additionally, smart bases could be built incrementally over longer periods of time, which would grant fiscal flexibility and opportunities to test vulnerabilities via strategic reviews and wargaming. AT&T, The University of Georgia’s Institute for Resilient Infrastructure Systems, and the Defense Logistics Agency have all already become critical partners in rebuilding Tyndall, in a prime example of successful public-private partnership.
  • Invest heavily in improvements in climate data and analysis. Although the US already has some of the most advanced climate modeling equipment, there are still areas for improvement, particularly when it comes to increasing computing power. Higher resolution and increases in computing power will better capture many of the small-scale effects that are very important to analyzing regional climate change particularly in coastal areas.1
  • The DOD should mandate modeling high-likelihood climate disaster scenarios at critical bases. The analysis should take the following factors into consideration: warning time, response time, impact on infrastructure (particularly transportation and energy infrastructure), impact on military assets, and impact on the surrounding communities. The climate disasters experienced by the world today are worse than the projections that were made in the past decades, so in many cases, the defense community should assume that current worst-case scenarios may become future median or best-case scenarios. In order to prepare for high-risk disasters, the DOD must invest in smart technologies that are better able to integrate current and projected climate impact scenarios into DOD planning cycles and war games to make prudent assessments, investments, and infrastructure modifications.2

Looking Ahead

Today, there are more than $30 billion worth of maintenance and repair backlogs in the Air Force alone. Many of those issues can be directly attributed to climate change and its degrading effects on military readiness. Amidst renewed great power competition, emerging technologies, and unprecedented changes to the natural environment, the United States must shift its approach and focus on rising areas of vulnerability. Air Force chief of staff, General Charles Q. Brown put it simply: we must “accelerate change or lose.” Climate change is one of the most pressing challenges to the armed forces today, and reactive mitigation efforts will not suffice. Individual service branches can only do so much and cannot reasonably be expected to prepare for the most devastating climate disasters without significant strains on their budget. It is time that Congress recognizes new strategic shifts and reorients America’s military bases to confront the challenges and threats of the twenty-first century.

The views expressed in this publication are those of the authors and do not reflect the official policy or position of the US Air Force, Department of Defense, or the US Government.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Building smarter military bases for climate resilient communities appeared first on Atlantic Council.

]]>
Reimagining a just society pt. 6: Our planetary condominium https://www.atlanticcouncil.org/blogs/geotech-cues/reimagining-a-just-society-pt-6-our-planetary-condominium/ Wed, 29 Sep 2021 20:04:32 +0000 https://www.atlanticcouncil.org/?p=437965 Carol Dumaine's latest in a blog post series on "Reimagining a Just Society" recalls the tragic condo collapse in Florida last June and asks whether the commons elements of a typical condominium association suggest any parallels for understanding "global commons" or public goods in an age of pandemics, climate change and AI.

The post Reimagining a just society pt. 6: Our planetary condominium appeared first on Atlantic Council.

]]>

Champlain Towers South in Surfside, Florida after the collapse on June 24, 2021 (Image: Source)

On June 24, 2021, two out of three condominium towers of Champlain Towers South collapsed in less than eleven seconds. Soundless surveillance video taken from a nearby building revealed cement and steel silently cascading into billowing clouds of dust.  What looked like a controlled demolition of a building was anything but planned or normal. 

In fact, engineers with years of experience were as shocked as the general public by the event. The unprecedented collapse took the lives of 98 people, including one victim who died in a hospital shortly after the collapse.   

As the rescue effort continued into a second week, speculation as to the cause(s) of such a colossal structural failure was rampant. Questions of accountability and liability for the disaster arose amid proliferating stories of those most impacted by the condominium’s collapse. Some focused on the condominium’s shared ownership and questions of its responsibility and ability to act in time to head off such a disaster.

It’s generally well-known that “condominium”, a word with Latin roots, means joint- or co-ownership. Residents in a typical condominium are responsible for their individual units and share the use of common areas which are managed through an association often known as a “homeowners association” or a condominium association. In the concept of common areas is the principle of shared responsibility for their upkeep by individual owners who benefit from collective efforts that ensure regular maintenance of the whole. Occasionally, however, the extent of needed repairs can outstrip the ability or willingness of residents to pay for them. And in a “mini-democracy” such as a typical homeowners association, such disagreement can delay action until costs become prohibitively high or prospective action comes too late. In some cases, moreover, concerns and potential remedies may be outside the purview and ability of homeowners’ associations to manage.

Many people have experience with these associations and may live in condominium arrangements themselves. Until now, however, how many would ever consider the risk of the “whole” — of which personal property and individual lives are integral parts — collapsing? Do people in these residential arrangements typically consider the quality of the rebar or other interdependent elements, such as concrete support pillars, as part of the common area over which they have joint ownership? Would they ask whether the legal structures that make up a condominium agreement anticipate the need to protect the whole from total collapse? News media have reported on multiple structural issues, including factors surrounding but not integral to the complex — such as nearby construction activity and the building’s exposure to saltwater — that may have played a role in the collapse though a definitive conclusion about its cause(s) is likely months away.

The Endangered Global Commons 

Without diminishing the tragedy of the condominium collapse, it’s possible to view this disaster as an analogy for humankind’s challenges in reaching agreement on whether and how to value the “commons,” sometimes also known as public goods, on which all life, and not just human life, depends. “Commons” are “resource domains in which common-pool resources are found.”  (They can be as small as the parking lot for an apartment complex, according to Dr. Susan Buck, editor of The Global Commons: An Introduction, or as large as the high seas or the solar system.) Whether in an apartment condominium or on a planetary scale, the “commons” refer to areas outside of privately owned assets that are essential to their functioning and value, such as the rebar-reinforced concrete. And just as Florida condominium’s sudden collapse astounded engineering experts, the speed with which critical Earth life support systems are disintegrating and even collapsing is shocking scientists around the world. In general, environmental changes have been unprecedentedly rapid when they are considered on a geological time scale. In other words, environmental changes that once unfolded over thousands of years are now happening at an accelerated pace with implications for humankind’s future as well as that of all living species.

Consider that a span of 200,000 years is a blink of an eye in geological time — as the  “The Last Time the Earth Warmed” drives home to viewers — and it’s easy to see why many people who view today’s changes within more familiar time scales, such as the span of recorded history, are frequently caught by surprise. This pattern of frequent scientific surprise has its roots in the fact that observed environmental changes frequently have been in line with, or even exceeded, worst-case scientific predictions. As science writer, David Wallace-Wells puts it, “The terrifying distant future is already here.”

In the past year, for instance, unanticipated changes in the “commons”, or common areas on which all life depends, included:

The unexpected emergence of open water in an area of the Arctic (nicknamed the “Last Ice Area” because scientists did not expect it to be ice-free until about 2100) shocked scientists who thought this ice region was stable.  Such sea ice loss has many impacts including faster warming in the Arctic than at lower latitudes; increased permafrost thaw, which drives the release of carbon dioxide and methane gases; increased ocean absorption of heat that may impact the configuration of the jet stream that in turn could affect temperatures at lower latitudes; and coastal erosion, according to the National Snow and Ice Data Center.

• The unprecedently high temperatures observed in the Pacific Northwest in late June 2021 also killed up to a billion small sea creatures—including mussels, clams, and snails-and affected other plant and animal species in ways that again validated experts’ concerns about biodiversity, ecosystem functioning, and even the food chain on which humans depend.  According to conservationists and biodiversity experts, slight changes over time can have the most dramatic impacts on species, even if initially almost imperceptible by humans. 

• In the Antarctic, the Pine Island Glacier, also known as the “soft underbelly” of the West Antarctic Ice Sheet, started moving faster between 2017 and 2020, as about 20 percent of its floating ice shelf broke apart. A new study published in June 2021 warns that the rest of the shelf, which holds the glacier onto land, could fall apart in a few decades, rather than the century previously estimated, and is further evidence that global warming can cause abrupt changes in ice sheets.

•  Alarming loss of Amazonian rainforest last year highlighted the dangers to biodiversity as the world’s tropical regions are home to 80 percent of all the species in the world. The tropical regions play a “fundamental role”  in the fight against climate change as they are capable of absorbing up to five times more carbon dioxide than terrestrial forests, according to Diana Colomina, the Forest Coordinator of the World Wildlife Fund.


Image: Source

• In early September, over 200 leading health and medical journals co-published an editorial declaring a 1.5-degree-Celsius rise in global temperatures the “greatest threat to global public health.”   They warned that the science is unequivocal: “a global increase of 1·5°C above the pre-industrial average and the continued loss of biodiversity risk catastrophic harm to health that will be “impossible to reverse.”  Together they emphasized that “Thriving ecosystems are essential to human health, and the widespread destruction of nature, including habitats and species, is eroding water and food security and increasing the chance of pandemics.”

The Human World at “The Verge of the Abyss”

In a recent interview, United Nations Secretary-General António Guterres warned that the world is at the “verge of the abyss” in relation to climate change. He said: “In countries that have democratic institutions, it is the people who have to force their governments [to take climate change seriously].”He probably would agree that citizens must grapple with what is at stake if their elected officials do not lead on climate change mitigation and adaptation, both of which must also entail national and global action on the interdependent crises of biodiversity loss and the impacts on agriculture and public health.  To do so, they must consider the planetary commons issues that must be protected, maintained, or repaired. Citizens and relevant scientific and policy experts must also consider how democratic institutions and priorities must evolve to prohibit global temperatures from reaching uninhabitable heights while also ensuring eventual climate stability. 

While there are different definitions of “global commons” available, a modern concept of “commons”  needs to include protecting the systems that enable the basic requirements of human life, such as water suitable for washing as well as for drinking and cooking. Other “commons” attributes needing protection include: a livable climate; arable land and water-efficiencies capable of supporting agriculture; sustainable forests and oceans essential to the Earth’s respiratory system; access to affordable and reliable energy and health systems; affordable housing adapted to a changing climate; an economy that works for all; affordable prescription medicines; respect and preservation of endangered cultural heritage, languages, and artifacts; as well as health, transportation, digital, education, and access to safe and clean public parks and recreation infrastructure, to name a few. In the governance and information arena, it seems logical that the commons must include rule of law; broadband access; universally fact-based history, science, public and physical health, civics education complemented with opportunities for expanding critical thinking and equality in voting rights, and access to polls in free and fair election processes. At a time of rapid advances in artificial intelligence (AI), it also seems clear that a concept of the commons must include investments in forethought as to social purpose and policy for new technologies that have the potential to exacerbate existing socio-economic inequalities, undermine democratic discourse, and empower potentially harmful government and corporate surveillance, such as discussed in Redesigning AI:  Work, Democracy and Justice in the Age of Automation, edited by economist Daron Acemoglu. Finally, one more view of commons would recognize the rights of other species to habitable and sustainable ecosystems.

The task for societies in achieving inclusive economies, adapting to a changing climate, and ensuring ecological sustainability is harder than ever before. This is due to time lost and the environmental damage already incurred stemming from lack of political and popular will to acknowledge fossil fuel-based dangers that have been scientifically documented since at least the 1970s.  

This pattern of reality avoidance,  partisan politization of scientific facts, and the diversion of national resources and expertise onto other priorities, such as the 20 year-long war in Afghanistan which the US pursued under different administrations even in the absence of a national strategy has led to a predictable result, dwindling but ever-more expensive options to combat climate change-amplified disasters. When a country is engaged for years in military conflict and support to military operations, it oftentimes follows that other courses of action that could have been prioritized and pursued wither and die, or are never even imagined let alone explored.  Even worse, there is no agreed upon method of accounting for the opportunity costs incurred, although now nature itself is exacting an accounting through increasing signs of environmental devastation.

Towards a New Governance Paradigm

On an already radically climate change-altered planet, due to the combined effects of man-made climate change, deforestation, and biodiversity loss, countries face the need for a new governance paradigm of economic, public health, and societal well-being. When real-life scenarios such as the heat crisis in the Pacific Northwest or the Arctic freeze in Texas fall well outside of any models or climate projections, the topline takeaway needs to be that we already are living on a different planet. In this uncharted new context, traditional concepts of “national security” and “foreign policy” and even “international negotiations” must be reexamined for their fitness for purpose. New measures of economic well-being need to be adopted since over-reliance on conventional measures, such as GDP, have contributed to the climate emergency.  This requires a massive shift in thinking. It will be a shift that entails rapidly re-perceiving mankind’s place in the natural world, the health of which is essential to life itself.  Unfortunately, so far the global experience of the COVID-19 pandemic has underscored how unprepared global institutions and national leaders are when it comes to respecting and protecting the “global commons” and thus individual nations’ security and well-being.

Nevertheless, we fortunately have clues on how to proceed with attending to the global “commons” and making healthy global civilization a collective priority. For instance, some societies do better than others when evaluated on a global happiness index according to six variables — gross domestic product per capita; social support; healthy life expectancy; freedom to make your own life choices; generosity of the general population; and perceptions of internal and external corruption. According to this index, Finland, Denmark, Iceland, Norway, and the Netherlands rate as the happiest countries in the world.  All five of these reportedly relatively happy countries are also in the top ten of countries ranked as “full democracies,” according to The Economist’s Intelligence Unit’s Democracy 2020 rankings.. In this index, the variables used for establishing the rankings were: electoral process and pluralism, the functioning of government, political participation, political culture, and civil liberties. According to these criteria, the report ranks the US as a “flawed democracy.” A recent Freedom House report similarly emphasizes that the US is facing an “acute crisis for democracy.”

The September 2019 global climate strikes saw thousands of people protesting for more action on climate change (Image: Source)

As discussed by historian Timothy Snyder in Our Malady: Lessons in Liberty in a Hospital Diary, healthy democracy, environmental activism, quality education, and freedom from fear of bankruptcy due to an unaffordable health crisis, all bear on whether a country can summon the will and resources to confront climate change and be better prepared for other upcoming crises, such as epidemics and pandemics. However, even nations that are well-governed by efficient democratic standards are having difficulty adapting to new ecological and public health circumstances, as the COVID-19 pandemic’s effects in Sweden and Denmark, for instance, have underscored. National capacity and governance gaps are set to widen, according to a recent report of the US National Intelligence Council, the Global Trends 2040 reportSuch capacity gaps will lead to a more contested, fragmented and turbulent world, with a heightened risk of conflict, according to the report.  Politics within states are likely to grow more volatile and contentious.  In sum, the recent Global Trends 2040 report is not optimistic. The report warns that:

In coming years and decades, the world will face more intense and cascading global challenges ranging from disease to climate change to the disruptions from new technologies and financial crises. These challenges will repeatedly test the resilience and adaptability of communities, states, and the international system, often exceeding the capacity of existing systems and models [emphasis added]. This looming disequilibrium between existing and future challenges and the ability of institutions and systems to respond is likely to grow and produce greater contestation at every level.”

In recent years, ice shelves have experienced rapid disintegration (Image: Source)

Towards a New Political Economy

A failure of the commons, such as increasingly damaging and irreversible human-induced climate change, is sometimes known as a “market failure” or an externality because it is external to economic considerations of supply and demand.  A panel of economists and environmental experts hosted by the World Economic Forum emphasized in 2017 emphasized the “need to fundamentally transform our key economic systems — our energy system, food production system, our cities, and our goods manufacturing system. We simply have no other option.”

 The ability of the oceans or tropical forests to absorb carbon dioxide, or the polar ice sheets to reflect the heat of the sun, are not factored into a so-called “free market”-based view of the world that holds that less government and more market is the ideal form of economy. When the value of everything is reduced to its price in financial markets —and the basic life-support systems of the Earth, as well as inclusive political and economic participation, have no value in this system — elements of the commons can become degraded.  When this happens—the destruction of the Earth systems commons that enabled human civilization to evolve in the first place—natural systems on which life, and human society depend,—such as the food chain, are endangered as is the future of all species, including humans.  (There is an “externality” dynamic in traditional national security doctrine, which tends to discount such inherently global issues if they do not readily fit inside the predominant framing of a “national security” as one based narrowly on traditional nation-state “interests.” The “securitization” of a global issue, such as climate change, as a national security issue is fraught with such framing problems, an issue that is itself an example of a global commons failure with immense implications, ironically, for national security.)

On an already transformed planet where new environmental conditions threaten all forms of life, national priorities must change. Embracing different ways of thinking requires opening up to different perspectives, sharing ideas and knowledge, and actively listening to people who have not traditionally been part of “national security” or “foreign policy” or “global economy” discourse. It may require leaving behind traditional nation-state-based notions of “realpolitik” in order to give climate realities the attention and resources they require. This is hard to imagine given that there are unrelenting pressures for policymaking attention in other arenas. So immense is the climate challenge, however, that French sociologist and philosopher Bruno Latour, has suggested a new lens on international relations is necessary; “what counted before as ‘realpolitik’ is escapism now.” While governments must attend to multiple crises and threats at once — these can also be seen as distractions from the existential challenge of climate change which, however, cannot be addressed without also attending to its underlying root causes.

In sum, the implications of climate change are sweeping and systemic with physical impacts that will continue over future decades and centuries, according to the first installment of the latest report of the United Nations Intergovernmental Panel on Climate Change.   As a result, the organizing principles for our societies, and the institutions that undergird them, including educational, public health, transportation, economic, and security sector institutions, must adapt to deal effectively with climate realities. It will be a measure of a society’s well-being whether its citizens insist on this and are heard in ways that lead to effective climate action. Thus, effective democracy, global cooperation, and meaningful climate change action are necessary partners in protecting the “commons” or the “whole” biosphere essential to society and future generations. Reimagining a just society in the context of contemporary challenges is necessary for survival on national and civilizational levels. It requires new enlightenment and new concepts about the planetary condominium and the future of democracy. In future blogposts, this series will look at the works of modern enlighteners contributing to needed new thinking about economy, society, and government in an age of climate change disruption.

Previous installment

GeoTech Cues

Sep 7, 2021

Reimagining a just society pt. 5: “Is this working as intended?” — Global trends amid contested futures

By Carol Dumaine

The question of ‘is this working as intended’ is applicable to contemporary concepts of national and international security as well as of economic value, growth, and development. Given how our world is being reshaped by new technologies, data capabilities, and geopolitics, leaders in both the public and private sector need to pause and consider if governance and geopolitics in today’s world are actually working – or not.

Climate Change & Climate Action Security & Defense

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Reimagining a just society pt. 6: Our planetary condominium appeared first on Atlantic Council.

]]>
Getting ahead of the next catalyst: A new paradigm for cybersecurity in the space domain https://www.atlanticcouncil.org/blogs/geotech-cues/getting-ahead-of-the-next-catalyst-a-new-paradigm-for-cybersecurity-in-the-space-domain/ Tue, 28 Sep 2021 19:59:43 +0000 https://www.atlanticcouncil.org/?p=438089 Consider the terms “cyber attacks”1and” information and influence activities2.” These two terms were relatively infrequently used before the computer malware Stuxnet and the 2016 US presidential election3 , respectively. Yet these events and the terms characterizing the emergence of new threats mark a threshold where a traditional government issue transcended into the commercial arena, and when nation-state […]

The post Getting ahead of the next catalyst: A new paradigm for cybersecurity in the space domain appeared first on Atlantic Council.

]]>
Consider the terms “cyber attacks”1and” information and influence activities2.” These two terms were relatively infrequently used before the computer malware Stuxnet and the 2016 US presidential election3 , respectively. Yet these events and the terms characterizing the emergence of new threats mark a threshold where a traditional government issue transcended into the commercial arena, and when nation-state actor capabilities became commercialized and publicly available. Each of these events, in its own way, forced commercial companies to change their security methodologies and postures to mitigate risk and control potential blowback stemming from these types of incidents. 

In the current post-Stuxnet era, there exists a much-expanded digital infrastructure, tremendous diversity in the types of threat actors and their motivations, and exponentially more capabilities that can be leveraged for substantial impact. Similarly, in a post foreign influence environment, information and influence activity is now not only a threat to western political bodies and their ideologies, but also to the commercial domain due to the proliferation of disinformation-as-a-service4(DaaS) and related destabilizing offerings5. The cyber ecosystem established as a result of the digital infrastructure built post-Stuxnet was not designed to support the addressal of such malign information and influence activity. Hence, outside of recent advancements in detection, there is no consolidated solution to effectively counter the full scope and sophistication of malicious information and influence activity. 

Why do these events matter? Simply put, they provide illustrative examples of how new, cross-domain threats result from the emergence of novel cyber activities and the proliferation of related capabilities. It is only natural to wonder what domain might be next. In this post, an argument is made explaining the space sector’s unique vulnerabilities to such cross-domain threats. This post further explores how lessons learned from previous cross-domain catalysts can be applied in the space domain. The equivalent of a Stuxnet or foreign influence-like event in space would make space the third cross-domain issue in recent time to transcend from the government into the commercial arena. And while traditional nation-state actors, capabilities, and intents would again no longer remain under the purview of the government, anticipation of such an event can enable the identification of various commercial applications, as well as produce an unprecedented security posture to prevent foreign adversaries and threat actors from exploiting space as the next domain for malicious activity. 

Cyberattacks and information and influence activities provide critical insights into how both foreign adversaries and non-state threat actors will likely use space in nefarious ways to advance their agendas. These insights can shed light on how to monitor threat indicators; how to develop cyber and related (physical, etc.) security postures; and how novel assessment methods of key threat events may provide opportunities to mitigate risks while simultaneously advancing space technologies.

This post views space as an emerging threat domain displaying early vulnerabilities to pernicious cyber activities, as well as a new vehicle to support advancements in a variety of fields. It also analyzes and discerns between foreign adversaries and threat actors. Specifically, foreign adversaries are nation-state actors advancing policy objectives through overt and covert means. Whereas, by comparison, threat actors include domestic entities, shadow proxies, and criminal enterprises engaging in activities against various sectors for financial or reputation gain. 

Why does Stuxnet matter? 

Understanding Stuxnet is critical to developing an understanding of how to anticipate, through assessment, the threat surfaces that the space domain introduces, and how to develop proactive strategies to mitigate its vulnerabilities. Stuxnet’s use against industrial infrastructure was the catalyst that both brought cyber to the forefront of the world as an attack mechanism and transformed it from a government priority to a global threat. 6 Stuxnet initiated a series of events (expansion of cyber threat landscape, awareness to cybersecurity, etc.) leading to the establishment of digital infrastructure reaching global audiences irrespective of geographic region, an aspect of information security not previously prioritized, and a springboard for today’s technology companies to monopolize digital communication and connection.

Over the course of the last 10-15 years since Stuxnet, this digital infrastructure continues to exponentially evolve by increasing in scope, size, and utility. The quantity of commercial applications, companies, and cyber incidents continues to increase, as well as the sophistication and complexity of these activities (E.g. Colonial Pipeline ransomware attacks7 , US State Department cyber attack8 ). Compounding this is that regulation and security are always second to innovation. In other words, it was not until recently that significant strides in cybersecurity were made from a regulatory and security perspective9 to position companies more effectively and authoritatively against threat actors. These strides help decrease the delta between threat actor impact and having the appropriate tools to defend against such threats. From the types of defensive tools and software to advancements in foreign threat actor analysis, companies can adhere to a much higher standard to protect their business models while leveraging the diverse digital infrastructure.

Why does the 2016 U.S. presidential election matter? 

Like Stuxnet, Russia’s campaign to influence the outcome of the 2016 U.S. presidential election was an incident where a traditionally government-centric topic transcended into the commercial space. The primary difference this time was that the mature digital infrastructure that existed in a post-Stuxnet era was not built to detect, mitigate, anticipate, or respond to malicious information and influence activities. 

In addition, the delta between incident and capability development was significantly less than post-Stuxnet. In this instance, foreign adversaries and threat actors manipulated the digital infrastructure already established to launch successful malicious information and influence activities. The mediums to reach various target audiences already existed and were in place to deliver tailored messaging to change behavior and outcomes. 

How do threat actors evolve?

Foreign adversaries’ and threat actors’ capabilities, modus operandi (MO), and methods continually evolve to advance their interests. Traditionally, this is a classic cat and mouse game as nation-state actors engage in espionage-like activities to inform their evolution. Expressly, as nation-state actors conduct covert and clandestine activities, it is always a race to detect and attribute the activity. However, there are certain instances where operations are discovered and tools or capabilities are compromised.  Each time a compromise occurs, actors are forced to consider the potential risk of continued use of compromised capabilities and whether a change in their offensive posture is necessary. To avoid detection, adversaries may improve their tools, technique, and procedures (TTP) or MO. More importantly, nation-state actor tools have become more broadly known and available for commercial use. 

In each instance where a traditionally prioritized government topic (cyber, influence, etc.) transcends into the commercial space, the timeline of its otherwise natural evolution is compressed. There are countless instances where commercial entities uncover various threat actor tools, techniques, and capabilities. In these instances, and in that exact moment, threat actors lose their competitive advantage to send a phishing email, execute malware or spyware, or penetrate a network 10. This rapid expansion of discovery causes previously proprietary and sophisticated tools to become more commonplace. 

How does threat actor evolution transcend into the commercial sector?

Foreign adversaries and threat actors must now position themselves with increasingly sophisticated capabilities and further prioritize the use of those capabilities given the higher chance of discovery. What does this exactly mean? This means that as the delta between commercial and government capability continues to decrease, the suite of tools and capabilities of non-government foreign adversaries and threat actors will increase in sophistication, incentivizing foreign government threat actors to innovate and reprioritize their efforts given the noisy digital battlefield.

In both Stuxnet and the 2016 U.S. presidential elections, threat actor capabilities, TTP, and MO eventually transcended into the commercial space. This is critical to recognize because each time this type of activity occurs, the commercial world enhances their capabilities and foreign adversaries and threat actors lose a capability. Ultimately, foreign adversaries and threat actors are required to evolve and change their TTPs, MO, and capabilities as commercial entities attempt to predict where threat actor behaviors will trend11

Why is cybersecurity specific to space more important than ever? 

Security is always second to innovation. This dynamic must change in order to proactively protect infrastructure, institutions, and processes across industry and government. This means that companies must prioritize cybersecurity from inception and leverage best practices when building their solutions. This is especially important because space will be a domain with new types of infrastructure that foreign adversaries and threat actors can manipulate to advance their own agenda. With each commercial iteration of technology improvement, foreign adversaries and threat actors increase the number of ways to launch their capabilities in their proverbial toolbox.

Foreign adversaries and threat actors continually hunt for pain points to identify and manipulate. This is no different with space. As such, implementing a robust security posture will serve multiple purposes. Firstly, robust security will help ensure that when a space infrastructure element is compromised, the damage is limited. Secondly, robust security will limit the foreign adversaries’ ability to utilize space infrastructure for covert and/or clandestine operations. Thirdly, more intentional security protections will help prevent threat actors from profiteering and using space infrastructure for nefarious purposes, including ransomware, spyware, and espionage.

We are currently at a critical juncture to maintain a competitive advantage where, unlike before (e.g., pre-Stuxnet and preceding the 2016 US presidential election), we can leverage learned historical lessons to implement cybersecurity postures from inception for space-based technologies to prevent nefarious activities. 

How can we ensure the proper cybersecurity practices and standards are implemented to support innovation while balancing protection in space? 

There are two key constant themes that have emerged over the past two decades as government issues transcended into the commercial arena. One, there is a lack of true partnership between the industry and government, which leads to breakdowns in communication and a lack of fulsome insight. Two, there is a tremendous body of academic research on cybersecurity practices and standards with solutions that have not yet been implemented. This post identifies three primary ways to ensure the proper cybersecurity practices and standards are implemented to support innovation while balancing protection in space.

• Lead by example. As new technologies are developed and advances in space infrastructure occur, the individuals at the helm need to lead by example. Establishing sound cybersecurity practices from inception and demonstrating a level of responsibility commensurate with the potential impact of these technologies is essential. Time and time again, major corporations and companies have been seen leading by negative example with mixed up priorities. Obviously, profits are a significant factor. However, companies now more than ever need to manage risk both from a proactive and reactive posture. Complex infrastructures, such as that for space, include too many shared dependencies that risk security, and therefore profit, for all industry and government entities; as such, a more collaborative, community-based approach is required. 

• Anticipate through assessment. Augmented intelligence12 is a growing expectation in the AI/ML field. To overcome challenges resulting from increasing amounts of data, subjectivity, and confirmation biases due to the human condition, and foreign adversary and threat actors continually evolving, the domestic posture needs to shift to anticipation through assessment. Studying foreign adversaries’ and threat actors’ past tendencies and histories illuminate which indicators to monitor to proactively protect critical assets and infrastructure. Space is no different. When looking at space as involving a new type of infrastructure to deliver services, it will inherently have multiple points threat actors attempt to exploit. 

• Quick to cauterize. The final piece is to accept that an attack or penetration is only a matter of time, and no company is immune.  That said, it boils down to how quickly malicious activity can be detected; the quality and confidence of the data used to identify indicators to monitor; the capacity to conduct root cause analysis; and the ability to swiftly cauterize attacks and limit blowback. This is more of a mindset and realistic expectation to maintain. 

What about space ethics? 

Regulation is always second to innovation, and following regulation is ethics. Ethics specific to space might not be developed in a realistic timeframe unless a significant event occurs. With that said, there have been two previous moments in time where government issues transcended into the commercial space overnight, as well as past lessons learned can be used to inform the proper way to secure space infrastructure in a robust manner. There are a few foundational assumptions that the U.S.  needs to make to support the development of a system of principles and rules regarding space behaviors. 

Both threat actors and foreign adversaries abide by their own rules and only play nicely when the outcome benefits their own self-driven interests. These same entities also leverage different types of infrastructure, including space, in illegal ways. These two assumptions will help determine that those who live within the letter of the law develop a standard set of norms specific to space to not only operate soundly, but also ensure a robust security posture exists to protect from malicious intent and activity. 

Conclusion

Space introduces new types of infrastructure, new types of vehicles to deliver information, new pathways to technological advancements, and new needs to support innovation. Furthermore, space as a government issue has not transcended fully into the commercial arena yet, meaning a significant catalyst has not yet forced the hand of commercial entities to change their current security postures. As we’ve seen with Stuxnet and the 2016 US presidential election, it took a significant event for commercial entities to reevaluate the importance of cyber and information and influence activity, issues the government prioritizes every day. Space is also one of those priorities. Since space exploration first began, space is, and will always be, a race to the finish. Who will get to the moon first? Who will get to Mars first? Who will colonize space first? 

The U.S. is proactively postured to develop and implement innovative techniques based on cybersecurity best practices to protect this new type of infrastructure. Foreign adversaries and threat actors will use space as another means to advance their self-interests. In order to protect national interests, stakeholders will need to prioritize cybersecurity from inception and anticipate through assessment understanding past practices, monitoring key indicators, and continually maintaining a competitive advantage. 

The views expressed in this article are based on the experiences of the individual authors and do not necessarily represent those of the Atlantic Council or the authors’ organizational affiliations.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

2    Manheim, J., 2011. Strategy in information and influence campaigns. New York: Routledge

The post Getting ahead of the next catalyst: A new paradigm for cybersecurity in the space domain appeared first on Atlantic Council.

]]>
Reimagining a just society pt. 5: “Is this working as intended?” — Global trends amid contested futures https://www.atlanticcouncil.org/blogs/geotech-cues/reimagining-a-just-society-pt-5-is-this-working-as-intended-global-trends-amid-contested-futures/ Wed, 25 Aug 2021 15:02:25 +0000 https://www.atlanticcouncil.org/?p=422785 The question of ‘is this working as intended’ is applicable to contemporary concepts of national and international security as well as of economic value, growth, and development. Given how our world is being reshaped by new technologies, data capabilities, and geopolitics, leaders in both the public and private sector need to pause and consider if governance and geopolitics in today’s world are actually working – or not.

The post Reimagining a just society pt. 5: “Is this working as intended?” — Global trends amid contested futures appeared first on Atlantic Council.

]]>

A hundred-foot tall, twenty-ton Chinese rocket recently crashed into the ocean in an uncontrolled descent to Earth. It just as easily could have landed in the middle of a city. Before its impact, a major news anchor  asked  a space expert: “Is this working as intended?” 

This question of ‘is this working as intended’ is applicable to contemporary concepts of national and international security as well as of economic value, growth, and development. Given how our world is being reshaped by new technologies, data capabilities, and geopolitics, leaders in both the public and private sector need to pause and consider if governance and geopolitics in today’s world are actually working – or not. Such reflection seems especially necessary after reading the recently released Global Trends 2040: A More Contested World  document, part of a series of future-oriented scenario-based global outlooks produced by the US National Intelligence Council every four years. This question seems all the more urgent in light of the last seventy-plus years of investment in all types of military equipment and operations, wars, and other conflicts—epitomized by the US involvement for twenty years in, and current withdrawal from the war in Afghanistan—as well as international security and development. 

Public commentary on the latest Global Trends report has focused on its grim outlook for humanity, here, here, and here, noting the report’s emphasis on demographic, economic, climatic, and technology trends. Such commentary generally fails, however, to reflect on how humanity got here or asks what is to be done except occasionally recommending a more “anticipatory” approach to governance. In one such commentary, the New York Times editorial board recommended that “President Biden can…be the one to recognize that an increasingly complex, volatile and unpredictable world requires a serious and coherent mechanism for anticipating and preparing for what lies over that dark horizon.”  Certainly, that is necessary but if that is the extent of humanity’s purpose — merely to document and prepare for the purportedly coming derailment of human society and to harness what remains of a nation’s security in its defense against the darkness — it sounds medieval in its fatalism. Without proactive and collective engagement on changing the paradigm that contributed to the current dangers, it is as if we are living in an age of pre-science ignorance. Understanding what that paradigm is and has been, and how it evolved, is necessary. 

The pandemic itself, of course, reminds us of the costs of lack of preparedness on the parts of individual nations and multilateral systems but it does much more than that. It points out pathologies in the way our modern economy works. It has focused a spotlight, for instance, on societal inequities, systemic racism, and other vulnerabilities that continue to make the pandemic’s effects so much worse particularly in societies already weakened by inadequate social protections, polarized by social media-fueled disinformation, and harmed by poor governance and public health communication.

This certainly describes the US experience of the pandemic in its first year, with a COVID-19 death toll  now estimated  in one study to be over 900,000 Americans. Now, more than a year into the pandemic, COVID-19 has ravaged India and neighboring countries as well as Indonesia and others, causing suffering and death on an unimaginable scale. More recently, the new Delta variant also has devastated many US communities that generally have lower vaccination rates. In India, the government’s disregard for public health care has contributed to what one  informed observer, Indian author, and environmental and human rights activist Arundhati Roy, has called “a crime against humanity.” At the peak of the latest COVID-19 surge, thousands of people in India died due to lack of oxygen; hospital directors  took to social media to plead for oxygen assistance.

The pandemic is a harbinger of the mounting human toll of social and economic inequities, poor governance, lack of foresight, and social media-fueled disinformation

AP Photo/Rajanish Kakade

In the midst of this unprecedented global disaster, the Global Trends 2040 report clearly foresees a world of continued hardships and warns of the inadequacy of existing systems and models to deal with them. In the contested world foreseen in the report, the leading edge of mankind’s global equivalent of uncontrolled rocket debris — such as climate change, COVID-19 pandemic-aggravated inequities and inequality, and unevenly distributed benefits of technology — is expected to impact the developing world first and hardest. The report notes that the most effective states in this contested world will be those that “can build societal consensus and trust toward collective action and harness the relative expertise, capabilities, and relationships of nonstate actors to complement state capability.”   

The report, furthermore, raises the requirement — without actually prescribing it (policy prescriptions are outside of the Intelligence Community’s legal role) —for a dramatic departure from past practices, even as parts of the report declare that certain underlying factors, such as within-country economic inequality, are “here to stay”.  On the issue of economic inequality, the report explains: “A number of structural causes combined to contribute to this growing inequality, including technological advancements that favored advanced educations and specialized skills while automating low-skill jobs; the outsourcing of many jobs and industries to developing economies; and an ideological shift toward market-driven solutions and away from redistributive, government policies.”  From this, the assumption might be that there are no plausible scenarios in which policies are implemented to correct for this persistent and growing inequality—inequality that has been shown by research elsewhere, such as in Thomas Piketty’s latest book, Capital and Ideology, to stem from political-ideological policy choices. Piketty writes that “inequality is neither economic nor technological; it is ideological and political.” Piketty’s research into the history of inequality shows that society’s conception of social justice and economic fairness shape the legal, fiscal, educational, and political systems that people choose to adopt. Markets, competition, profits and wages, tax havens, and competitiveness are examples of “social and historical constructs” that would not exist except for political-ideological policy choices.  

Piketty maintains that the rise of inequality, along with global warming, is one of the principal challenges confronting the world today; a political-ideological refusal to take the issue of inequality seriously, particularly when it comes to wealth inequality, will prevent effective action to mitigate and adapt to climate change effects. 

To devise sensible policy responses to the warnings contained in the Global Trends report, it is necessary to first understand the history of how humankind got here (such as Piketty does in his latest book on the topic of inequality). The Global Trends report describes man-made problems, such as human-induced climate change, widening inequality, and even the pandemic’s impacts, so it follows that certain policy approaches, assumptions, and values led to this state of affairs and that new policy instruments, ideas, and institutional capacities are needed to abate, if not even reverse, its worst potential effects. This in turn would lead us into the area of what is valued as a human species and whether, the priorities and policies put in place, have led to this more “contested world” and the global public health and climate emergencies. This is a conversation that this report, among many others, implicitly makes urgent. 

The  Global Trends  document provides much food for thought relevant to this “Reimagining a Just Society” series. As a former intelligence analyst with a  role in the origins  of the Global Trends series, I’ve focused on several takeaways from the report that bear emphasis in this and subsequent blogposts as this series transitions to considerations of what is being done and what can be done to address global challenges on this scale. (The report is about 150 pages long, so the following discussion is not a comprehensive treatment of its findings.) 

Here are some key takeaways from the report: 

  • The world is facing  shared global challenges  “that often lack a direct human agent.” This means, according to the contributors to  Global Trends 2040,  that “national security will require not only defending against armies and arsenals but also withstanding and adapting to these global challenges.” These shared global challenges include diseases, such as the ongoing pandemic, climate change, technological disruption, and financial crises. The current international system is poorly suited to dealing with these global challenges, according to the report. 
  • The world is facing  disequilibrium  as ‘the scale of transnational challenges, and the emerging implications of fragmentation, are exceeding the capacity of existing systems and structures.”  The report’s authors warn of a fractured international system that is more competitive and fraught with a greater risk of conflict, despite the existence of shared global challenges. 
  • Interactions among these global trends “are likely to produce  greater contestation at all levels  than has been seen since the end of the Cold War, reflecting different ideologies as well as contrasting views on the most effective way to organize society and tackle emerging challenges,” according to the report’s authors. 
  • The report notes, there is a “growing mismatch between what publics need and expect and what governments can and will deliver.”  This “widening gap” portends more volatility, erosion of democracy and expanding roles for alternative providers of governance.”  
  • The COVID-19 pandemic has brought global health issues into sharp relief,  according to the report. Are the pandemic’s disruptions temporary,  or could they unleash new forces to shape the future? The pandemic “is slowing and possibly reversing some longstanding trends in human development, especially the reduction of poverty and disease, and closing gender inequality gaps.” 
  • Challenges related to climate change impacts, including extreme events that become more intense and frequent, will make it difficult for some societies to “recover from one event before the next one hits.” 
  • “Current international law and cooperative bodies are increasingly mismatched to global climate change challenges,” says  the report. The report’s authors note, as previous blog posts in this series also have done, that international refugee law “does not account for people displaced by climate change effects.” 

The Global Trends report provides five different scenarios of potential global futures allegedly out to 2040 but underestimates the speed and scale at which climate change impacts already are occurring. This shortcoming affects all the scenarios including one that, despite its title, might be seen by some readers as ultimately a hopeful one; in addition to the clearly hopeful “Renaissance of Democracies” scenario, is the “Tragedy and Mobilization” scenario, which can be seen as ultimately hopeful. This scenario, which is set in 2040, envisions human tragedy of the 2030s on a scale that galvanizes international action on climate change.

For now, one way of imagining the impacts of climate change impacts on more traditional considerations of “great power” rivalries and realpolitik is to witness how much the pandemic’s effects already have affected international politics and diplomacy, right down to closed consular offices and borders. Climate change impacts in the 20-year span covered by this report are likely to be much more disruptive which makes it curious that a  “Tragedy and Mobilization” scenario is not one imagined for the 2020s. Presently we have all the information we need, along with the humanitarian, economic, and national security disaster of an ongoing pandemic, to spur us to invest in needed new policies and forms of cooperation to mitigate and adapt to climate change and be better prepared for future pandemics. These issues will be the subject of the next posts in this blog post series. 

The  Global Trends 2040  report does not suggest that the world must wait until 2040 to take remedial action such as that envisioned in its “Tragedy and Mobilization” scenario, but it can be read to imply that such action is not likely sooner. Some might call this “kicking the can down the road” although that is not necessarily intended.  The report is focused on 2040, not the 2020s after all. The question to ask is why not aim for a brighter future today, when there are still more options and time available to avert the worst-case climate disruption scenarios, than to bank on currently envisioned technologies that are unproven or not clearly scalable in time? The Intelligence Community has done its job, however, and such a question is best addressed to policymakers and citizens of all countries.

Humanity would be better served to treat the future less as an out-of-control twenty-ton rocket threatening our lives and more like something to be imagined, planned for, and worked on in different ways and through different systems, with appreciation for historical root causes and lessons learned across nations, in time to avert worst-case outcomes. This is what governance means in the 21st century. 

Previous installment

GeoTech Cues

Sep 7, 2021

Reimagining a just society pt. 5: “Is this working as intended?” — Global trends amid contested futures

By Carol Dumaine

The question of ‘is this working as intended’ is applicable to contemporary concepts of national and international security as well as of economic value, growth, and development. Given how our world is being reshaped by new technologies, data capabilities, and geopolitics, leaders in both the public and private sector need to pause and consider if governance and geopolitics in today’s world are actually working – or not.

Climate Change & Climate Action Security & Defense

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Reimagining a just society pt. 5: “Is this working as intended?” — Global trends amid contested futures appeared first on Atlantic Council.

]]>
The case for a financial digital asset framework for cryptocurrencies https://www.atlanticcouncil.org/blogs/geotech-cues/the-case-for-a-financial-digital-asset-framework-for-cryptocurrencies/ Tue, 24 Aug 2021 15:45:12 +0000 https://www.atlanticcouncil.org/?p=422384 Clear jurisdictional boundaries, legislation on core principles of financial digital assets from Congress, and a flexible regulatory policy are all crucial for the effective governance of digital assets.

The post The case for a financial digital asset framework for cryptocurrencies appeared first on Atlantic Council.

]]>
The Atlantic Council’s GeoTech and GeoEconomics Centers analyze the United States regulatory landscape of decentralized financial digital assets and call for the U.S. Congress and regulators to provide more clarity regarding cryptocurrencies.

Current regulatory uncertainty around novel decentralized financial technologies such as cryptocurrencies stifles innovation, fails to protect consumers, and consequently weakens the United States’ standing in the world of global payments. To protect consumers and promote future financial innovation, Congress should:

  1. Define federal agency jurisdiction based on the specific technology and functionality of decentralized financial digital assets while accounting for the decentralized nature of crypto development;
  2. Provide guidance through legislation in key areas including governance, trade requirements, disclosure agreements, and cybersecurity standards; 
  3. Incorporate flexible regulatory policy to account for the evolution of financial digital assets.

Clear jurisdictional boundaries, legislation on core principles of financial digital assets from Congress, and a flexible regulatory policy are all crucial for the effective governance of digital assets.

Photo by XPS, Unsplash

Introduction: Regulatory Uncertainty  

The United States regulation of emerging digital financial technologies is in a state of uncertainty. This regulatory ambiguity was highlighted in the recent crypto tax provision in the Biden administration’s infrastructure bill, which included overly broad and vague language about the design of cryptocurrencies. Specifically, the language opened up a scenario in which cryptocurrency validators, miners, and software developers are unnecessarily required to report financial information to the government. A proposed amendment to clarify the language failed to pass, and as a result, greater clarity will need to be provided by the U.S. Department of Treasury and the U.S. House of Representatives. The provision exemplifies how the current environment stifles innovation and places consumers at unnecessary risk by attempting to integrate digital assets into a financial framework based on outdated technologies and functions. 

This article will focus on decentralized financial digital assets built on distributed ledger technology, primarily cryptocurrencies, and the surrounding regulatory confusion. Financial digital assets refer to the way money can be stored, transmitted, and owned online. The initial challenge in regulating financial digital assets, as former Commodity Futures Trading Commission (CFTC) chairman Heath Tarbert acknowledged, is determining if a financial instrument is a security or commodity. This is particularly challenging given the decentralized nature of cryptocurrency. Current statutes and laws cannot determine which regulatory authorities have jurisdiction over digital assets, nor what the aim of their regulation should be. Still, a blanket crypto ban is no solution. Clear jurisdictional boundaries, legislation on core principles of financial digital assets from Congress, and a flexible regulatory policy are all crucial for the effective governance of digital assets.

However, three fundamental problems permeate the financial digital asset landscape. First, the lack of consistent regulation and agency oversight regarding disclosure requirements, consumer protection, and insider trading creates informational disadvantages for ordinary traders and developers. Second, uncoordinated action by federal agencies cannot address national security, tax enforcement, privacy, and other democratic values that should be embedded and protected within the U.S. financial digital asset system. Third, regulatory confusion surrounding digital assets has slowed down innovation. Clarifying the digital asset framework could help the United States maintain its global leadership in blockchain technology innovation and consumer protection. 

Photo by XPS, Unsplash

Current Financial Digital Asset Framework 

The current regulatory landscape struggles to implement and manage the decentralized structure essential for the use of cryptocurrencies. Traditionally, regulation of financial digital assets is centered around the Howey Test, which was established in 1946 by the U.S. Supreme Court. The key aspect of the Howey Test — and primary source of confusion in managing digital assets — is determining whether a transaction qualifies as an investment in which one party expects to receive future profits. Is the transaction an “investment contract,” and therefore a security? The Howey Test was created to manage securities that do not neatly fit into listed financial instruments. The Howey test, however, fails to distinguish between attributes of decentralized financial digital assets and may incorrectly categorize many decentralized assets as securities.  

To help interpret the Howey Test, agencies consider how managers and issuers of the financial digital assets sell the product in question. Financial digital assets are often either centralized, with clear managers and issuers, or decentralized, with an asset that is completely independent. Decentralization, however, is not binary; it requires completing a step-by-step process that disintermediates the managers and issuers from control. If, in the process of decentralization, managers and issuers can still influence the success of the financial digital asset, the product could be classified as a security under the Howey Test. The difficulty in determining what constitutes a “sufficiently decentralized asset” has led to many regulatory problems. Most notably, the Securities Exchange Commission’s (SEC) ongoing lawsuit against Ripple. Additionally, given that blockchains are open-source technology, it is sometimes unclear who is influencing the success of the digital asset. If it avoids responsibility for creating new digital asset-specific rules, the SEC will face continued regulatory problems around decentralization in the future. 

As the SEC lawsuit against Ripple demonstrates, the regulatory environment is geared towards reactive enforcement instead of proactive digital asset categorization. Most guidance comes in the form of singular enforcement settlements that provide temporary solutions. The SEC’s 75 enforcement actions against cryptocurrencies between 2013 and 2020 highlight how federal agencies have relied on precedent instead of top-down policymaking. Federal agencies’ proactive guidance addresses smaller and more individualized scenarios, which often come from their research hubs, press releases, and solicitations for public input rather than through official rulemaking. These tools and initiatives indicate what the agencies are looking into for regulation and their future research without deadlines for implementation.   
Finally, while interagency collaboration is limited at present, it is slowly but surely increasing. For example, the SEC, Financial Crimes Enforcement Network (FinCEN), and CFTC issued a joint statement in 2019 to clarify anti-money laundering (AML) and countering financing terrorism regulations. The White House is also beginning to establish joint task forces between agencies including the SEC and CFTC to explore new territory for financial digital asset jurisdiction. As federal agencies lack established processes to collaborate on financial digital assets, future joint agency work will likely follow the pace set by the White House or Congress – making their initiatives essential.

Photo by Executium on Unsplash

Recommendations to create a digital asset framework for decentralized technology 

Given current regulatory failures to address the evolving decentralized technology landscape, regulators and policymakers must first clearly categorize digital assets. Congress should demarcate agencies’ jurisdiction of financial digital assets based on specific attributes and functionalities. Managing financial digital assets varies depending on factors including whether a transaction is fully decentralized, how it is performed, and whether it occurs on a permissionless platform, among other criteria. To prevent vague regulation, policymakers must understand how design features change across financial tools built on blockchain technology (e.g. cryptocurrencies, stablecoins, decentralized finance). Once clear digital asset categories are created, federal agencies will be able to better direct enforcement towards specific technologies’ design and functionality.  

Second, Congress should provide guidance for the principles of decentralized technology by establishing core principles in areas including cybersecurity, governance, informational disclosures, trade requirements, and risk management, as well as AML and know-your customer laws (KYC). Agencies should then determine how to best implement the core principles proposed by Congress to promote values of transparency, consumer protection, innovation, and accessibility relevant to digital assets. Congress’s legislation for crowdfunding and the futures industry provides a good template moving forward. By establishing overarching legislation, Congress builds a strong base for agencies to implement effective regulation and enforcement strategies. This step requires Congress to gain a better understanding of cryptocurrency, its design, and instances of use—an ongoing process

Lastly, policymakers should design a flexible regulatory framework accounting for rapid technological innovation. This framework should involve regulatory leniency for early-stage developments. A rigid regulatory framework will be unable to keep up with new and unforeseen technology models—resulting in the same challenges faced today. For example, it is currently difficult to balance the tradeoffs between ease of use and user safety. Oversight of new technology is inherently difficult. However, as we tackle emerging issues such as crypto exchanges, decentralized finance, custodianship, digital asset exchange-traded funds (ETFs), and stablecoins, we must look towards the solution that maintains the integrity of the technology while also mitigating harm done to consumers. Additionally, policymakers should refrain from implementing policies that promote certain decentralized technologies over others (e.g. the initial Warner-Sinema-Portman amendment to the infrastructure bill). Effective regulation is essential because, while some countries and individuals have sought to outright ban cryptocurrency, this fails to appreciate the wider applications of the technology beyond serving as a store of value, unit of account, and medium of exchange. A sweeping ban of decentralized technology without substantiated reasoning in the U.S. will hinder financial innovation for the decades to come.

Photo by Andrew Haimerl, Unsplash

Fostering a robust and competitive digital asset market 

Clear regulation will create a more competitive digital asset environment that can provide better services to businesses and consumers, and promote financial inclusion. Few companies have the wherewithal to be compliant in multiple states, which maintain varying degrees of regulatory authority — to say nothing of variance in federal and international regulation. As a result, even when companies attempt regulatory compliance in good faith, they take on an inordinate amount of risk of breaking the law. Additionally, it is difficult to identify areas of potential misuse if the regulatory standards are not clear or consistent. This leaves consumers vulnerable and may prevent their entering the digital asset market altogether. A well-defined regulatory environment, alternatively, will facilitate consumer engagement and transparent company policies. 

Most critically, coherent digital asset regulation will enable the United States to guide global standards for blockchain innovation on the international stage at a critical point in their development. Nations are actively constructing their digital asset framework to boost their economy or advance geopolitical objectives. 81 countries, for example, are exploring a central bank digital currency. Given the lack of regulatory clarity in the United States, many creators of digital assets have avoided the U.S market altogether. To remain a hotspot for blockchain innovation, the United States should expand digital asset categorization and create a principled framework. The result: a more robust and competitive digital ecosystem that features both centralized and decentralized options for businesses and individuals in the United States and on the global stage. 

Matthew Goodman is currently an intern at the Atlantic Council GeoTech Center. He is a junior at Middlebury College where he studies Economics and Philosophy. 

Nikhil Raghuveera is a nonresident fellow at the Geotech and GeoEconomic Centers. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of any affiliated organization.

The post The case for a financial digital asset framework for cryptocurrencies appeared first on Atlantic Council.

]]>
Contextualizing COVID-19 and its implications for the future of work https://www.atlanticcouncil.org/blogs/geotech-cues/telework-and-worker-well-being/ Tue, 10 Aug 2021 20:57:41 +0000 https://www.atlanticcouncil.org/?p=419854 As workers around the world adapted to a new normal amid a public health crisis, remote work took on a new meaning. However, questions about how to structure long-term telework arrangements remain. The current landscape of data reveals a tenuous relationship between well-being and telework. To secure a brighter future for our labor force, we must continue developing worker-centered technology policy and doing research on how to improve life at work.

The post Contextualizing COVID-19 and its implications for the future of work appeared first on Atlantic Council.

]]>
It has been more than fifteen months since the World Health Organization declared COVID-19 a pandemic on March 11, 2020. The virus’ unprecedented infection rate, coupled with a slow global response, resulted in mandatory state lockdowns and restrictions.

In the months since the start of the pandemic, states and employers had to quickly adopt new strategies to maintain production efficiency and uphold worker safety. Overwhelmingly, the response was to move work online. COVID-19 has forever altered the notion that work must be done in the office. According to Gartner, remote work (also known as work from home [WFH] or telecommuting) is “a type of flexible working arrangement that allows an employee to work from a remote location outside of corporate offices.” Of course, remote work isn’t new. According to Pew Research, “one-in-five say they worked from home all (12 percent) or most (7 percent) of the time before the coronavirus outbreak, while 18 percent worked from home some of the time.” However, because of the pandemic, 42 percent of the US labor force worked from home in 2020. Indeed, this trend is not looking transient.

This new arrangement has been adopted by many large corporations. The topic of WFH and its effects on worker well-being has been of particular interest to corporations employing knowledge workers before the pandemic – with mixed conclusions reached by major tech companies. In 2013, Yahoo CEO Marissa Mayer famously banned telework to foster company cohesion. Meanwhile, other Silicon Valley giants increased their capacity for remote work. Now that a significant portion of the workforce has shifted online, new research could provide a more conclusive understanding of how remote work arrangements impact worker well-being.

As workers around the world adapted to a new normal amid a public health crisis, remote work took on a new meaning. However, questions about how to structure long-term telework arrangements remain. The current landscape of data reveals a tenuous relationship between well-being and telework. To secure a brighter future for our labor force, we must continue developing worker-centered technology policy and researching how to improve life at work.

Photo by XPS, Unsplash

The current landscape of research 

Due to the unique circumstances of each nation and how they handled COVID-19, the current landscape of data reveals a tenuous relationship between well-being and telework. Most surveys of worker well-being conducted during the pandemic were not cross-sectional, and it was difficult to isolate the causality of well-being amidst a global public health crisis. Additionally, before the pandemic, most of the literature on telework’s impacts was not worker-centered; instead, most researchers opted to examine its effects on worker productivity and organizational outcomes. For example, Martin & MacDonnell conducted a meta-analysis in 2012 on the perceptions of telework and organizational outcomes with positive findings (e.g., telework increased worker productivity). On its own, one might speculate that increased productivity is associated with better worker well-being, but they are not established as causally-related variables.

In 2019, one pre-pandemic study by Charalampous et al. analyzed the relationship between telework and worker well-being. They found a “greater consensus towards a beneficial impact” but overall inconclusive results, citing “social and professional isolation and perceived threats in professional advancement to be significant negative impacts of WFH.” The authors of this study engaged in a systematic literature review of sixty-three studies “employing quantitative, qualitative, and mixed-method designs” to gain a deeper understanding of the association between remote work and well-being in knowledge workers. Overall, they advocated for more advanced studies to be conducted on this relationship, such as looking at “longitudinal data, diary studies, as well as moderating and mediating relationships”.

Moving into the pandemic, studies continue to illustrate the complicated nature of WFH’s effects on well-being. In contrast to Charalampous et al., Yijing et al.’s 2020 research illustrates a negative relationship between telework and the “social, behavioral and physical factors on well-being of office workstation users during COVID-19.” The results from their anonymized survey indicated a blanket decrease in overall physical and mental well-being and an increase in physical and mental health issues during the first few months of the pandemic. Additionally, Yijing et al. highlighted that lifestyle factors, “such as physical activity…eating habits,” and social aspects of WFH, “including who is living in the home, distractions while working, and communication with co-workers” were the primary predictors of decreased well-being. Regarding specific population impacts, this research also found that female workers and laborers making salaries of less than $100,000 were disproportionately more likely to have “two or more new physical and mental issues” transitioning to the new WFH arrangement. The mechanism behind these results will be discussed below. While these findings are compelling, it is important to note that the pandemic introduces numerous confounding factors that affect well-being. Cross-sectional surveys are prone to skew.

Currently, there is one longitudinal study released by Savolainen et al. that investigates the “psychological, situational, and socio-demographic predictors of COVID-19 anxiety among Finnish workers.” The study recruited participants from a previous longitudinal “Social Media at Work in Finland” survey, which was nationally representative and conducted before the pandemic. Savolainen et al. matched the sample of their study with the general Finnish working population and stratified participants according to their occupational fields. Overall, their results determined “loneliness, psychological distress, technostress, and neuroticism” to be “significant psychological predictors of COVID-19 anxiety of workers.” Additionally, Savolainen et al. found that “increased technology use has not entirely been able to maintain or create a meaningful psychological connection to work for communities” for remote workers. Mirroring all the previous studies, Savolainen et al. recommended for more research to be done on the relationship between telework and well-being.

While the findings of all these studies are crucial to understanding the relationship between telework and well-being, researchers are still in the process of coming to a consensus and remain hesitant to commit to a position. This is understandable, given the many competing logics behind remote work and its theoretical impacts on worker well-being.

Photo by XPS, Unsplash

Summary and theoretical mechanisms for remote work’s impacts on well-being 

On one hand, the flexibility factor that provides workers the freedom of dictating their own work-life balance is one of the most compelling arguments for remote work increasing well-being. When employees start working from home, they suddenly have more agency regarding their time. Workers can tailor their work hours on an individual basis and have more time to attend to their personal lives. This flexibility enables workers to see their families more often and fit in appointments and errands more easily. Additionally, workers can work when they feel the most productive since they aren’t confined to a rigid office schedule. This flexibility has been shown to not decrease worker productivity or work hours; in fact, the massive shift to remote work in the first few months of the pandemic revealed an overall increase in hours.

On the other hand, remote workers have reported issues with the blurring of work-life boundaries. Research has dictated that working from home offers no separation between work and leisure time, leading to “enhanced emotional exhaustion” and a “deterioration in healthy lifestyle behaviors.” This negative effect has been speculated to manifest disproportionately between men and women. As Yijing et al. examined, there is robustness to the claim that WFH may be more challenging for women. Women are still more likely to be responsible for household chores and child-rearing. Mothers that work at home are more likely to experience the blurring of work and life boundaries leading to increased pressure. Generally, in a traditional office work setting, commuting offers a change of scenery by default and physically separates professional and home life. However, in a WFH setting, individuals need to take initiative to change their scenery, which has been shown to boost their well-being and mood. Remote workers also have a harder time communicating with coworkers through a virtual medium. It is easier to misread conversational cues through text and video calls. Charalampous et al.‘s study also illustrated that workers were more likely to perceive boundaries to professional advancement in an online setting. Additionally, Savolainen et al. introduced the concept of technostress, which mostly manifests from “back-to-back” virtual meetings, unfamiliarity with technology, and the increased work hours associated with WFH. Finally, the lack of in-person contact with co-workers can lead to isolation and disconnection, which are correlated with increased worker stress and anxiety— marked factors that decrease well-being.

In effect, acknowledging the context of the COVID-19 pandemic further complicates researchers’ ability to draw causality between remote work and well-being. This is because the pandemic introduces new moderating variables, such as fluctuating economic insecurity, isolation, and relationship breakdown caused by the lockdown, as well as increasing mistrust in the government. These new variables worsen and weaken remote work’s theoretical mechanisms on well-being. For example, the positive aspects of a flexible remote job are hampered by economic insecurity. Likewise, relationship breakdown can completely negate the usual benefits of increased family contact provided by remote work. Lastly, compounding mistrust in institutions, specifically in the healthcare sector, is a worrying trend that can promulgate the spread of COVID-19. According to a release from the Interdisciplinary Association of Population Health Sciences, people who mistrust the healthcare sector are more prone to vaccine hesitancy, less likely to report their illnesses, and underutilize healthcare— a dangerous, and perhaps deadly, combination in the context of a pandemic.

The pandemic’s asymmetric impacts on developing countries, frontline workers, and minority populations  

According to a survey conducted on 1,022 American professionals in January 2021, “29 percent of working professionals say they would quit their jobs if they couldn’t continue working remotely.” A Gallup poll similarly reflects this estimate, which found 23 percent of the US workforce eager to keep their remote setting.

Despite the enthusiasm for remote work, some employees, such as those in frontline positions, cannot pivot online. In 2020, estimates between the Occupational Information Network and the Bureau of Labor Statistics show that 34 to 44 percent of Americans had the ability to telework. In western Europe Boeri et al. predicted the “home-based work potential as 24 percent for Italy, 28 percent for France, 29 percent for Germany, 25 percent for Spain, and 31 percent for Sweden and the UK” in the same year. Globally, the & International Labor Organization (ILO) forecasted in a policy brief that 18 percent of workers in 2020 had the means to shift to remote work. Considering that many developing countries are likely to possess remote work infrastructure below this global ILO average, it is undeniable that there is an asymmetry in worker access to remote labor correlated to wealth. Workers in developing countries with less robust information and communication technology infrastructure are likely to experience more amplified negative impacts of being unable to work remotely. In virtually every country, poorer workers are less likely to secure a remote job and more likely to experience job or economic insecurity, especially in the context of the pandemic.

Broadly, many of these frontline workers also tend to be poorer, suggesting that many intersecting factors are negatively affecting their well-being. According to a longitudinal study on 21,874 adults living in England between March 21, 2020, and February 22, 2021, key workers had “consistently higher levels of depressive and anxiety symptoms than non-key workers across the whole of the study period.” These key workers were defined as health and social care workers, teachers, childcare workers, public service workers, and food chain or utility workers. Another longitudinal study done on Polish healthcare workers supported the notion of exacerbated negative health outcomes for essential workers. Overall, frontline worker populations appear to face some of the worst difficulties of the pandemic. 

Across all developed countries, minority populations are more likely to work in frontline occupations. A recent study by Goldman et al. revealed that “greater work exposures likely contribute to a higher prevalence of COVID-19 among Latino and Black adults in the US.” Moreover, due to historic health inequities perpetuated by institutional racism (e.g., forced sterilization and scientific racism), marginalized populations tend to have less trust in the healthcare system. As previously mentioned, this renders minority populations; more susceptible to vaccine hesitancy, healthcare underutilization, and general negative health outcomes. Additionally, as the CDCwrites, ethnic and racial minority communities have had more COVID-19 cases, deaths, and hospitalizations. Further, policies targeted to mitigate the spread of COVID-19 “might cause unintentional harm, such as lost wages, reduced access to services, and increased stress, for some racial and ethnic minority groups.” In other words, due to longstanding inequalities intensified by the pandemic, well-being among minority populations has plummeted. Altogether, asymmetric impacts on specific populations all blur the relationship between telework and well-being. Moving forward, it is important to address systemic inequalities and assess how policies impact different communities in order to close the digital divide and create a diverse and resilient workforce.

Photo by Andrew Haimerl, Unsplash

Policy recommendations from the GeoTech Center and academia 

As workers around the world adapted to a new normal amid a public health crisis, remote work took on a new meaning. It symbolized the deeply intertwined relationship between labor and technology. It signified progress and the future. However, the sweeping shift to new telework arrangements also brought many questions. Is it really good for workers’ health and well-being? Who is sidelined by implementing new technology in the workplace? How can we increase the resilience of our labor supply chains in the event of a new crisis? Is there a golden ratio between in-person and remote work? Currently, the US is at a major inflection point; we must continue developing technology policy and doing research to address these important questions.

The bipartisan report by the Commission on the Geopolitical Impacts of New Technologies and Data includes a seventh chapter titled “The Future of Work,” which explores these questions and supplies recommendations for addressing the future and mitigating disaster as workers brave periods of uncertainty. Some key points from this report are combined with academic literature to provide actionable guidelines for the future of work that are relevant to both government leaders and private sector CEOs, specifically:

  • To mitigate the negative impacts of WFH, employers should prioritize creating an inclusive and empathetic work culture that provides technical and psychological support online. In line with other studies, Savolainen et al. argues that higher organizational support lowers worker anxiety, especially in times of crisis (Savolainen et al., 2021).
  • To alleviate feelings of social isolation during normal working hours, employers can set ‘virtual coffee breaks’ to create a more collaborative work environment (Mostafa, 2021).
  • Human capital development and management data should address projections of the supply and demand for workers according to categories of technical skills, results of the search and hiring process, and how well the employer’s needs were satisfied. The data also should inform how well the training policies provided equitable access to skills training across the workforce. These data should enable analyses of the expected value of different options for skills education and training for workers, the return on the investment of workforce training for businesses, and options for adjusting workforce training policies (GeoTech Commission Report).
  • Expanding on the point above, we ought to acquire data on the labor side. For example, management could implement more longitudinal questionnaires on worker well-being.
  • The United States needs to ensure equitable access to opportunity for the GeoTech Decade ahead. From access to affordable broadband to digital literacy, governments and the private sector need to make significant investments and work together to reduce barriers to full participation in the economy. Ensuring that all people can participate in the GeoTech Decade requires a commitment to equitable access to affordable, high-speed Internet. Millions do not have high-speed broadband, particularly in rural areas. What is more, many with access to high-speed broadband are still unable to afford the high cost of Internet and the devices needed to access it. Lack of access and affordability perpetuates systemic inequities (GeoTech Commission Report).
  • Digital literacy— the ability to find, evaluate, utilize, and create information using digital technology is becoming an essential skill for every individual. Digital literacy is an important element in eliminating a digital divide among nations and within a society. It complements affordable, high-speed Internet access by enabling people to develop and communicate local content, to communicate their issues and concerns, and to help others understand the context in which these issues occur (GeoTech Commission Report).
  • In order to foster lifelong learning and digital literacy, managers could encourage employees to sharpen their skills with new learning opportunities and online training through free online professional development webinars and training sessions (Mostafa, 2021). These events can be found through Khan Academy, the Atlantic Council, and many other online platforms.

Matthew Gavieta is a Young Global Professional with the GeoTech Center as well as a rising senior at Cornell University, where he majors in industrial and labor relations and minors in philosophy and law & society. He is most interested in the intersection of law, policy, and technology. He hopes to do work in the field of intellectual property to promote safe, large-scale innovation and creativity.

The post Contextualizing COVID-19 and its implications for the future of work appeared first on Atlantic Council.

]]>
How tech can rebuild public trust in government https://www.atlanticcouncil.org/content-series/economy-of-trust-content-series/how-tech-can-rebuild-public-trust-in-government/ Wed, 28 Jul 2021 19:15:00 +0000 https://www.atlanticcouncil.org/?p=472956 Public fear and mistrust in government have only been exacerbated by the COVID-19 pandemic, especially given its disproportionate impacts on poorer communities. Greater government transparency and trust-inspired communication, as well as education for the greater public good are part of the solution. We must also leverage technology for human priorities, by better enabling citizens and allowing for greater participation.

The post How tech can rebuild public trust in government appeared first on Atlantic Council.

]]>

Editorial

Before the pandemic, the 2020 Edelman Trust Barometer conducted an international study that chronicled trust in the government. The research revealed that “57% of the general population says the government serves the interests of only a few, while 30% say the government serves the interests of everyone.” Public fear and mistrust have only been exacerbated by the COVID-19 pandemic, especially given its disproportionate impacts on poorer communities. According to a recent World Economic Forum article, public trust is becoming increasingly polarized between the rich and poor, with “elites at 68% trust” and the mass population at 52%. Information streams from widely different institutions and sources on the political spectrum are contributing to this split in trust reality. Additionally, businesses have superseded governments as the most trusted institutions (61%) as a result of “developing vaccines in record time while finding new ways to work.” In the middle of a crisis, a lack of access to basic infrastructure has crippled many poor and struggling citizens across developed and developing countries. Global digital infrastructure faces similar challenges— some fear the long-term health effects of 5G deployment; some fear the government is engaging in digital surveillance; some fear that their jobs will be lost in the era of automation.

Paradoxically, according to the Pew Research Center, the notion that the “federal government has a responsibility to provide support and services for all Americans” is rising in popularity. In a recent conference on digital innovation featuring SICPA CCO Digital, Richard Budel, panelists including Francesca Bosco (CyberPeace Institute) suggested that, in order to rebuild trust, a multi-stakeholder and multidimensional human-centered approach from the government and private sector might be needed. Specifically, initiatives to bolster education, build in the architecture of institutions, increase digital literacy, and explain to citizens how data is used. Panelists highlighted the importance of engaged and educated citizens for fostering change within the public sector. In the words of Mr. Arnaud Bernaert, SICPA’s Head of Health Security Solutions, “you have to trust the citizens before they trust you.” Panelists advocated for higher government transparency and better communication. In the cyberspace, governments could be held to an accountability framework. For example, citizens could evaluate if their representatives are sticking to their commitment, enabling the available regulation, or putting forward initiatives for dialogue. Like all relationships, trust between citizens and the government will require a two-way street.

Moreover, the government needs to take a more active role in the data commercialization to make sure that it benefits citizens; this is a difficult feat, considering that would require playing catch-up and drawing regulations around the private information technology sector, which is consistently innovating. The government must tread carefully in the balance between controllership and leadership. After all, overreach in regulation would result in mistrust. We must leverage technology for human priorities, by better enabling citizens and allowing for greater participation. This starts with transparency and trust-inspired communication, as well as education for the greater public good. According to the recent GeoTech Commission Report, public education on trustworthy digital information could be achieved with the proper initiative from U.S. Congress. The Commission proposed that a government grant program led by the NSF could produce a robust curriculum developed by a coalition of select universities. Although the present is bleak, the future looks hopeful.

Sincerely,

Pascal Marmier
Economy of Trust Foundation / SICPA
Dr. David Bray
Atlantic Council GeoTech Center
Borja Prado & Matthew Gavieta
Editors

Get the Economy of Trust newsletter

Sign up to learn about advances in technology and data activities that, through trust and more transparent frameworks, improve nations and sectors alike.

2021 Report Rewind

The Economy of Trust & Digital Crossroads

Smart Partnerships & AgriTech Action

Latest Reseach & Analysis

Fast Thinking

The post How tech can rebuild public trust in government appeared first on Atlantic Council.

]]>
Event | The future of data and AI in space https://www.atlanticcouncil.org/blogs/geotech-cues/event-the-future-of-data-and-ai-in-space-2/ Fri, 16 Jul 2021 20:59:13 +0000 https://www.atlanticcouncil.org/?p=415544 On Wednesday, July 21, at 12:00 p.m. EDT, the GeoTech Center will air a previously recorded event on the future of data and AI in space. The recording will be available on this page. Find the full GeoTech Hour series here. Event description On April 29, 2020, Fredrik Bruhn, Amy Webb, Paul Jurasin, and Anthony […]

The post Event | The future of data and AI in space appeared first on Atlantic Council.

]]>
On Wednesday, July 21, at 12:00 p.m. EDT, the GeoTech Center will air a previously recorded event on the future of data and AI in space. The recording will be available on this page.

Find the full GeoTech Hour series here.

Event description

On April 29, 2020, Fredrik Bruhn, Amy Webb, Paul Jurasin, and Anthony Scriffignano shared their perspectives on how commercial space efforts are progressing to include satellites and other new technologies via advances in data and AI capabilities. 

The discussion highlighted how historical computational capabilities and limited electrical power available to satellites prevented edge computing in space. All data had to be transmitted back to Earth for processing. With advances in both processing as well as performance relative to onboard power capabilities, it is now possible to process petaflops of data in space. These advances and all future advances, change the capabilities of commercial space endeavors and services that can be provided to individuals and organizations around the world.

Featuring

Fredrik Bruhn, PhD
Chief Evangelist Digital Transformation
Unibap

Paul Jurasin
Director, New Programs, Digital Transformation Hub
California Polytechnic State University

Anthony Scriffignano, Ph.D
SVP, Chief Data Scientist
Dun & Bradstreet

Amy Webb
Founder and CEO
Future Today Institute

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

gtc front of a fire engine with the lights glowing

Event Recap

Apr 10, 2020

Event recap | Technology and pandemics: Challenges and opportunities

By Corina LJ DuBois

On April 10, 2020, His Excellency Omar Sultan Al Olama, Minister of State for Artificial Intelligence, United Arab Emirates, shared his perspectives in an event titled “Technology and pandemics: Challenges and opportunities” as part of a live video discussion moderated by Mr. Frederick Kempe, President and CEO of the Atlantic Council.

Civil Society Coronavirus

The post Event | The future of data and AI in space appeared first on Atlantic Council.

]]>
Event | Technology and pandemics: Opportunities and challenges https://www.atlanticcouncil.org/blogs/geotech-cues/event-technology-and-pandemics-challenges-and-opportunities/ Wed, 14 Jul 2021 12:56:00 +0000 https://www.atlanticcouncil.org/?p=410619 On April 10, 2020, H.E. Omar Sultan Al Olama, Minister of State for Artificial Intelligence for the United Arab Emirates, shared his perspectives in an event titled “Technology and pandemics: Challenges and opportunities” as part of a live video discussion moderated by Mr. Frederick Kempe, President and CEO of the Atlantic Council.

The post Event | Technology and pandemics: Opportunities and challenges appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On April 10, 2020, H.E. Omar Sultan Al Olama, Minister of State for Artificial Intelligence for the United Arab Emirates, shared his perspectives in an event titled “Technology and pandemics: challenges and opportunities” as part of a live video discussion moderated by Mr. Frederick Kempe, President, and CEO of the Atlantic Council. This one-hour live discussion included the Minister’s insights on how the current pandemic presents both challenges and opportunities for technological responses. The Minister shared his thoughts on which technologies will deliver on their promised outcomes and which technology and data trends will reshape the world as a result of the pandemic.

Featuring

H.E. Omar Sultan Al Olama
Minister of State for Artificial Intelligence for the United Arab Emirates

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Hosted by

Frederick Kempe
President and CEO
Atlantic Council

Previous episode

gtc front of a fire engine with the lights glowing

Event Recap

Apr 10, 2020

Event recap | Technology and pandemics: Challenges and opportunities

By Corina LJ DuBois

On April 10, 2020, His Excellency Omar Sultan Al Olama, Minister of State for Artificial Intelligence, United Arab Emirates, shared his perspectives in an event titled “Technology and pandemics: Challenges and opportunities” as part of a live video discussion moderated by Mr. Frederick Kempe, President and CEO of the Atlantic Council.

Civil Society Coronavirus

The post Event | Technology and pandemics: Opportunities and challenges appeared first on Atlantic Council.

]]>
Event | Technologies for rebuilding after COVID-19 https://www.atlanticcouncil.org/blogs/geotech-cues/event-technologies-for-rebuilding-after-covid-19/ Wed, 07 Jul 2021 15:00:00 +0000 https://www.atlanticcouncil.org/?p=410599 On April 16, 2020, Dr. David Brin and Dr. Kathryn Newcomer shared perspectives on what technologies, investments, and policy actions could help rebuild from COVID-19 on a global scale as part of a live video discussion.

The post Event | Technologies for rebuilding after COVID-19 appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On April 16, 2020, Dr. David Brin and Dr. Kathryn Newcomer shared perspectives on what technologies, investments, and policy actions could help rebuild from COVID-19 on a global scale as part of a live video discussion. They discussed which technologies and investments show the greatest promise with the rebuilding and recovery from COVID-19 and what policy actions would help us better rebuild locally, nationally, and globally. They also considered the role of transparency, both in the public and private sector, in supporting good governance with the rebuilding and recovery efforts. In addition, the discussion highlighted the the role of countering polarizing misinformation as well as preserving individual privacy during the COVID-19 response and recovery.

Featuring

David Brin, PhD
Owner and Inventor
Epocene Communications

Kathryn Newcomer, PhD
Professor of Public Policy and Public Administration, School of Media and Public Affairs
George Washington University

Hosted by

David Bray, PhD
DirectorGeoTech Center
Atlantic Council

Previous episode

gtc front of a fire engine with the lights glowing

Event Recap

Apr 10, 2020

Event recap | Technology and pandemics: Challenges and opportunities

By Corina LJ DuBois

On April 10, 2020, His Excellency Omar Sultan Al Olama, Minister of State for Artificial Intelligence, United Arab Emirates, shared his perspectives in an event titled “Technology and pandemics: Challenges and opportunities” as part of a live video discussion moderated by Mr. Frederick Kempe, President and CEO of the Atlantic Council.

Civil Society Coronavirus

The post Event | Technologies for rebuilding after COVID-19 appeared first on Atlantic Council.

]]>
Raghuveera with the Turkish Heritage Organization: A discussion on cryptocurrency at the global scale https://www.atlanticcouncil.org/insight-impact/in-the-news/raghuveera-cryptocurrency/ Wed, 30 Jun 2021 16:41:09 +0000 https://www.atlanticcouncil.org/?p=410040 In a recent discussion with with the Turkish Heritage Organization, Nikhil Raghuveera highlights the geopolitical implications of cryptocurrencies and central bank digital currencies (CBDC). He explains the three ways blockchain technology affects foreign policy and global alliances. First, CBDC creates the opportunity for a separate payment system from the US dollar, which provides countries like China more political power and the ability to bypass US sanctions. Second, the lack of a standard regulatory framework around digital assets domestically and internationally exposes consumers to cyberattacks and financial risks. Third, many new applications will build upon existing decentralized financial technologies, which will require new international partnerships and relations. In order to create a more equitable world, Raghuveera advocates for the inclusion of marginalized communities and a broader consideration of stakeholders when creating these new technologies and subsequent regulatory policies.

The post Raghuveera with the Turkish Heritage Organization: A discussion on cryptocurrency at the global scale appeared first on Atlantic Council.

]]>
In a recent discussion with with the Turkish Heritage Organization, nonresident fellow Nikhil Raghuveera highlights the geopolitical implications of cryptocurrencies and central bank digital currencies (CBDCs). He explains the three ways blockchain technology affects foreign policy and global alliances. First, CBDC creates the opportunity for a separate payment system from the US dollar, which provides countries like China more political power and the ability to bypass US sanctions. Second, the lack of a standard regulatory framework around digital assets domestically and internationally exposes consumers to cyberattacks and financial risks. Third, many new applications will build upon existing decentralized financial technologies, which will require new international partnerships and relations. In order to create a more equitable world, Mr. Raghuveera advocates for more inclusion of marginalized communities and a broader consideration of stakeholders when creating new financial technology and subsequent regulatory policies. To listen to the whole discussion, watch the video below.

https://www.youtube.com/watch?v=YnSO8r5BQVQ&ab_channel=TurkishHeritageOrganization

Read more about our expert:

The post Raghuveera with the Turkish Heritage Organization: A discussion on cryptocurrency at the global scale appeared first on Atlantic Council.

]]>
Event | Data trusts and the global COVID-19 response https://www.atlanticcouncil.org/blogs/geotech-cues/event-data-trusts-and-the-global-covid-19-response/ Wed, 23 Jun 2021 16:33:00 +0000 https://www.atlanticcouncil.org/?p=405573 On April 15, 2020, Lord Tim Clement-Jones and Dame Wendy Hall shared their perspectives in a live video discussion titled “Why data trusts could help us better respond and rebuild from COVID-19 globally“ and moderated by David Bray, PhD, Atlantic Council GeoTech Center Director on the role of Data Trusts in the global response to and recovery from COVID-19.

The post Event | Data trusts and the global COVID-19 response appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On April 15, 2020, Lord Tim Clement-Jones and Dame Wendy Hall shared their perspectives in a live video discussion titled Why data trusts could help us better respond and rebuild from COVID-19 globally and moderated by David Bray, PhD, Atlantic Council GeoTech Center Director on the role of Data Trusts in the global response to and recovery from COVID-19.

The hour-long discussion asked key questions: What are data trusts? What roles can Data Trusts play in the global response to COVID-19? What can the United States learn from the United Kingdom’s activities involving data trusts and AI? Most importantly, Lord Clement-Jones, Dame Hall, and Dr. Bray presented actionable steps that Google, Apple, or any other major tech company or coalition could enact to move forward with a data trust initiative to help the world respond to and recover from COVID-19.

Featuring

Lord Tim Clement-Jones
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Wendy Hall, PhD
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

gtc front of a fire engine with the lights glowing

Event Recap

Apr 10, 2020

Event recap | Technology and pandemics: Challenges and opportunities

By Corina LJ DuBois

On April 10, 2020, His Excellency Omar Sultan Al Olama, Minister of State for Artificial Intelligence, United Arab Emirates, shared his perspectives in an event titled “Technology and pandemics: Challenges and opportunities” as part of a live video discussion moderated by Mr. Frederick Kempe, President and CEO of the Atlantic Council.

Civil Society Coronavirus

The post Event | Data trusts and the global COVID-19 response appeared first on Atlantic Council.

]]>
How to secure smart cities through decentralized digital identities https://www.atlanticcouncil.org/blogs/geotech-cues/how-to-secure-smart-cities-through-decentralized-digital-identities/ Wed, 23 Jun 2021 15:20:29 +0000 https://www.atlanticcouncil.org/?p=407381 At the recent G7 meeting in Cornwall, there was a consensus among democratic nations to offer viable alternatives to autocrats and autocratic governments. One of the areas where such an alternative can be clearly demonstrated is in smart cities.

The post How to secure smart cities through decentralized digital identities appeared first on Atlantic Council.

]]>
At the recent G7 meeting in Cornwall, there was a consensus among democratic nations to offer viable alternatives to autocrats and autocratic governments. One of the areas where such an alternative can be clearly demonstrated is in smart cities. There, technologies advanced by autocrats are already establishing an alarming foundation for digital authoritarianism. For example, China is already setting the rules and industrial standards in facial recognition systems, as well as communication protocols for interconnected Internet of Things (IoT) devices. Chinese technology groups, such as Huawei, ZTE Corporation, Alibaba, and others are exporting “safe city” and “smart city” packages to scores of countries around the world, including Europe. Serbia’s capital Belgrade has installed a surveillance camera system that can monitor peoples’ behavior, recognize their faces, identify number plates, and assess whether “suspicious” activity is taking place. 

As 5G networks and IoT systems become the new communication and data layer of future cities, the question of who has control of and access to these networks, as well as the data they carry, acquires strategic significance. Beyond the obvious human rights threats, such as singling out persons for real-time surveillance, there are huge potential security risks too. Authoritarian regimes can gain access to data via back doors, or in extreme cases, activate a “kill switch” that would debilitate a city’s operations, triggering civil unrest. In exchange for operational efficiency, smart city surveillance systems pose the threat of controlling and coercing societies towards accepting illiberal authoritarianism while exposing themselves to security threats by foreign powers. If that’s what autocrats have to offer, how should democracies respond? 

Democracies should infuse the digital infrastructures of the future with democratic values and protect individual freedoms and liberties. In other words, they must ensure that the industrial standards and communication protocols of smart cities prevent citizen surveillance and security breaches “by design.” To do this, democracies must begin rethinking digital identities. A “digital identity” is the entity created each time someone registers an account in order to access a digital service by a Provider (e.g. a bank, a social media site, the government, etc.). Presently, each individual has multiple digital identities, with various account names and passwords, all stored in the various Service Providers’ infrastructures. This poses at least two major problems: first, data is siloed; second, data is exposed to security breaches whenever the centralized infrastructures of the Service Providers, which physically store and manage the digital identities, are hacked. Digital identities are applicable to non-humans too: robots, drones, or other smart devices (such as smart meters and sensors) also need to access services in order to connect to the IoT. 

A “decentralized” digital identity framework reverses the relationship of any user with multiple digital identities scattered over many service providers by putting users at the center. The user creates a unique digital identity that is owned by them. They then ask and receive credentials that prove their identity from various issuers, such as their government, university, or employer. For example, a university can send a credential proving that the user holds a specific degree. These credentials are issued using a unique public and private key and are verified using a public blockchain, creating the “decentralization” aspect of the new framework. Rather than a central authority managing the user’s identity, a decentralized, blockchain-based ledger would act as the trusted source of truth. The credentials themselves do not need to be stored on the blockchain. They can be stored in a “digital wallet”, i.e. a private data storage accessed only by the unique, personal, digital identity of the user. This data wallet can be hosted and secured anywhere (for example, in the user’s own computer or a cloud service). Each time the user needs to interact with a digital service, they do not need to expose all their personal data. For example, if the service requires that the user be above a certain age, the blockchain can verify this in a trusted interaction without the need to supply additional proof, such as a copy of a passport or a driver’s licence. By separating and decentralizing sensitive personal data from centralized digital services we not only protect privacy but also vastly improve data security. There is no longer a central “honey pot” of personal data for hackers to break into. Just like humans, smart devices can also have their digital identities decentralized and verified via a public blockchain. 

Decentralized digital identity frameworks could be the key to avoiding an Orwellian dystopia for the smart cities of the future and to establishing a liberal, democratic alternative to the authoritarian model, protecting privacy and enhancing security. A grand coalition of democratic governments, cloud providers, mobile network operators, chip and IoT component manufacturers, academic researchers, and entrepreneurs is necessary to establish the standards for commercial, decentralized identity frameworks. 

Logo of the Commission on the Geopolitical Impacts of New Technologies and Data. Includes an 8-point compass rose in the middle with the words "Be Bold. Be Brave. Be Benevolent" at the bottom.

GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post How to secure smart cities through decentralized digital identities appeared first on Atlantic Council.

]]>
Event recap | Opportunities and challenges presented by EO 14028 https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-eo-14028/ Mon, 21 Jun 2021 11:40:50 +0000 https://www.atlanticcouncil.org/?p=406846 An in-depth discussion about the opportunities and challenges presented by the Biden Administration's EO 14028 on Improving the Nation's Cybersecurity, jointly hosted by the GeoTech Center and Virtru.

The post Event recap | Opportunities and challenges presented by EO 14028 appeared first on Atlantic Council.

]]>

Event description

The increasing threat of ransomware and software supply chain attacks has created the need for urgent responses, and EO 14028 is one of many important steps to protect US cybersecurity. The Atlantic Council’s GeoTech Center held a private roundtable discussion in cooperation with Virtru to discuss the import of the Executive Order. In their conversation, all discussants were optimistic about the current trajectory of US cybersecurity. They emphasized future opportunities to collaborate with the private sector, standardize protocols, and educate the public.

One speaker explained that EO 14028 will quickly and effectively be implemented because the EO has specific short- and long-term goals for multiple agencies.  By building upon existing cyberinfrastructure, the order avoids the delay of overhauling technology. Moreover, the implementation of EO 14028 provides many opportunities to build and enhance relationships between the private and public sector.

The long-term implementation of EO 14028 will require more money for relevant agencies, and careful attention to how its new protocols shift financial burdens is necessary.  The government however can do more to nudge people and companies towards safer cyber practices, and laws can be changed to make inadequate cybersecurity protocols more expensive. The need to improve software, hardware, and user awareness is pressing as ever but should primarily focus on software such as TCP/IP.  Threats can emerge in any place with internet access, and government and industry must provide information to keep people cyber conscious and resources to allow them to innovate in their respective industries.   

Featuring

John Ackerly
CEO and Co-Founder
Virtru

Matthew T. Cornelius
Executive Director
Alliance for Digital Innovation 

Joseph Klimavicz
Managing Director
KPMG US 

Essye Miller 
CEO, EBM Consulting 
Principal, Pallas Advisors 

Renee Wynn 
CEO, RP Wynn Consulting LLC
Cybersecurity and Leadership Consultant, The Charles F. Bolden Group 

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

In partnership with

The post Event recap | Opportunities and challenges presented by EO 14028 appeared first on Atlantic Council.

]]>
Event recap | Reimagining education in a rapidly changing era https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-reimagining-education/ Wed, 16 Jun 2021 17:21:42 +0000 https://www.atlanticcouncil.org/?p=405600 A GeoTech Hour discussion exploring how to link education to the jobs of today and tomorrow, to ensure what people learning gives them the necessary skills, abilities, and knowledge to succeed amid global change.

The post Event recap | Reimagining education in a rapidly changing era appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

Education is essential for ensuring individuals are prepared for both the jobs of today as well as the jobs of tomorrow. In the “GeoTech Decade,” where data and tech will have significant impacts on global geopolitics, competition, and collaborations, education is even more essential given exponential changes in digital systems, physical supply chains, health technologies, and commercial space solutions. It is essential to avoid being caught-up in the veneer of new technologies and losing focus on how people learn best.

On Wednesday, June 16, from 12:00 -1:00 p.m. EDT, as part of the weekly GeoTech Hour, the GeoTech Center hosted a discussion about teaching tech, data, and engineering in our exponential era ahead. Panelists discussed how to link education to the jobs of today and tomorrow to ensure people learn the necessary skills, abilities, and knowledge to succeed amid global change.

Featuring

Bevon Moore
Founder, CEO, and Lead Designer
CollabWorkx

AnnMarie P. Thomas, PhD
Professor, School of Engineering and Schulze School of Entrepreneurship
University of St. Thomas

Bo Stjerne Thomsen
Chair, Learning Through Play
LEGO Foundation

Stephanie Wander
Deputy Director and Senior Fellow, GeoTech Center
Atlantic Council

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Jun 9, 2021

Event recap | The human dimensions of autonomous systems employing AI

By the GeoTech Center

A GeoTech Hour discussion exploring what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives.

Digital Policy Technology & Innovation

The post Event recap | Reimagining education in a rapidly changing era appeared first on Atlantic Council.

]]>
EU-US tech cooperation: Strengthening transatlantic relations in data-driven economies https://www.atlanticcouncil.org/blogs/geotech-cues/eu-us-tech-cooperation/ Wed, 16 Jun 2021 15:15:13 +0000 https://www.atlanticcouncil.org/?p=404997 The window of opportunity for an agreement between the United States and European Union (EU) on a common tech rulebook is open once again. In December 2020, the European Commission, encouraged by the victory of ardent Atlanticist Joseph Biden in the US presidential election, issued A New EU-US Agenda for Global Change, an ambitious proposal for strengthening and broadening the transatlantic relationship in multiple key domains including technology governance.

The post EU-US tech cooperation: Strengthening transatlantic relations in data-driven economies appeared first on Atlantic Council.

]]>
The window of opportunity for an agreement between the United States and European Union (EU) on a common tech rulebook is open once again. In December 2020, the European Commission, encouraged by the victory of ardent Atlanticist Joseph Biden in the US presidential election, issued A New EU-US Agenda for Global Change, an ambitious proposal for strengthening and broadening the transatlantic relationship in multiple key domains including technology governance. In turn, the Biden Admiration has vowed to restore US alliances harmed by the previous administration and to deepen its cooperation with like-minded countries to counter the influence of authoritarian states over global technology rules. This alignment of interests on both sides of the Atlantic has created an unprecedented opportunity to overcome the differences that have prevented the two parties from adopting a common, normative framework on tech regulation, most notably on data governance, privacy protection, and digital taxation. Building on the momentum generated by the EU-US summit, leaders from both sides should take advantage of this new political climate to strengthen the transatlantic bond and adapt it to the needs of data-driven economies.

Why does it matter?

First, the global economy and international trade have become increasingly data driven. According to the report on the future of international trade launched by the  World Trade Organization in 2018, the growing digitalization of the global economy will impact international trade in three significant ways:  the importance of cross-border data flows as a component of trade in goods and services will grow significantly in the coming years.; trade in digitizable goods (e.g. DVDs or physical books) will decline while trade in digital services such as streaming services and e-books will grow; and  regulation of data flows and other technology legislation will become an important source of comparative advantage. Therefore, adopting an agreement on transatlantic data flows is indispensable to adapt the normative framework that governs the EU-US trade relations to the new data-based reality.

Second, innovation in the transformative technologies of the Fourth Industrial Revolution (e.g. artificial intelligence and cloud computing) requires a vast amount of data from various sources. As a consequence, countries and businesses that have access to large pools of data are more competitive than those that do not. Currently, China is often referred to as a country with access to almost infinite datasets while having data protection rules focused on national security rather than individual rights. This gives Chinese companies an enormous advantage over their European and American competitors in the development of AI and other technologies. Therefore, an agreement facilitating the exchange of data across the Atlantic via a secure and privacy-respecting framework may increase the competitiveness of both European and American companies in the global economy.

Third, authoritarian states such as Russia or China promote an illiberal, techno-nationalist vision of global governance based on harsh restrictions on cross-border data flows with little respect for fundamental human rights. Even more troubling is that these states export their vision of tech governance to developing countries by selling their technology and providing training programs on surveillance and other repressive techniques. They are also highly active at the multilateral level. China, for instance, promotes its approach to internet regulation as an alternative to the current internet architecture via various standardization fora and strategic documents such as China Standards 2035 or the new IP protocol proposed by China to the International Telecommunications Union (ITU). For this reason, by establishing a transatlantic framework on data governance that would ensure free flow of data while protecting human rights, the EU and United States would reiterate their commitment to free internet and set a global standard for other countries to follow.

Fourth, the COVID-19 pandemic has shown how crucial it is for governments to have  well-functioning, speedy, and secure access to data of different types and origin. By using data modeling and AI technologies, public authorities can predict with greater accuracy the evolution of different public emergencies as well as long-term threats and thus adopt better informed, more precisely targeted policies. This will be of particular importance to refine societal adaptation capacity and resilience to climate change in a wide array of fields, ranging from agriculture to urban planning to public health. Secure data sharing between the US and European publics as well as research authorities may help significantly in this endeavor. However, to tackle the most pressing global issues such as global pandemics or climate change, the United States and the European Union need a data sharing framework that extend beyond the transatlantic space. Therefore, it is crucial that the EU and United States find agreement on the creation of a safe, rights-based data exchange framework that would foster the connection between experts and research institutions from other global players such as China, India, or Brazil.

Where are we now?

Despite the clear benefits to be harnessed by both sides in establishing a framework for free exchange of data across the Atlantic, there are multiple contentious points that make the adoption of such agreement difficult. Assessing the current transatlantic digital landscape, there is tremendous asymmetry between the United States and the European Union in tech legislation. Although the two entities often call each other like-minded, there are significant limits to this like-mindedness, notably when it comes to data governance, privacy protection, and digital taxation. While the United States applies a laissez faire approach to tech governance that leaves a lot of space for self-regulation by private entities, the EU has a robust regulatory framework that imposes firm guardrails for tech companies. The best illustration of the EU’s strict approach to tech regulation is the General Data Protection Regulation (GDPR) that defines how private data of EU citizens may be collected and processed. The EU plans to expand its tech rulebook even more through recently unveiled drafts of  the Digital Markets Act, the Digital Services Act, and the Data Governance Act. However, despite its global leadership in tech regulation, the EU lags far behind the United States (and China) in industry innovation, with few major tech companies of its own.

On the other side of the Atlantic, as the the birthplace and home of the world’s five biggest tech companies, the United States is a global leader in tech innovation. However, for the moment, it has a relatively thin tech rulebook, lacking federal legislation on issues such as privacy protection or governance of online content.  This regulatory asymmetry creates a challenging constellation for any potential transatlantic regulatory framework on technology as it would have to reconcile two partners with seemingly irreconcilable approaches to tech regulation and vastly different levels of innovation. Some EU policymakers have already identified this regulatory asymmetry as a problem and call for more innovation and less regulation. For instance, French president Emmanuel Macron has said, “when you look at the map, we have what we call the GAFA [Google, Alphabet, Facebook, Apple] in the US, the BATX [Baidu, Alibaba, Tencent, Xiaomi] in China and GDPR in Europe.”

This regulatory asymmetry is most palpable in the field of privacy protection. The difference of views between the EU and the United States on the subject stems from different paradigms and historical experiences. In the United States, the prevailing approach to the use of private data by tech companies has been driven to a great extent by a utilitarian market perspective, according to which a certain loss of privacy by data collection has been acceptable so long as it results in greater consumer satisfaction. The approach of EU regulators is completely different as it is rooted in the European experience with two totalitarian regimes that relied on massive surveillance programs to keep their citizens in check. For instance, in the former German Democratic Republic (GDR), the State Security Service (Stasi) kept detailed records on one in three citizens acquired by a vast network of agents and collaborators. Similar experience was lived by people in other totalitarian states in Europe. Therefore, the European Commission is highly cautious when dealing with issues pertaining to privacy of EU citizens.

Nonetheless, we witness a change in the US privacy paradigm with powerful voices in the United States Senate calling for tighter restrictions on the use of personal data by private companies. In March a bipartisan group of legislators led by Senator Amy Klobuchar of Minnesota proposed an ambitious body of legislation called the Social Media Privacy Protection and Consumer Rights Act that, if adopted, would grant internet users more control over their data by providing them with opt-out options on data tracking and collection. Although the future of this draft is still uncertain, US citizens will certainly demand more action from their legislators and government executives on the matter as, according to a poll by the Pew Research Center on privacy, surveillance, and data-sharing, the majority of Americans are increasingly more concerned about the safety and security of their data.  

The European Union and the United States have already tried twice to find a framework that would allow a free flow of data between the two entities in full respect of European data protection rules. However, both attempts, Safe Harbor and Privacy Shield frameworks, were struck down by the European Court of Justice by its Schrems I and Schrems II, expressing serious doubts “as to whether US law in fact ensures the adequate level of protection [of personal data] required under Article 45 of the GDPR,” pointing to the risks of US surveillance programs to the rights of EU citizens recognized by the EU Charter of Fundamental Rights. To solve this impasse and develop a new framework for the exchange of data across the Atlantic, the European Commission has “intensified” the negotiations with the US Department of Commerce. However, these talks are still ongoing, so it is not clear yet whether they will produce a result acceptable to both parties.

Another contentious point in the mutual tech relations between the European Union and United States is digital taxation. In 2018, the European Commission issued a proposal for a 3 percent tax levied on digital business activities, arguing that under current rules it is impossible to tax the profits of influential tech companies generated in Europe as they are not physically present in the EU. The blueprint for a digital tax introduced by the Commission would replace the requirement of physical presence by a system taxing companies in the place where they “have significant interaction with users through digital channels.” The United States looked quite skeptically on this proposal. Then Trump Administration expressed a series of concerns that the tax would apply disproportionality to US companies operating in Europe and threatened retaliatory measures by increasing tariffs on European goods. The proposal has not been adopted yet as it is currently discussed by the European Parliament. However, there is a chance that the EU and the United States find a multilateral solution on digital taxation through a negotiated outcome of the discussions currently ongoing under the auspices of the Organization for Economic Cooperation and Development (OECD) and G20 within the framework of the Inclusive Framework on Base Erosion and Profit Shifting (BEPS).

Despite these differences, there are areas where the two parties are relatively in line or in the process of alignment. First, the US Congress is currently considering several pieces of legislation, the Platform Accountability and Consumer Transparency Act (PACT Act), Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act of 2020), and Online Consumer Protection Act (OCPA), that, if adopted, would alter the immunity granted to online content intermediaries by Section 230 of the Communication Decency Act (CDA) with a series of transparency and reporting obligations as well as enhanced protection of internet users as online service consumers. The same approach to the regulation of content intermediaries has been adopted by the European Commission in the aforementioned Digital Services Act, which imposes a set of transparency, reporting, and due diligence obligations on large content intermediaries but maintains their immunity under the e-Commerce Directive of 2000, a European version of Section 230.

Another area where the two entities may find some common ground is net neutrality. While the EU has already adopted a regulation guaranteeing its citizens “the right to access and distribute information and content, use and provide applications and services […] via their internet access service,” net neutrality has been the subject of fierce debate in the US, partly due to a hostile attitude of the previous administration on the matter. Nonetheless, the Biden administration seems dedicated to resubscribe the United States to this principle. Although such policy change has not been announced yet, Biden’s appointment of Jessica Rosenworcel, a vocal supporter of net neutrality, as the Chairwoman of the Federal Communications Commission (FCC) and the decision of the Department of Justice to drop the lawsuit filed by the Trump administration against California’s net neutrality legislation are signs that a reinstatement of the Obama-era rules on open internet repealed in 2018 by the Restoring Internet Freedom Order may be forthcoming.

Finally, the United States and the European Union seem aligned on the need for greater governmental involvement in financing and creating incentives for tech innovation. As a key component of its efforts to mitigate the economic consequences of COVID-19, the EU has approved the creation of the Next Generation EU fund worth of 750 billion euros that will be distributed in the form of loans and grants to EU member states. More that 50 percent of this unprecedented funding will be allocated to stimulate tech innovation and foster the digital and green transitions of European economies. In the United States, in an unusual bipartisan fashion and primarily with the objective to counter Chinese influence, the US Senate has approved the US Innovation and Competition Act (USICA), pouring more than $200 billion to support research and development in strategic tech sectors such as semiconductor industry, artificial intelligence, and wireless broadband. Although the underlying motivations behind the adoption of USICA and Next Generation EU differ, the two initiatives attest to the fact that governments both in the United States and the EU are now more inclined to support home-grown tech innovation by direct public financing programs. 

What should be done?

Although there are serious differences between the EU and the United States on how the tech sector should be governed, these disagreements are not insurmountable. First and foremost, any successful negotiation on a transatlantic tech governance framework should start with issues where the policies and interests of the two parties (at least partially) overlap, such as with net neutrality or a more active role of the public sector in financing and incentivizing tech innovation. Having an agreement or declaration on cooperation in enhancing and promoting these policies on the domestic and global levels is a good starting point for talks on more delicate and contentious issues.

Since the issue of digital taxation is being dealt with at a multilateral level with reasonable chances of arriving at a negotiated outcome soon, the priority for EU and US negotiators should be finding an agreement on free data flows and privacy protection. However, such an agreement cannot simply be a refurbished version of Safe Harbor or Privacy Shield that would be struck down by another decision from the European Court of Justice. Instead, the solution must be durable to provide US and European companies with legal certainty in their business activities on either side of the Atlantic.

The first step towards such a sustainable solution must be a realization that data is a precious but reusable resource generated by human activity. This realization has two crucial implications for data governance: first, data can be shared across borders and sectors; second, any data regulation should be human-centered. However, these two conclusions are difficult to reconcile with each other as it is a challenging task to ensure the free exchange of data across multiple jurisdictions while guaranteeing the same level of privacy protection  among them.

Considering the judgments of the European Court of Justice (ECJ) in the Schrems I and Schrems II cases, the prospects for full and unhindered flow of personal data from the EU to the United States does not seem realistically attainable given a negative attitude from the ECJ towards US surveillance policies. Therefore, EU and US negotiators should be extremely cautious and creative in crafting a regulatory framework for personal data transfers. However, there are data processing solutions that may present a reasonable compromise such as encryption or anonymization of personal data.  

Nevertheless, even without an agreement on personal data transfers, there are plenty of opportunities to be harnessed by establishing a legal framework on the exchange of non-personal data across the Atlantic. On this note, it is important to keep in mind that industrial data and non-personal public sector data also have enormous potential to empower innovation and progress.

Some of the mechanisms introduced in the draft of the Data Governance Act (DGA) recently unveiled by the European Commission may serve as inspiration for a future transatlantic regulatory framework on non-personal data transfers. The underlying philosophy behind DGA is a vision of data as an indispensable resource for increasing and sustaining the competitiveness and innovative potential of European companies. First, the regulation lays out the conditions for reuse of data produced by the public sector that is protected by law (e.g. personal data protection, commercial confidentiality, and intellectual property protection). Most importantly, DGA allows public authorities to share their data with other entities only if they are pre-processed (e.g. anonymized) and via a non-discriminatory and publicly accessible mechanism. Second, the regulation institutionalizes the functioning of data sharing platforms that will serve as intermediaries between those who produce data (mainly private companies) and those who seek data for their business, research, or other activities. The providers of these services are required to commit to a number of safeguards and protections with respect to sensitive and commercial data. Third, the DGA establishes a voluntary framework for data altruists—nonprofit organizations that collect data made available voluntarily by data owners for the common good.

Since the above-described data sharing mechanisms introduced by DGA deal almost exclusively with non-personal data (data not protected by GDPR), some of them, with minor modifications, could be implemented also in a new regulatory framework on data transfers between the EU and the United States. Institutionalizing and regulating the exchange of non-personal data across the Atlantic by some of the aforementioned legal mechanisms would not only give public entities, private companies, and  nonprofit organizations from both sides a secure channel for a data exchange but also would dramatically increase the pool of they have access to. This would be a positive regulatory incentive for innovation that is increasingly more dependent on the quality and quantity of data that researchers and businesses can access and process.  

Finally, it is indispensable that the United States respond positively to the call by the European Commission for the establishment of the EU-US Trade and Technology Council (TTC)—a forum for government executives to discuss trade facilitation, the development of compatible standards, and innovation promotion. TTC would provide both parties with a permanent channel for dialogue and exchange of views even in situations of political tensions that could hinder the progress on tech related issues.

Conclusion

Although there are multiple contentious points between the European Union and the United States regarding tech regulation, the political climate on both sides of the Atlantic seems favorable for finding agreement on these issues. The benefits that such an agreement may generate for US and European businesses, research institutions, and civil society in terms of secure and facilitated data sharing are too important to ignore. This is especially true during what is often coined the Fourth Industrial Revolution, with data as its main driving force. The good news is that data is reusable, unlike oil or coal, which were behind previous industrial revolutions. The same data that empowers innovation on one side of the Atlantic can generate another kind of innovation on the other side. Doing so, however, requires institutionalized, secure, human-centered channels that would allow both stakeholders in both the European Union and the United States to harness the full potential of data in the modern digitalized economy.

Juraj Majcin (@JMajcin) is a PhD Candidate in International Law at the Graduate Institute of International and Development studies in Geneva, Switzerland and a GeoTech Action Council expert.

The post EU-US tech cooperation: Strengthening transatlantic relations in data-driven economies appeared first on Atlantic Council.

]]>
GeoTech recommendations for President Biden’s meetings with allies overseas https://www.atlanticcouncil.org/blogs/geotech-cues/president-bidens-first-foreign-affairs-meetings-overseas/ Fri, 11 Jun 2021 15:54:19 +0000 https://www.atlanticcouncil.org/?p=401289 US President Joseph Biden is set to embark on his first official overseas foreign affairs trip from June 10 to 16. In light of mounting geopolitical issues around technology, the GeoTech Center provides actionable recommendations for the President to reach global solutions.

The post GeoTech recommendations for President Biden’s meetings with allies overseas appeared first on Atlantic Council.

]]>
US President Joseph Biden is poised for his first major overseas foreign affairs trip from June 10 to June 16. President Biden’s itinerary includes a G7 summit in the United Kingdom, a NATO summit and a US-EU summit in Belgium, and his first face-to-face meeting with Russian President Vladimir Putin in Switzerland. These meetings will set important precedents for the new administration’s international relations, with particular focus on significant critical technologies.

The GeoTech Center’s recent publication, the Report of the Commission on the Geopolitical Impacts of New Technologies and Data, posits that the world is entering the “GeoTech decade”—a new era characterized by interwoven geopolitics and technology, as well as increasingly “sophisticated but potentially fragile systems.” This fragility is evident in the context of a delicate global healthcare system and vulnerable supply chains, compromised by an unprecedented number of cyberattacks, trade restrictions, and disruption amidst the pandemic. Additionally, a report from NPR states that the FBI has attributed major attacks, such as the SolarWinds hack and Colonial Pipeline incident, to Russian-affiliated groups, which include REvil and Nobelium; some speculate that these groups are connected to the Russian government, raising the stakes for President Biden’s meeting with President Putin.

The potency of these attacks is exacerbated by the shift to remote working environments. In a recent study, Statista demonstrated that 44 percent of Americans worked remotely during the pandemic and argued that “different remote work models will persist post-COVID-19.” However, the American labor market has not yet recovered from the pandemic-induced crash, and onlookers remain fearful of a growing labor shortage. Additionally, increasing reliance on IT infrastructure extends far beyond the workplace. In cryptocurrencies, volatility persists even as they are legitimized by entities like El Salvador, which recently announced BitCoin as legal tender.

In a period of uncertain leaps in technological development, the Geotech Commission Report underscores the need for President Biden to “maintain science and technology (S&T) leadership, ensure the trustworthiness and resiliency of physical and software/informational technology supply chains and infrastructures, and improve global health protection and wellness.” President Biden must collaborate with international allies to “remain preeminent in key technology areas” and “take measures to ensure the trustworthiness and sustainability of the digital economy, the analog economy, and their infrastructures.” Most crucial are the report’s six key geopolitical recommendations that President Biden should consider in his trip abroad, which are instrumental for shaping policy for the GeoTech decade ahead. Below is an abridged summary.

Seven findings and actionable recommendations:

Global science and technology leadership

The United States, with like-minded nations and partners, must collectively maintain continued leadership in key S&T areas to ensure national and economic security, and that technology is developed and deployed with democratic values and standards in mind. The United States must pursue, as strategic goals, establishing priorities, investments, standards, and rules for technology dissemination, developed across government, private industry, and academia, all in collaboration with allies and partners. Collaboration among like-minded nations and partners is essential to attaining global S&T leadership.

Secure data and communications

Sophisticated attacks on software/information technology (IT) supply chains have led to significant breaches in the security of government and private networks, requiring an improved cybersecurity strategy. Such a strategy should center on updating and renewing the National Cyber Strategy Implementation Plan with a focus on streamlining how public and private sector entities monitor their digital environments and exchange threat information. Beyond these current challenges, advances in quantum information science (QIS) will lay the foundation for future approaches to securing data and communications, including new ways to monitor the trustworthiness of digital and physical supply chains. With allies and partners, the United States should develop priority global initiatives that employ and account for transformative QIS.

Enhanced trust and confidence in the digital economy

Diminished trust and confidence in the global digital economy could constrain growth; destabilize society, governments, and markets; and reduce resilience against the cascading effects of local, regional, or national economic, security, or health instabilities. Trust and confidence are diminished by practices that do not protect privacy or secure data and lack legal and organizational governance to advance and enforce accountability. As such, organizing and amplifying both automation and artificial intelligence while minimizing their weakness or vulnerabilities in open societies is essential for digital economies. The United States should develop international standards and best practices for a trusted digital economy and should promote adherence to these standards.

Assured supply chains and system resiliency

Because of their increasing complexity and design, both physical and digital supply chain vulnerabilities can have compounding negative effects on the global economy and national security. Protecting against these diverse risks requires understanding which goods and sectors of the economy are critical and how supply chains that are inherently more adaptable, resilient, and automated can be constructed. Doing so requires assessing the state and characteristics of supplies, trade networks and policies, inventory reserves, and substitutes for products or facilities. The United States should conduct regular assessments of itself and allied countries to determine critical supply chain resilience and trust, implement risk-based assurance measures, establish coordinated cybersecurity acquisition across government networks, and create more experts. A critical resource is semiconductor chip manufacturing of foreign suppliers and the long lead time and cost of new production facilities requires the United States to invest in an assured supply of semiconductor chips.

Continuous global health protection and global wellness

Inherent to the disruption caused by the COVID-19 pandemic are three systemic problems: (i) global leaders acted slowly to contain the spread of the virus, (ii) global health organizations reacted slowly to identify and contain the spread of the virus, and (iii) a mixture of factors delayed national responses, including late threat recognition, slow incorporation of science and data into decision making, low political will, and inconsistent messaging regarding the nature of the threat and what precautions to take. Though nations may adopt their own strategies to enhance resilience and future planning, a more global approach to this interconnected system is essential. The United States and its allies should lead the effort to field and test new approaches that enable the world to accelerate the detection of biothreat agents, universalize treatment methods, and deploy mass remediation, through multiple global means. Such a system is needed not only for recovering from the COVID-19 pandemic and preventing future outbreaks, but also for responding to human-developed pathogens.

Assured space operations for public benefit

To maintain trusted, secure, and technically superior space operations, the United States must ensure it is a leading provider of needed space services and innovation in launch, on-board servicing, remote sensing, communications, and ground infrastructure. A robust commercial space industry enhances the resilience of US national security by increasing space industrial base capacity, workforce, and responsiveness. It also advances a dynamic innovative environment that can bolster US competitiveness across existing industries while facilitating the development of new ones. The United States should foster the development of commercial space technologies that can enhance national security space operations and improve agriculture, ocean exploration, and climate change activities, as well as align civilian and military operations and international treaties to support these uses.

The future of work

People will power the GeoTech Decade, even as technology and data capabilities transform how they live, work, and operate in societies around the world. Successful societies must find ways to augment human strengths with approaches to technology and data that are uplifting, while also minimizing the impact of biases and other shortcomings of both humans and machines. Developing a digitally resilient workforce that can meet these challenges will require private and public sectors to take an all-of-the-above approach, embracing everything from traditional educational pathways to less traditional avenues, such as employer-led apprenticeships and mid-career upskilling. Ensuring that people are not left behind by the advance of technology—and that societies have the workforces they need to innovate and prosper—will determine whether the GeoTech Decade achieves its full promise of improving security and peace.

Matthew Gavieta is a Young Global Professional with the GeoTech Center as well as a rising senior at Cornell University, where he majors in industrial and labor relations and minors in philosophy and law & society. He is most interested in the intersection of law, policy, and technology. He hopes to do work in the field of intellectual property to promote safe, large-scale innovation and creativity.

The post GeoTech recommendations for President Biden’s meetings with allies overseas appeared first on Atlantic Council.

]]>
Can AgriTech entrepreneurs save the Middle East’s food supply? https://www.atlanticcouncil.org/blogs/menasource/can-agritech-entrepreneurs-save-the-middle-easts-food-supply/ Fri, 11 Jun 2021 09:15:00 +0000 https://www.atlanticcouncil.org/?p=402164 On June 9, the Atlantic Council’s GeoTech Center and empowerME Initiative hosted a private, on-the-record roundtable. Read the key takeaways.

The post Can AgriTech entrepreneurs save the Middle East’s food supply? appeared first on Atlantic Council.

]]>
On June 9, the Atlantic Council’s GeoTech Center and empowerME Initiative hosted a private, on-the-record roundtable featuring Vita F&B Capital Managing Director Kamel Abdullah, Pure Harvest Smart Farms CEO & Co-Founder Sky Kurtz, and GeoTech Center Nonresident Senior Fellow and Founder of Bold Text Strategies Daniella Taveau, moderated by empowerME Director Amjad Ahmad and GeoTech Center Deputy Director and Senior Fellow Stephanie Wander.

Below is a summary of the discussants’ key points.

AgriTech Innovation

  • Wander initiated a conversation tying emerging data capabilities and technologies directly to agriculture and food production: “Data capabilities and new technologies will heavily impact geopolitics, global competition, and global opportunities. The GeoTech Center recently released a new report, which offers practical and implementable recommendations that will enable the world to peacefully employ data capabilities and new technologies for beneficial purposes, including transforming agriculture and food security.”
  • Ahmad presented the current state of Agritech in the Middle East and North Africa (MENA): “Given the scarce arable land and water supply, the region’s food security is vulnerable with many highly dependent on imported agricultural products. Add climate changes and rapid urbanization to the mix, and the region represents a ripe environment for innovation.”
  • Kurtz emphasized that the Middle East has the opportunity to create food cheaply because it is abundant in the essential resources at the core of food production: sun, CO2, nutrients, water, land, energy, taxation, and capital. In his view, “If you can solve for issues like climate, source capital, and collaborate across all sectors including government, there is an opportunity for MENA to be the cheapest food producer in the world.”
  • Abdullah highlighted three areas where we must improve the efficiency of farming: seed technology, to consume less water and sustain plants in higher temperatures; watering technology, to plant with saltwater and reduce evaporation losses; and digital technology, to spotlight the best land areas for planting.
  • Abdullah underscored the cross applicability of existing research and development institutions. Governments can incentivize R&D from farmers through payments and, if we connect well-funded local universities with farmers, we can integrate new technologies into the existing marketplace.
  • According to Abdullah, “The waste of food in this region is still among the highest in the world. Governments have recognized that they need to mandate a shift in people’s minds about healthy food consumption as important and about food waste patterns. Culturally, our social occasions revolve around abundant food displays which end up being thrown, especially during the holy month of Ramadan. We can use new technologies to convert disposed food into animal feed and partially address this issue.”
  • Food loses half of its shelf life before it reaches the supermarket, and supermarkets will not risk selling a product in the last few days, so they throw the food away. Taveau pointed out that inefficiencies at ports are the reason for much of the lost shelf life. Overhauling the ports and streamlining efficiency will be important to preserving the longevity of the food supply.
  • To maximize the utility of data, we have to harness existing technology. We don’t need one specific piece of technology designed to capture data from farmers when we have phones that each farmer already possesses. Agromovil CEO and GeoTech Center Non-resident Senior Fellow Andrew Mack emphasized that distributing more technology or resources is not the key to capturing more data—we must maximize the existing technological infrastructure of small-scale farmers.

Investing in Domestic and Local Farming

  • Abdullah noted that governments lack major budget services, especially post-pandemic, and cannot maintain exorbitant expenditures on importing food. Governments must create a sustainable agriculture system by supporting local farmers.
  • Kurtz added that the private sector is also trending towards local farming. He believes market forces will move us towards a world with more locally produced food.
  • Abdullah highlighted that we still have a lot to do to ensure local food production. Supermarkets make the most money because they are closest to the end customer, while the farmer takes on the greatest risk.
  • Mack emphasized the importance of protecting local farmers in the marketplace. He pointed out that farmers are going out of business. The average age of a farmer in multiple countries is 58, and they are telling their kids to get a degree and look for a job in another field. There are 1.5 billion people in the farming sector around the world and 570 million of them are small farmers. Small farmers are too important politically and economically to ignore.
  • Ahmad added that consumer behavior will drive the capability of agricultural technologies to be relevant, profitable, and investable. He questioned consumer trends and preferences, particularly for locally-sourced fresh products and the willingness of consumers to pay a premium for these products.
  • Kurtz responded by pointing out that there has to be someone to sell and buy this stuff from the commercial side. Right now, this trend is highly relevant on socioeconomic disparity. For the middle and upper class, the calories of a product and the knowledge that their food is locally grown is becoming more important.

International and Regional Cooperation

  • Taveau argued that we must accept that the MENA region will continue to receive a large amount of their food supply from neighboring countries like India, which is a close ally of many MENA countries and possesses sizeable amounts of arable land. Therefore, food security for the Middle East requires investments in other nations in addition to investing in the region itself.
  • According to Taveau, “it is always important, as we are developing greater independence in supply chains, that we ensure independence in a way that is globally-minded. We cannot survive without one another and should resign ourselves to the fact that we will get the greatest food security when we work with our partners.”
  • Kurtz emphasized that regional players should continue coordinating their efforts within and beyond the GCC to build a sustainable food market. Adopting technology and improving food production drives profitability, making this shared goal more appealing.
  • When describing the importance of cooperation versus competition, Abu Dhabi Investment Office Director General H.E. Dr. Tariq Bin Hendi stressed how critical building partnerships is and how competition between nations is healthy, as it allows for more creative solutions to pressing issues in the AgriTech field.

Recommendations

  1. Taveau strongly believes that we need long-term economic viability in order to combat food insecurities in the MENA region. Sustained attention is required to progress in tackling this persistent issue. Taveau cautioned that people can be reluctant to accept and adopt new technology. The Agritech industry needs a good, segmented marketing strategy to combat social issues that arise on the path to innovation.
  2. Abdullah concluded that we must have a method of consumption changes equal to that of production changes.
  3. From a capital perspective, an individual’s ability to access capital, especially risk capital, is key to developing solutions, commercializing them, and creating new companies that can create jobs and economic prosperity in these countries, said Kurtz.
  4. Futurity CEO Jack Bobo defined efficiency as “reducing the friction in the system.” This can happen in terms of regulation and technology, as well as across the entire food chain. During the COVID-19 pandemic, efficiency means short-term arrivals when inventory is dwindling. He recommended a balance between slack and reducing friction in the system.
  5. According to Zuaiter Capital Holdings Managing Member Abbas Zuaiter, long-term investments, such as a thirty-year green bond, could result in investors being more likely to commit and remain in the region.

Hezha Barzani is an intern with Middle East Programs. Matthew Goodman is a consultant with the GeoTech Center. Follow him @matt_goodman22.

The post Can AgriTech entrepreneurs save the Middle East’s food supply? appeared first on Atlantic Council.

]]>
Event recap | The human dimensions of autonomous systems employing AI https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-autonomous-systems-employing-ai/ Thu, 10 Jun 2021 02:43:00 +0000 https://www.atlanticcouncil.org/?p=401480 A GeoTech Hour discussion exploring what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives.

The post Event recap | The human dimensions of autonomous systems employing AI appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

Linking autonomy with artificial intelligence represents one of the most challenging topics for both civil and military affairs. Autonomous systems can operate independently of human activities, either through coded rules or through some combination of machine learning. However, not all autonomous systems include artificial intelligence. Those that include AI raise the thorniest of questions when it comes to operating in situations that could directly and physically impact human lives – such as driverless cars or use in weapon systems.

Recently there have been concerns that an autonomous drone may have hunted down a human in asymmetric warfare and anti-terrorist operations. At the same time autonomous drones can help find humans trapped in rubble after an earthquake or other natural disaster. Like all technologies, these tools can help improve our lives (fire warms homes, cooks food) or harm them. The primary question that we must now grapple with is how to renew the commitment of societies to ensuring that the human dimensions of autonomous systems employing AI uplift lives and provide “net positive good”.

Join us for what promises to be a robust and lively GeoTech Hour discussion where we consider what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives. Does the world need an international limitation agreement with regards to AI-enabled autonomous systems that can exercise lethal force? Is such an agreement realistic and enforceable? Would AI-enabled autonomous systems that directly and physically defend humans be acceptable?  

All these are challenging questions that we must consider as we look ahead towards the GeoTech Decade where advances in data and new technologies will have disproportionate impacts on geopolitics, competition, and global collaborations.  

Featuring

Joseph T. Bonivel Jr.,PhD
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Lord Tim Clement-Jones
Nonresident Fellow, GeoTech Center
Atlantic Council

Sally Grant
Vice President
Lucd AI

Dana W. Hudson
President and CEO
c6 Strategies, LLC

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Jun 9, 2021

Event recap | The human dimensions of autonomous systems employing AI

By the GeoTech Center

A GeoTech Hour discussion exploring what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives.

Digital Policy Technology & Innovation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | The human dimensions of autonomous systems employing AI appeared first on Atlantic Council.

]]>
Event recap | Assured trust in medicine, credentials, and supply chains https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-assured-trust-in-medicine/ Thu, 03 Jun 2021 01:05:00 +0000 https://www.atlanticcouncil.org/?p=397547 An expert panel exploring new ways of increasing resilience by assuring trust in medicine, credentials, and supply chains.

The post Event recap | Assured trust in medicine, credentials, and supply chains appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

The COVID-19 pandemic has disrupted medical supply chains, how hospitals operate, and shown how medicine may need to be provided better at distance. Such challenges represent both systems problems and commerce problems that require improvements in trust and the resilience of the medical system, specifically an improvement in the Economy of Trust. Medical resilience needs to be increased in response to a growing and accelerating deficit of trust in the world, to include deficit of trust in hospital systems and medical supply chains.

This resilience can be strengthened via Economy of Trust efforts that leverage the power of transparency and technology to build trust in every physical or digital transaction, interaction and product in daily life. With the COVID-19 pandemic, it is essential that we increase resilience by assuring trust in medical supply chains for medicine and crucial PPE equipment. This includes eliminating the risks from fakes. We must increase resilience by assuring trust in people credentials, be they staff, patients, or visitors – who can COVID-19 or other medical status privately and securely. We also must increase resilience for the entire system by assuring trust in ‘medicine at distance’ – which represents an important growing area both in the pandemic context and the future of healthcare globally.

For the medicine, credentials, and supply chains, Economy of Trust efforts add by giving stakeholders confidence as they conduct their business and relationships across the physical and digital worlds. Actors in the Economy of Trust value technologies, policies and ethical practices that facilitate flows of products, services and people, by providing auditable proof all along their physical and digital journey. ​With Economy of Trust efforts, no party reveals more data than is necessary and all are able to provide proofs independently or via reliable and affordable third-party systems.

Featuring

Dr. Divya Chander
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Yves Daccord
Former Director General
International Committee of the Red Cross

Philippe Gillet
Chief Scientific Officer
SICPA

Toomas Hendrik Ilves
Former President
The Republic of Estonia

Idris Guessous
Head of the Division of Primary Care Medicine
University Hospitals of Geneva

Peter Rashish
Senior Fellow and Director of the Geoeconomics Program
American Institute for Contemporary German Studies

Danielle Tavino
VP & Co-Founder, Code-X
President & CEO. Young People in Recovery

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Jun 9, 2021

Event recap | The human dimensions of autonomous systems employing AI

By the GeoTech Center

A GeoTech Hour discussion exploring what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives.

Digital Policy Technology & Innovation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Assured trust in medicine, credentials, and supply chains appeared first on Atlantic Council.

]]>
Event recap | Commission Report Launch, Part II https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-commission-report-launch-part-ii/ Wed, 26 May 2021 12:43:00 +0000 https://www.atlanticcouncil.org/?p=397531 The official report launch event for the Commission on the Geopolitical Impacts of New Technology and Data.

The post Event recap | Commission Report Launch, Part II appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here. + View Part I of the Launch here.

Event description

On Wednesday, May 26, from 5:00 – 6:00 p.m. EDT, leaders from industry and government gathered to recognize the official release of the Report of the Commission of Geopolitical Impacts of New Technologies and Data (GeoTech Commission). This report provides an extensive set of recommendations for the United States and its like minded allies to thrive decade defined by data and technology collaboration and competition.

The report is premised on the arrival of the “GeoTech Decade,” in which new technologies and data capabilities will have an outsized impact on geopolitics, economics, and global governance. The speed, scale, and sophistication of new technologies and data capabilities that aid or disrupt our interconnected world are unprecedented. Emerging technologies promise to make our increasingly fragile global society more resilient. However, so far, no nation or international organization has been able to create the appropriate governance structures needed to grapple with the complex and destabilizing dynamics of emerging technologies.  Maintaining economic and national security, resilience, and democratic ideals requires new approaches for developing and deploying critical technologies, cultivating human capital, rebuilding trust in domestic and global governance, and establishing norms for international cooperation. 

The GeoTech Commission was established by the Atlantic Council in response to these challenges and to develop key recommendations and practical steps forward for Congress, the White House, private industry, academia, and like-minded nations. Specifically, the Commission examined how the United States, along with other like-minded nations and partners, can maintain its leadership in science and technology; ensure the trustworthiness and resilience of physical and IT supply chains, infrastructures, and the digital economy at large; improve global health protection and wellness; assure commercial space operations for public benefit; and create a digitally fluent and resilient workforce. As the GeoTech Decade unfolds, ensuring the leadership of the United States and allied partners in these areas will be imperative to ensure peace, security, and resilience across all domains. 

The report was made available for public access on May 26, 2021. Areas of significant importance include: 

  • Global scientific and technology leadership
  • Secure data and communications
  • Enhanced trust and confidence in the digital economy
  • Assured supply chains
  • Continuous global health protection
  • Assured space operations for public benefit
  • The future of work

This is Part 2 of a two-part series discussing the main findings of the GeoTech Commission and the next steps that can be taken, both in the private and public sector, to translate the recommendations into action. Part 2, taking place from 5:00 to 6:00 p.m. ET, featured additional members of the GeoTech Commission focused on promoting science and technology leadership in the coming decade, and will include Keynote Remarks from the GeoTech Commission Congressional Co-Chairs.

Featuring

Hon. Suzan DelBene

United States Representative (D-WA 1st District)

Hon. Michael T. McCaul

United States Representative (R-TX 10th District)

Fireside interview

David Treat

Senior Managing Director, Blockchain and Multiparty Systems
Accenture

Max Peterson

Vice President, Worldwide Public Sector
Amazon Web Services

Panel with GeoTech Commissioners

Teresa Carlson

President and Chief Growth Officer, Splunk
Co-Chair, GeoTech Commission

Vint Cerf

Vice President and Chief Evangelist
Google

Ramayya Krishnan, PhD

Dean, Heinz College of Information Systems and Public Policy
Carnegie Mellon University

Hosted by

David Bray, PhD

Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Jun 9, 2021

Event recap | The human dimensions of autonomous systems employing AI

By the GeoTech Center

A GeoTech Hour discussion exploring what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives.

Digital Policy Technology & Innovation

The post Event recap | Commission Report Launch, Part II appeared first on Atlantic Council.

]]>
Event recap | Commission Report Launch, Part I https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-commission-report-launch-part-i/ Wed, 26 May 2021 12:34:00 +0000 https://www.atlanticcouncil.org/?p=397506 The official report launch event for the Commission on the Geopolitical Impacts of New Technology and Data.

The post Event recap | Commission Report Launch, Part I appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here. + View Part II of the Launch here.

Event description

On Wednesday, May 26, from 12:00 – 1:00 p.m. EDT, leaders from industry and government gathered to recognize the official release of the Report of the Commission of Geopolitical Impacts of New Technologies and Data (the GeoTech Commission). This report provides an extensive set of recommendations for the United States and its like minded allies to thrive in a decade defined by data and technology collaboration and competition.

The report is premised on the arrival of the “GeoTech Decade,” in which new technologies and data capabilities will have an outsized impact on geopolitics, economics, and global governance. The speed, scale, and sophistication of new technologies and data capabilities that aid or disrupt our interconnected world are unprecedented. Emerging technologies promise to make our increasingly fragile global society more resilient. However, so far, no nation or international organization has been able to create the appropriate governance structures needed to grapple with the complex and destabilizing dynamics of emerging technologies.  Maintaining economic and national security, resilience, and democratic ideals requires new approaches for developing and deploying critical technologies, cultivating human capital, rebuilding trust in domestic and global governance, and establishing norms for international cooperation. 

The GeoTech Commission was established by the Atlantic Council in response to these challenges and to develop key recommendations and practical steps forward for Congress, the White House, private industry, academia, and like-minded nations. Specifically, the Commission examined how the United States, along with other like-minded nations and partners, can maintain its leadership in science and technology; ensure the trustworthiness and resilience of physical and IT supply chains, infrastructures, and the digital economy at large; improve global health protection and wellness; assure commercial space operations for public benefit; and create a digitally fluent and resilient workforce. As the GeoTech Decade unfolds, ensuring the leadership of the United States and allied partners in these areas will be imperative to ensure peace, security, and resilience across all domains. 

The report was made available for public access on May 26, 2021. Areas of significant importance include: 

  • Global scientific and technology leadership
  • Secure data and communications
  • Enhanced trust and confidence in the digital economy
  • Assured supply chains
  • Continuous global health protection
  • Assured space operations for public benefit
  • The future of work

This is part 1 of a two-part series discussing the main findings of the GeoTech Commission and the next steps that can be taken, both in the private and public sector, to translate the recommendations into action. Part 2, taking place from 5:00 to 6:00 p.m. EDT, featured additional members of the GeoTech Commission promoting science and technology leadership in the coming decade, and included keynote remarks from the GeoTech Commission Congressional Co-Chairs.

Featuring

Teresa Carlson

President and Chief Growth Officer, Splunk
Co-Chair, GeoTech Commission

John Goodman

Chief Executive Officer, Accenture Federal Services
Co-Chair, GeoTech Commission

Michael Chertoff

Former United States Secretary of Homeland Security
Co-Founder and Executive Chairman, Chertoff Group

Shirley Ann Jackson, PhD

President
Rensselaer Polytechnic Institute

Zia Khan, PhD

Senior Vice President, Innovation
Rockefeller Foundation

Hosted by

David Bray, PhD

Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Jun 9, 2021

Event recap | The human dimensions of autonomous systems employing AI

By the GeoTech Center

A GeoTech Hour discussion exploring what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives.

Digital Policy Technology & Innovation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Commission Report Launch, Part I appeared first on Atlantic Council.

]]>
SCOTCH: A framework for rapidly assessing influence operations https://www.atlanticcouncil.org/blogs/geotech-cues/scotch-a-framework-for-rapidly-assessing-influence-operations/ Mon, 24 May 2021 15:41:35 +0000 https://www.atlanticcouncil.org/?p=391996 The increased involvement of digital technology and media in war requires innovative frameworks for understanding information warfare and influence operations. In this piece, GeoTech Action Council member Sam Blazek outlines SCOTCH, a new framework for characterizing influence operations.

The post SCOTCH: A framework for rapidly assessing influence operations appeared first on Atlantic Council.

]]>
Most of humanity now engages with digital and social media, in large part through smartphones. This new reality has cross-sectoral impacts and has changed the nature of conflict. For instance, in LikeWar: The Weaponization of Social Media, Peter Singer and Emerson Brooking note how the information landscape altered the dynamics of the recent war in Syria:

How information was being accessed, manipulated, and spread had taken on new power. Who was involved in the fight, where they were located, and even how they achieved victory had been twisted and transformed. Indeed, if what was online could swing the course of a battle—or eliminate the need for battle entirely—what exactly, could be considered war at all?

The increased involvement of digital technology and media in war requires innovative frameworks for understanding information warfare and influence operations. Based on experience assessing hundreds of influence operations across six continents over the past seven years, this paper offers a new framework for professionals engaged in analyzing, understanding, and countering them.

Characterizing frameworks for influence operations

Geopolitical influence operations may be defined as those that i) are either coordinated or supported by a state actor and ii) seek to influence an audience in the interests of said actor. Such activities have been used for millennia to gain tactical or strategic advantage in combat and competition; however, the global proliferation of information technology has dramatically enhanced their scale, speed, and reach. Individuals charged with recognizing and forestalling such threats utilize many projects and platforms aimed at detecting, quantifying, forecasting, and countering influence operations. However, these efforts all rely on some characterization of what these operations are and how they work. Unfortunately, the rapid speed of change of the battle space has caused practitioners and researchers alike to struggle with defining threats and attacks.

In spite of the many existing tools, datasets, case studies, and processes that these teams have either acquired or built, there is little consensus on how practitioners and decision makers describe and address influence operations. As a result, they talk in circles with varying amounts of shared context or situational awareness and struggle to quickly adapt and respond as a community to social media innovations. For example, as the Atlantic Council’s DFRLab recently noted, policy makers lack a cohesive strategy to combat the malicious use of real-time audio and video broadcasting.

In addition, many neglect the facts that the battlespace is highly complex and that its features are both evolving and interdependent. Society witnesses technological evolution in real-time in discussions with families and peers because the platforms in question are ubiquitous, but it is difficult to define threats, activities, and objectives in a shared operational and analytical language—as a result, researchers and policy makers struggle to validate and communicate their observations. For instance, understanding how both coordination by bad-faith actors and organic irony poisoning can morph ironic misinformation into genuine disinformation across communities is intuitive. Nonetheless, characterizing and developing practical countermeasures for these mechanisms is a remarkable challenge.

Existing taxonomies of influence operations tend to be incomplete—for example, the Carnegie Mellon BEND framework and the earlier 4Ds framework characterize only the means and some tactical objectives of individual and mass behavioral exploitation. MITRE’s ATT&CK framework, as well as AMITT, a library and clearinghouse of incidents and TTPs supported by fellow Atlantic Council Fellows Dr. Pablo Breuer and Ms. SJ Terp, formally categorizes adversarial tactics, techniques and procedures (TTPs), while specific frameworks such as Graphika’s ABC(D) focus on the “who,” the “what,” and some of the “how” of operations. These frameworks provide excellent summaries of certain key elements of influence operations but fail to address the big picture.

A few “big picture” models do exist, nonetheless. One example is the Malign Foreign Influence Campaign Cycle developed by the US Department of Justice (DOJ) Cyber Digital Task Force. However, given DOJ’s institutional objective of establishing a solid basis for legal action, the rigor and sophistication of the framework may be unwieldy for practical, time-sensitive use, and for deeper social and behavioral study.

All these frameworks can provide value to those addressing influence operations. However, as influence operations grow in complexity and technical sophistication, operators and analysts continue to lag in one key area: characterizing operations succinctly and effectively to colleagues and decision makers. Many describe their work using ad-hoc mental models in large part because existing classification schemes are either too simple to describe the nuances of complex operations or too specific to comprehensively summarize the entire information battlespace.

One key point that most will intuitively recognize, but that is too often absent from formal frameworks, is that the technical affordances of an information environment dictate available adversary tactics. Social media sites, news platforms, and mobile messaging apps form an operational landscape, and the features of each platform are all features that can be operationalized—hashtags, comment or reply capabilities, live video streams, shared-interest sub-groups, privacy settings, rebroadcast capabilities, advertising and ad targeting systems, chat rooms, and so on. Just as these features comprise the many ways that people digitally communicate with one another and browse content, they are also the means of capturing and refocusing attention on which bad-faith actors rely.

Introducing SCOTCH

In seeking a comprehensive yet succinct framework to serve the operational community, it is important to follow a Bottom Line Up Front (BLUF) philosophy: if a framework cannot be used to both quickly describe an operation and easily distinguish it from others, it does not work. Based on this approach, this paper has developed and operationalized the SCOTCH Framework for characterizing influence operations.

This framework was developed in close partnership with planners and operators within the United States Government (USG) and allied governments, analysis and data science teams across USG and NGO spaces, and several researchers and investigators from major news organizations and academic institutions. Operators examined how information is communicated to decision makers through chains of command, and how they might improve these information flows to enhance both situational awareness and decision making. Researchers examined how they sought to identify, contextualize, and communicate findings in order to improve resource allocation in a resource-constrained, data-rich environment.

The SCOTCH framework enables analysts to comprehensively and rapidly characterize an adversarial operation or campaign. Further, it is built to enable researchers and policy makers to explore the underlying facets and constructs of influence, propaganda, and psychological operations in an organized and straightforward way. In doing so, SCOTCH helps to bridge the research and policy communities and to identify dimensions of these operations that merit greater attention. The framework may be used at both the strategic and tactical levels of analysis. SCOTCH can characterize both a single operation and an overarching campaign.

The acronym describes:

S – Source

C – Channel

O – Objective

T – Target

C – Composition

H – Hook

Source

The source of a campaign may be identifiable individuals associated with a state or non-state actor, cutouts, “bots”, or a third party such as a moderator. In many cases, the source may not immediately be known to an analyst. During the 2020 US presidential election, the source was occasionally the platform itself, as Twitter, Facebook, and other platforms took measures to counter and limit the spread of what the managing organizations determined dangerous influence operations.

Channel

Both the platform and its associated features or affordances are channels.  A channel may be a news site, an online game and its chat features, an advertising platform, a social media platform, an online forum or chat room, and so on. Features of interest may include the availability and searchability of hashtags or viral content (and the existence of unsupervised “virality” algorithms), the existence of in-platform “groups” or subcommunities, the ability to live-stream video, the persistence and public visibility of posted content over time, and the ease of creating a new account and/or sharing new content. Such features create what some call a “dancing competitive landscape” for varied forms of attention and influence.

Objective

As with the source, when monitoring influence operations in real-time, the objective may be heavily obfuscated. However, objectives may still be indirectly inferred given prior experience with an adversary and its tradecraft.

Some of the most powerful influence operations are those that galvanize populations to pursue new objectives themselves. For example, in reviewing the QAnon conspiracies, a plurality of hypotheses exists regarding the group’s actual objectives:  an attempt to destabilize civil relations within the United States, a mechanism of making sense of abstractions such as “federal power,” to which many have limited exposure, a religious movement, or a cash grab, to name a few. Analysts must apply their experience and hypothesis testing abilities carefully in making a determination and must also recognize that objectives may change over time within a campaign— were any of these the original objective(s) of “Q”? In the case of QAnon, organizing a group with shared, extreme views may be understood as an objective in and of itself; once achieved, new objectives become attainable, ranging from further entrenching members’ beliefs, to doxxing and harassment, or even to real-world violent attacks.

Target

Conservatively, a target can be defined as the intended audience of a campaign over a specific channel. The target may be demographically and/or geographically bounded or characterized by shared beliefs. In terms of scope, the direct targets of a campaign may be users of an app, players of a game, individuals who meet particular advertising criteria, individuals who are characterized by social media platforms’ ad tech as members of some demographic category, members of a particular online community or network, subscribers of particular publications, and so on. An adversary may choose a “target-channel” pair based on the coverage of the target population afforded by the channel, as well as the sharing mechanisms baked into the design of the channel. In this way, available targets are determined in large part by the available channels and features, and in some cases, they can be further scoped by the personal data and metadata available to these channels about their users. The feasibility of targeting a particular group may also be mediated by a channel’s algorithmic capabilities, which are frequently opaque.

Composition

The composition refers broadly to the specific language or image content used in an influence operation. In many cases, it refers simply to the content being shared. However, in more sophisticated operations, it can also include technical details, such as the generation and employment of deepfakes or synthetically generated text and the structure and presentation of the materials. This category is also moderated by the channel and the hook (below), since the social channels and exploitation mechanisms leveraged will naturally inform the type of media content that may be generated and shared.

Hook

Typically, the hook of an influence operation represents both the technical mechanisms of exploitation, which are closely tied to the composition and channel(s), as well as the psychological phenomena being exploited. The hook relates to the tactics of persuasion and diffusion leveraged in the operation or campaign. These two constructs (persuasion and diffusion) are both complementary—by design, particular diffusion or injection techniques best serve particular strategic objectives—and occasionally substitutionary, wherein less convincing content that is more widely shared may achieve the same objective as highly persuasive content that is less widely shared.

SCOTCH example

At the campaign level, a hypothetical example of a SCOTCH characterization is: Non-State Actors used Facebook, Twitter, and Low-Quality Media Outlets in an attempt to Undermine the Integrity of the 2020 Presidential Election among Conservative American Audiences using Fake Images and Videos and capturing attention by Hashtag Hijacking and Posting in Shared-Interest Facebook Groups via Cutouts.

An influence campaign may feature multiple exemplary items within each category and may include multiple sub-branches representing a series of individual operations.  For instance, in the above campaign, a careful reader may identify and distinguish two separate operations, both characterizable using SCOTCH. In one, hashtag hijacking (a hook) was used to draw the general public (a target) to a particular narrative or shared-interest community (an objective). In the other, extreme content (including fake images and videos – a composition) hosted on low-quality media outlets (another channel) is injected directly into this community (a different target and hook) in order to harden group beliefs through collective sensemaking and social identity-building activities (a different objective). SCOTCH quickly and accurately summarizes both operations, as well as the broader campaign.

The benefit of this framework is twofold. First, it is lightweight: SCOTCH characterizations are succinct and intuitive, leading to short, comprehensive summaries that can be easily briefed and/or indexed. The above campaign characterization may remind many readers of headlines from major media outlets, and it takes only moments to read and interpret.

Second, SCOTCH offers decision makers the comprehensive information needed to understand an operation, and it provides sufficient information to take counteractions that specifically cater to the source(s), channels, objectives, targets, composition, and hooks observed. It captures the key parameters of an operation or a campaign and enables easy comparison between distinct operations without becoming unwieldy. To achieve the same using other frameworks, an analyst would need to draw from ABC(D), ATT&CK, and BEND all at once:

  • BEND for the behavioral, social network, and narrative hooks employed
  • ABC(D) for the sources, channels, and content composition observed in the operation, as well as channel-specific technical hooks
  • ATT&CK for characterization of and insight into the source and its behavioral & technical patterns, including common targets and channels

Conclusion

The SCOTCH framework is both a general-purpose framework for operational analysis and characterization and a starting point for deeper study and decision making. Strategic planners may use SCOTCH to frame adversarial operations as one component in a broader operational and sociotechnical context. From a research standpoint, SCOTCH provides a single framework for researchers to characterize influence operations to behavioral, technical, operational, political, and commercial audiences. Operationally, it seeks to enhance analysts’ sensemaking capabilities by covering all key points and to enable them to quickly and succinctly summarize their observations. However, there are still missing pieces. For instance, the framework does not provide for a more substantial explanation of how a campaign may play into existing narratives in a nation or community.  But in a bottom-line up front (BLUF) environment, brevity is often an advantage.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post SCOTCH: A framework for rapidly assessing influence operations appeared first on Atlantic Council.

]]>
Games with serious impacts: The next generation of serious games https://www.atlanticcouncil.org/blogs/geotech-cues/games-with-serious-impacts-the-next-generation/ Fri, 21 May 2021 18:32:07 +0000 https://www.atlanticcouncil.org/?p=393349 Nonresident Fellow RJ Cordes discusses the growth of and potential applications for the study of serious games.

The post Games with serious impacts: The next generation of serious games appeared first on Atlantic Council.

]]>
When most people think of gaming, entertainment is the first thing to come to mind, but it may be possible to leverage that entertainment for research, improving artificial intelligence, and simulating everything from pandemic and food security scenarios to regulatory impacts. By 2012, some estimates suggested that humans had spent over 50 billion hours, or roughly 6 million years, playing World of Warcraft, a long-running massively multiplayer online (MMO) game. As high as that number is, it’s important to note that World of Warcraft is just one of many MMOs, and MMOs are just one of many categories of game. With most gamers averaging six hours a week of playtime, which likely does not reflect COVID-19 pandemic binge-gaming, and the growing blur of what constitutes a “gamer” due to the sustained popularity of mobile games like Candy Crush and Angry Birds, the amount of time spent gaming annually is now unfathomably large. Many have seen this as an unprecedented waste of time and effort, but as Buckminster Fuller once said: “[Waste] is nothing but resources we’re not harvesting—we allow them to disperse because we’ve been ignorant of their value.” In this case, the resources dispersed are human attention and potential.

Not all have been so dismissive of the potential value in time spent gaming. The potential for positive impacts in terms of mental health, problem solving, and social skills, as well as the cultivation of helping behaviors such as sharing and facilitating, has garnered attention in popular media, but others are going further, seeing it not as time wasted but as collective effort waiting to be tapped. For those just now considering what value could be extracted from that collective effort, such as continuing professional development, education, research, and simulation, a good first step is acknowledging that the idea isn’t new. Looking to the history of this concept and recent successes in implementing it will help leverage new opportunities and prevent the repetition of past mistakes.

The beginning of serious games

The use of games for purposes other than entertainment is arguably as old as chess and the intentional design of such games begins with Kriegsspiel, a wargame designed by the nineteenth-century Prussian Army. However, the recognition of the value of the aforementioned effort untapped begins in the early twentieth century with John von Neumann, a Hungarian mathematician and physicist famous for his work on the Manhattan Project and his contributions to (and his founding of) a wide variety of fields. His “Theory of Parlor Games,” presented in a paper published in German in 1928, looked at the nature of “statistical hazards” in social games and how the clear, formal rule sets of games allow for bounding an environment such that the utility of certain decisions and the probability of outcomes can be measured, thereby allowing better understanding, measurement, and observability of behavior. He began with roulette-style games and worked up to more complex ones, suggesting, subtly, that the hazards of the game and the behavioral responses available to players “reflect the essence” of the real world. Neumann later worked with Oskar Morgenstern, an economist, to produce the seminal text “Theory of Games and Economic Behavior.” The work essentially founded the field of “game theory” by building on both Neumann’s previous work and the work of other mathematicians, such as Emile Borel, who had also seen the value of the bounded environments within games for analyzing behavior and probability.

Since that time, the tools and insights of game theory have been thoroughly interwoven in mathematics and science but are often removed from the analysis of actual “games’ as they would be defined in common parlance. Game theory focuses primarily on abstract games, with simple ones typically represented as matrices and more complicated ones being structured as models. These representations help researchers better understand behavior, forecast outcomes, and design mechanisms, such as auction rules, for engineering outcomes. The study and use of actual games have instead been picked up by a relatively new field known as “serious games.” The focus of serious games is the use and study of game-like systems and actual games to engineer and understand outcomes.

A full accounting of the field’s beginnings would be difficult to describe in detail here, but its transition from obscurity began in the 1980s with an observation by management consultant Chuck Coonradt. Coonradt found that there is painstaking effort in keeping professionals working in refrigerated warehouses, but an increase in absenteeism in days with good skiing weather led him to note that under the right conditions people would not only expend more effort than they would at work but that they would pay for the privilege to do so.  

“People will pay for the privilege of working harder than they will work when they are paid”

The Game of Work, Charles Coonradt

Coonradt has sometimes been referred to as being the “grandfather of gamification,” a result of his book “The Game of Work.” Coonradt asserted that while games and sports require work and effort, they enjoy the status of being enjoyable pursuits for six key reasons:

  1. Clear goals: The objectives of the work are clear and well scoped, making navigation toward those goals manageable.
  2. Scorekeeping: The measurement of performance outcomes are clear, comparable, and unambiguous.
  3. Feedback: Given the clarity of objectives and performance outcomes, individuals participating in a game or gamified system have reasonable basis to consider the impact of certain behaviors on results.
  4. Choice: Games and game mechanisms provide players with choices, some clearer than others—the clearer the choices, the more valuable feedback becomes, and the more opportunities are provided for players to invest in understanding the impacts of their choices on outcomes and in innovating or adapting those choices.
  5. Field of play: The time and space in which the game is played are well scoped, so players have clear expectations entering this scope: they know what to expect, what is expected of them, and that the game will eventually end, and therefore that they will have time to rest if they exert themselves.
  6. Skin in the game: This concept from game theory was communicated to a much wider audience in the book of the same name by Nassim Taleb—that players need to acknowledge some value on the table, some potential cost or gain at stake that is tied to their performance in order to play effectively and fairly.

Coonradt argued that these principles could be applied when designing mechanisms for work-flows and processes, in order to achieve desired outcomes. Today many might refer to this idea as the “gamification” of work. Coonradt’s principles have since been tested and implemented in a wide variety of domains. An enormous amount of work has been done since the 1980s on both adapting work flows for gamification, and developing games to facilitate continuing professional development and education. This being said, many suspect that the value of gamifying work has been overhyped. Much of this critique is fair, sometimes targeting the nature of the concept but usually focusing on the nature of implementation and the unreliability of setting out to design engaging experiences. Designing games that are engaging is difficult enough when the center of gravity for design is player engagement, let alone when it is learning chemistry or debugging an operating system. As would be expected from the trajectory of hype in any particular concept, serious games has fallen from what the Gartner Hype Cycle defines as the “peak of inflated expectations” and is on the road toward more reasonable and practical applications.

A part of this progress has been acknowledging the “blood and bones in the path”— designing engaging experiences is difficult and unreliable— and that the driving questions need to change focus from designing games to engineer outcomes to adapting and adopting games to engineer outcomes. Moving from “design from scratch” to “adapt and adopt” has ushered in a new era of serious games, and this shift in prioritization began with an observation: “People are already playing these games for billions of hours, can we harness that work for something useful? Can we adapt the games for impact?”

MMOS and Eve Online

The speculation that games could be adapted and adopted rather than designed from scratch to engineer outcomes had been considered before, but when Attila Szantner, a co-founder of the pre-Facebook, Hungarian social media giant iWiW, considered the billions of hours being spent gaming in 2015, he decided to take action. Attila and his partner, Bernard Revazs, converted theory into practice by founding MMOS (Massively Multiplayer Online Science), which set out to apply the adapt-and-adopt approach for citizen science initiatives. In discussing citizen science and the perspective of MMOS on serious game design for this article, Attila demonstrated a grounded and realistic approach. He noted being inspired by previous work in designing serious games for research and education, particularly Zooniverse and other forms of “people-powered research.” Attila’s insights on the topic cautioned against the perception that the work of MMOS was entirely revolutionary—he claimed that there is no shortage of games designed for research and education and that their impacts should not be overlooked. The key difference between MMOS and these past endeavors however was that they avoided constructing any game from scratch, instead focusing on refining a recipe for turning players of existing games into “virtually limitless human computation engines for citizen science.” The recipe, or at least the outline for it, goes something like this:

  1. Find games that players already love;
  2. Find large-volume, modular research tasks (such as image and pattern classification);
  3. Map the tasks to potential adaptations in-game that facilitate the crowdsourcing of those research tasks;
  4. “Connect the dots with the in-game lore” to make the adaptations to the game “aesthetically fitting and thematically adoptable;” and
  5. Inform the players that by playing the game they already love, they’re contributing to science and making an impact.

Attila wasn’t shy in stating that there was another factor in making this recipe work: developing a relationship with CCP, an Icelandic game company that runs EVE Online. In fact, the recipe itself was developed in part by EVE Online’s game design director at the time, Petur Thorarinsson. No short description of EVE can do it justice, but, in short, it is a space-focused MMO game in which players pilot and manage ships in a vast set of star systems known as “New Eden,” competing and cooperating in their attempts to control territory and earn money in a myriad of ways. EVE is one of the longest running MMOs of all time and is innovative for a number of reasons, but of interest here was CCP’s willingness to commit to and experiment with academic and scientific collaborations, evidenced by the fact that they employ an in-house economist.

The first collaboration with MMOS, championed by EVE Online’s former executive producer, Andie Nordgren,  produced “Project Discovery,” a minigame within the game that connected players with tasks assisting researchers at the KTH Royal Institute of Technology in Sweden with the Human Protein Atlas project, helping to identify subcellular localization of proteins in cells. The task chosen was an image classification activity traditionally delegated to researchers with specialized training. The reigning consensus at the time was that only large-scale machine learning algorithms could handle parsing the millions of images that needed to be classified. Given expectations calibrated by past projects, when Emma Lundberg, the KTH researcher leading the Human Protein Atlas Project, first engaged in the collaboration, among her team’s higher estimates was forty thousand classifications a day. Very early into the launch, they saw as high as nine hundred thousand classifications per day, far exceeding expectations. When critics suggested that machine learning solutions could outperform EVE’s citizen scientists, they weren’t necessarily incorrect—Loc-CAT, a state-of-the-art tool for protein classification tasks did outperform players on common classes of proteins. However, the aggregate data from EVE players outperformed Loc-CAT in identifying rare cases and novel patterns. The aggregate data was integrated into the machine learning model using “transfer learning,” and this injection of human insight boosted the performance of Loc-CAT “significantly.”

The project generated tens of millions of image classifications. These were true milestones for the medium that would soon be rivaled by future collaborations between MMOS and EVE Online. In the next iteration of Project Discovery in 2017, EVE players were tasked with the analysis of astronomical data from the CoRoT telescope in collaboration with Reykjavik University and the University of Geneva. The project itself was championed by none other than Professor Michel Mayor, the famed co-discoverer of the first exoplanet, 51 Pegasi b. Similar to Lundberg, at the start of the project Mayor seemed to temper his expectations, suggesting that the opportunity for outreach and participation was in and of itself a reasonable goal. MMOS did so as well, noting that they did not expect to achieve the same level of success as the previous iteration. However, by week two they had achieved a peak classification rate of 1,539 classifications per minute and a total of 13.2 million classifications, breaking all previous daily records. By the project’s end, 279 million classifications had been performed by 422,000 players. Mayor, speaking about Project Discovery after attending EVE Online’s yearly “Fanfest” conference, stated, “I discovered a new world twenty years ago with a telescope and another one this year when I learned about EVE Online.”

Project Discovery is currently in its third iteration, this time tackling COVID-19 in collaboration with the University of Modena and Reggio Emilia, McGill University, and the University of British Columbia. A statement from Dr. Ryan Brinkman, a professor in medical genetics at the University of British Columbia reads:

“This project is crashing through all my expectations, with players continuing to show great engagement and interest in the work we are doing, as well as providing huge amounts of high-quality data for our research. Their efforts will not only contribute to the understanding of COVID-19, but the data they are generating will also be freely and widely shared with the entire scientific community. There is very high interest in re-using their results for the generation of machine learning algorithms. There is simply no other resource out there for this anywhere close to what is now being generated.”

In response to a request for an update on the undertaking, the Project Discovery team indicated that over 82.2 work-years have been spent by 327,000 active players thus far, with a running total of 119 million classifications.

MMOS is now bringing the lessons of its success with EVE to other franchises, such as Borderlands 3, a first-person shooter developed by the company Gearbox. In a collaboration among Gearbox, McGill University, and the UC San Diego Microsetta Initiative’s “American Gut Project,” the game’s millions of players now have the opportunity to help map the human gut microbiome, which is instrumental in developing a wide variety of medical treatments. That study, too, has vastly exceeded expectations. Even the humble Attila was proud to note that they had performed in a single day five times the analysis that some traditional web-based citizen science projects had done in ten years. The success in a mainstream first-person shooter was a key test for the approach, indicating a reliability in implementation that allows risk-averse organizations to consider adoption and ambitious ones to experiment further.

Social complexity in virtual worlds

Looking further into EVE Online begs a question: what else can the game be adapted to produce? One only needs to take a single, real look at EVE’s player base and complex social organizations to consider its potential for continuing professional development, large scale social simulation, and wargaming in complex environments. EVE draws a very interesting global crowd: it is infested with policy wonks, scientists, military and intelligence professionals, members of the foreign service, politicians and lobbyists, traders and fund managers, and other working professionals, some of whom have publicly admitted to using the game to hone skills they apply in the real world. For example, a significant portion of players are information professionals, and EVE’s provision of a robust API for developers to build on gave them an opportunity to practice their craft by providing a wide variety of complex tools for other players. It is a game often played by people with serious jobs, and their impacts as professionals are seen in the presence of organizational wikis, thousands-strong organizations with formalized roles and workflows, functioning intelligence networks, mercenary organizations, logistics companies, months-long military campaigns, and complex financial instruments. When asked about EVE’s dark, dystopian world of political and financial intrigue, and its tendency to attract hedge fund managers, Russian tycoons, and working professionals, the company’s CEO, Hilmar Veigar Pétursson, simply asked “Have you looked outside?”—a response that closely parallels von Neumann’s aforementioned observation that games often reflect the essence of the real world.

This set of EVE players does not necessarily bring the seriousness of their jobs to the game, but that doesn’t mean they don’t make serious impacts. For example, EVE’s “Space Pope,” a player who once influenced many thousands of players to engage in a religious crusade, is actually a thirty-year veteran of NASA’s Jet Propulsion Laboratory. EVE’s Council of Stellar Management, an elected board that represents players to the developers of the game, has amongst its members a Republican politician and maritime law lobbyist. While running for election he noted, “If you replace ‘government’ with CCP, ‘union members’ with the player base, and ‘country’ with the game world, I’m already basically a [member of] CSM. It’s literally my day job.” A fact well known to the EVE community, but perhaps not to those outside of it, is that Sean Smith, a Foreign Service Information Management Officer and Airforce Veteran among the casualties of the 2012 attack on the US Consulate in Benghazi, was a key figure within EVE. Known within EVE as “Vile Rat”, Sean is memorialized as having shaped the geopolitical environment there to such an extent that even Hillary Clinton noted his impact in the game. Before shirking off what it means for these individuals to have had an impact on the geopolitical environment of a virtual world, it should be noted that whole books have been written on the histories of conflicts within EVE Online, and certain assets in-game can be worth tens of thousands of dollars and take months to build.

In response to a request for more information, EVE’s Senior Strategist Tryggvi Hjaltason, a former intelligence analyst who was originally hired to focus on monitoring financial activity in the game, discussed how EVE generated such a complex economic and political environment. Tryggvi, in reference to EVE’s reliability in generating social complexity, lovingly referred to EVE as the “friendship machine” and laid out the framework for the mechanisms that drive it:

  1. Agency: The game does not restrict identity in the way other MMOs do—EVE simply provides for trade-offs in skill-choices and has some formal roles for participation within in-game corporations. The rest is up to the individuals. It is up to the player to define their identity. This self-definition means a wider variety of choices and strong consideration of the impact of their choices in game.
  2. Loss is Real: This concept maps perfectly to the aforementioned “skin in the game” principle. Unlike other games famous for high-priced game assets such as Second-Life, high value items in EVE can be destroyed in conflict, and players do not always have a choice in whether or not conflict happens. A player could spend many months and thousands of dollars of in-game currency building something valuable just to have it destroyed or taken from them hours later. The cost of mega-projects and the inescapable danger of loss creates a center of gravity for cooperation, contractual relationships, and the development of trust. These scale from small groups of individuals protecting each other from emergent mutual threats all the way to thousands of individuals participating in large-scale conflicts to defend trading territories. Tryggvi noted that war bonds have even been issued to fund some of these conflicts—such instruments in-game rely entirely on institutional trust.
  3. Active Facilitation. An insightful factor rarely found in analyses of social complexity in games, but have found in analyses in successful communities of practice such as Complexity Weekend, is the presence of Facilitators, whom Tryggvi referred to as “helpers,” or participants in the system of interest who are incentivized by having a positive impact on others. Tryggvi believes that EVE attracts these helpers because of the aforementioned agency and potential for loss. The ability to construct a unique identity means that one can manifest an identity as a helper, and the impact of their actions in the construction of that identity is proportionate to the potential for catastrophe that EVE allows for.

It is remarkable that these factors mapped so well to descriptive, but difficult to test, models from organizational research and cybernetics. When asked whether EVE Online would make use of this “friendship machine” for research beyond classification tasks, for contributing to organizational and personality research for example, Tryggvi noted that it was of interest and that there were parties who might be interested in collaborating on some of these fronts, but he also noted caution would be needed as CCP strictly abides by EU data privacy regulations. Given the success of the numerous iterations on Project Discovery, it’s likely that they could actually translate those interests and ambitions into results.

The future of serious games

Given the success of the adapt-and-adopt model in applying serious game approaches to citizen science and the social and political complexity EVE has shown is achievable in virtual worlds, some ideas for serious games that had previously been cast aside for being too ambitious should be reconsidered. Perhaps the greater challenge was for a group of researchers studying social and political complexity to build a game as engaging as EVE, not whether or not that level of complexity could be achieved in a game. There are a series of potential applications of serious games that can now be resurrected:

  1. Wargaming in the gray zone: Using virtual worlds as a basis for conducting wargames isn’t new, but using them for gray zone warfare is a challenging proposition. Simulating society, geopolitical environments, or creating a synthetic internet are monumentally difficult endeavors and potentially not achievable depending on what expectations are present. Perhaps games like EVE could be adapted or adopted to allow for exploration, experimentation, and education through wargaming. Wargames building on games that already have complex markets and information systems might be uniquely suited to prepare participants for the kind of campaigns seen in the modern operating environment.
  2. Pandemic research: EVE and MMOS are already making an impact here, but there’s always more that could be done. Research on contact tracing, policy impacts on trade, and network impacts of narrative influence on behavior could all be potential research initiatives of interest. In particular, epidemiological models could be greatly enhanced with data from these kinds of games, evidenced by the fact that World of Warcraft once served as an accidental testbed for pandemic research after an in-game illness escaped its intended environment.
  3. Next-generation simulation: There is an ongoing discussion about the impact of focusing on “toy problems” and failing to communicate the limitations of models in a variety of fields, especially economics and artificial intelligence. Further, the simulation mechanisms intended to generate data “closer to the real thing” come with their own limitations. When trying to study human behavior, nothing is better than actual human behavior. Virtual worlds could provide a sandbox for research of a variety of complex social and organizational phenomena without the ethical dilemmas posed by research projects like Facebook’s psychological experiments on its users. Further, given the proof-of-concept impact EVE player-data had on its AI-counterparts, there is an opportunity for researchers using agent-based models to study phenomena like foraging and information dynamics to both see their ideas tested and to collect data to better inform future models.
  4. Research beyond classification tasks: There is a paradox in researching organizational behavior: the constrained environments of the lab and the timescales used means better data, but on behavior that might not reflect the real world. On the other hand, the unconstrained environments of the real world mean limited or illusory data, and the generated insights can take many years to reduce to mechanisms that are (or are not) reproducible in the lab. Virtual worlds allow for both large-timescale experimentation and more iterations of short-timescale experimentation in spaces that can provide participants with an enormous amount of flexibility in their choices while still ensuring data collection can be robust and accurate.
  5. Non-scientific annotation and knowledge-management problems: There are plenty of places outside the context of traditional scientific research where annotation and classification data is needed: the creation of nuanced training data-sets for specific machine learning use-cases and filling in the gaps on citation data, for example. While these needs have been approached in the past using other crowdsourcing methods, applying a serious games approach would greatly lower costs and increase the rate of completing these tasks.
  6. Regulatory sandboxing: Regulation and legislation are especially difficult to plan and implement as societies become increasingly complex, jurisdictions become increasingly interconnected, and technology rapidly changes. A recent solution to this has been “Regulatory sandboxing,” where companies are placed into incubators or special, closed regulatory environments for new but potentially risky business models not yet addressed by current regulations. This environment allows for those business models to be tested in a supervised environment while regulators consider how their operations would be affected under new regulations. Virtual worlds with sufficiently complex economies and political environments may offer opportunities to actually test many iterations of regulations and policies to get some sense of loopholes and impact.

In response to concern about how some of the approaches might affect gameplay in live MMO environments, a few players of EVE Online provided their take on the potential impacts. The use of regulatory sandboxing and general research didn’t seem to faze any of them, even where it involved data collection. However, when they expressed initial confusion about the potential impacts of having professionals conducting gray zone wargames to impact in-game geopolitics, asking for a strict definition of “gray zone.” When given one, they laughed and clarified that it is simply what the EVE community refers to as the “metagame.” Spycraft, narrative campaigns, social engineering, and sabotage were already so commonplace as to be considered comically benign. When asked what it would mean to have professionals actively attempting to “ruin” the game for others by attempting to impact market prices or disrupt trade in order to better understand food security risks, interviewed players pointed to a recent post from one of CCP’s own developers responding to a player on a public forum. The developer noted that many had come to EVE to ruin the game for other players but have done little more than generate “oodles” of content for other players, and they proudly stated that “we are all still here, trying to ruin it for each other. And it is great.”  Maybe few game platforms are appropriate or resilient enough for this particular kind of productive and participatory serious game. However the fact that one exists bodes well for the future of serious game implementations.

GeoTech Cues

May 19, 2021

Games with serious consequences: Extremist movements and kayfabe

By Richard J. Cordes

Extremist movements and emergent collectives have found a home in online communities and platforms. In this piece, Nonresident Fellow R.J. Cordes outlines how such groups blur the lines between games and reality and presents a strategy for handling online collectives effectively.

Digital Policy Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Games with serious impacts: The next generation of serious games appeared first on Atlantic Council.

]]>
Games with serious consequences: Extremist movements and kayfabe https://www.atlanticcouncil.org/blogs/geotech-cues/games-with-serious-consequences-extremist-movements-and-kayfabe-2/ Wed, 19 May 2021 19:34:41 +0000 https://www.atlanticcouncil.org/?p=391994 Extremist movements and emergent collectives have found a home in online communities and platforms. In this piece, Nonresident Fellow R.J. Cordes outlines how such groups blur the lines between games and reality and presents a strategy for handling online collectives effectively.

The post Games with serious consequences: Extremist movements and kayfabe appeared first on Atlantic Council.

]]>
This year, millions of people have found themselves sitting in both awe and terror, entirely bewildered by the views they’ve discovered others have embraced. It is easy to question why people hold beliefs that make us purse our lips and raise our brows, but the simple answers, “They’re crazy,” “They’re stupid,” “They’re evil,” simply aren’t very good explanations when speaking about hundreds of thousands or millions of people. In the common maxim attributed to H.L. Mencken, “For every complex problem there is an answer that is clear, simple, and wrong,” and reflexively designating malintent, low intelligence, or mental disorder as a primary basis of explanation for the existence of any belief system one might deem beyond the pale is certainly clear, certainly simple, and probably wrong.

When looking at the impacts of extremist movements and emergent collectives online, such as QAnon, Right Wing Militias, Accelerationists, and Left-Wing Splinter Groups, it’s important to recognize that common assumptions about them may be illusory. The stories carried within digital discourse are dynamic and subject to change over time both on public and private channels, and there may be both accidental and purposeful blur between what the members of those groups think is real, hypothesized, exaggerated, or a joke—a blur between reality and performance sometimes referred to as kayfabe, a concept which will require a bit of groundwork in order to be addressed properly.

The origins of radical ideas

Sociotechnical systems like social media are complex, and within every narrative thriving inside them are a myriad of actors playing a variety of roles at different stages of its life cycle. To assign one explanation to all of those who hold a belief is to resign oneself from ever having a chance to understand the belief’s origin, evolution, spread, and plausibility, because no idea is an island, and no belief system is composed of a single personality. In researching the origins of these chimerical ideas—conspiratorial webs of blurred truth and falsehood, both in the mainstream and on the fringes—one of the most important areas is the scaffolding. At some point, somebody did work putting together the necessary structures upon which others would be inspired to add—this material didn’t appear out of nowhere—but the question is, why did the creators do this work? Careful observation may indicate that some of the actors playing a role in the evolution of these ideas didn’t believe in them at all.

The first thought that comes to mind for some professionals is likely the foreign state actor—however, state actors have significant cultural barriers to overcome in attempting to co-construct and develop ideas abroad. It’s not that they can’t be successful—it’s just difficult and highly unreliable, which is the reason nations that prioritize narrative warfare have generally moved toward “water on a stone” and systems-oriented doctrine. The idea isn’t to expect any single deployment to be effective. Instead the focus is placed on many attempts at amplification and placement of information paired with disruption and escalation. There are other types of actors who are much better positioned to create rather than adjust, and one of these types of actors is simply playing a game—a game with serious consequences.

A necessary primer

First, a necessary primer: there is a concept called “serious games” that has shown up in academic, military, and systems engineering literature from time to time, with shifting definitions from the time of its inception to today. In short, the study of serious games is the study of game-like frameworks and systems that have impacts or purposes other than entertainment—and unfortunately this field has generally received limited attention and poor reception. Part of this poor reception is shared with the mathematical field of “game theory,” where the uninitiated layman might have preconceptions about the term game, preventing them from taking the study seriously despite its importance and impact. Ironically, the study of serious games has been taken even less seriously. Where game theory has the benefit of being an integral part of the mathematics community and being associated with mathematical formulas and frameworks, the study of serious games is often accompanied by actual games. The concept has nonetheless had some popular impact: those who have heard the terms “gamify” or gamification have interacted with concepts from this area of study.

Sometimes games are designed to be entertaining while providing meaningful impacts. For instance, a project undertaken by MMOS (Massively Multiplayer Online Science, an organization which helps tie scientific research to gaming experiences) introduced into the already existing MMO space game Eve Online a minigame that has players hunt for exoplanets for personal gain in a fictional universe, using and creating data that could be used to find real exoplanets. In other cases, games are designed solely for entertainment. Meaningful impacts might emerge still, as was the case with World of Warcraft teaching many of its players about economics and accounting, as well as human resources and organization management practice, as they attempted to coordinate and plan with dozens of other players in order to handle the game’s most difficult content.

A blur between games and real life

What these two games have in common, however, is that even when role-play occurred, it was impossible to be unaware of the fact that a game was being played. All activity was inside of a virtual frame, inside an application that displayed a virtual world (the game itself), and whenever role-play occurred outside of this frame, it was still tied to digital avatars players use to interface with that virtual world. In some games however, this line is not so clear. Consider the game Ingress, created by Niantic, the makers of Pokemon Go. Ingress was not just played online as it was an augmented reality (AR) game, where a smartphone layered images onto the real world. Players chose one of two factions, each of which vied for control over “portals.” These portals were, according to the game’s lore, entry points for something called “exotic matter,” which was said to affect human cognition. Players acted as “agents” working for their faction in coordination with other players to control portals for noble and storied reasons.

PokemonGO. Photo by David Grandmougin on Unsplash.

Players often began to roleplay as agents and to take actions that certainly weren’t part of the affordances that Niantic offered. This included the adoption of intelligence and counterintelligence practices, threats and intimidation, surveillance of other agents, and cyberattacks. Players began to map the game’s lore to the real world. For instance, players discovered that the fictional intelligence organization, the NIA, said to have been involved in the discovery of the portals, was actually a real organization. The US NIA was replaced by the US National Security Council in the 1940s, but players found patterns in declassified documents and matched dates to suggest that this was actually a “hidden” agency that had never been shuttered. Further, “exotic matter” is actually a real phenomenon and, while the game was at its peak, scientists succeeded in detecting and creating it. For many of the players involved, linking these events in the real world to the game was just part of the role-play, but for many, that line began to blur. Some players began to suggest that the portals represented in-game were actually real, that they could feel their effects, and that the game was built by intelligence agencies to direct people to do the work to ensure the portals were used for good or destroyed before others could use them for evil. Trying to sort out what people actually believed versus what was just part of the role-play is now very difficult.

This is not a unique phenomenon. With the amount of information online, it is nearly always possible to find patterns that support arguments, and eventually the line between role-play and reality blurs. There’s a word for this blur: “kayfabe,”  which was originally used to refer to the stage performances of wrestlers and the drama that unfolded in and outside of the ring. It’s difficult to tell what’s real and what’s not when the performance isn’t constrained to the stage and when it incorporates ongoing events in the real world—and even more difficult when there’s no stage at all. Here, another example of a serious game emerges: a game that is emergent, has no clear boundaries on its environment, and has not yet received a specific definition. It’s a game in which players lock on to a specific theme and then work to discover relatively reliable information to form patterns that support or reinforce that theme. The themes can be silly and inconsequential, or they can be serious: “there is an international cabal of Satan-worshipping pedophiles which infest our academic, cultural, and governmental institutions.” It’s something of a collaborative and improvisational puzzle game, and it falls into the same category as Cadavre Exquis and PPPiP (Partner Pen Play in Parallel)—game-like frameworks in which “players” co-construct art, narrative, and story by adding pieces to the work of others, sometimes in parallel. The closest match to this cooperative co-construction of narratives online is found in systems derived from the fictional Glass Bead Game in the book of the same name by Herman Hesse, in which players attempt to form meaningful connections between otherwise unrelated concepts and topics.

Even in the case when the theme is ridiculous by any standard, such as thinking “Finland Isn’t Real,” Kayfabe can blur what is real for those who are not in on the joke, and in some cases, even those who are in on the joke can begin to question whether or not it is a joke. The text-based nature of internet communication only deepens the confusion, as text lacks tone and inflection. Further, the fact that these games are often played with strangers further subverts the norms necessary to distinguish sarcasm or role-play from serious expressions. “Finland Isn’t Real” is not hyperbole. It is a very real (or parody of a very real) conspiracy theory about the fabrication of a country and landmass for the sake of preventing competition over fishing rights. The creator has stated it was a joke and yet, after its spread, wasn’t sure anymore that it was being interpreted as such. It gained a life of its own with people forming fairly convincing arguments—players are rewarded with attention, and the patterns they find that can’t be easily dismissed get added to the collective story. The emergent, Glass Bead-like mechanisms within have been framed as a set of game mechanics on more than one occasion.

The Proud Boys: A case study

While the spread of “Finland Isn’t Real” certainly exceeded expectations, it never impacted national narratives. Study of narrative warfare would suggest that this is because it failed to channel preexisting resentments, hostilities, tribalism, and fears, but others have. For example, the Proud Boys arguably started as a joke, but few people considered them as such when then-US president Donald Trump asked them to “stand back and standby.” The Proud Boys allowed for a creative co-construction of a parody of a men’s club built out of mechanisms intended to facilitate the development of masculinity in opposition to what was perceived to be the destruction of Western culture.

Much like “Finland Isn’t Real,” this creative co-construction began with more obvious parody and Kayfabe acting. The founder, Gavin McInnes, a co-founder of Vice, is an excellent example of a Kayfabe actor; he was well known as a provocateur prior to the founding of the Proud Boys, with a number of public figures stating that there was a consistent blurriness to what he was joking about and what he was serious about—a reason for which Vice sought to distance itself from him.

Gavin’s footprint is obvious in the Proud Boys’ initiation rites, which include an initiate being beaten by five men until they can name five breakfast cereals. However, the obvious parody begins to blur further after coverage of a series of violent encounters resulted in the leaders claiming they were intentionally misrepresented, generating a narrative renaissance within the organization that increasingly attracted violent members and created an impetus to action that eventually transformed it into something that McInnes himself and others felt compelled to leave. The Proud Boys took on a life of its own, with many splinter groups and chapters continuing to co-create a narrative about the West and its enemies—the ideas that fit the narrative got pushed to the top, and the ideas that didn’t, got left behind. Mainstream organizations and platforms have now blurred the story further; McInnes has since been repeatedly deplatformed, and his explanations of the founding of the Proud Boys and his early appearances speaking about it have now mostly been taken down, censored, or put into supercuts removed from their original context.

Future handling of emergent collectives

The truth is that this blur is deeply discomforting—humans are naturally uncomfortable with the sense of not knowing. When reducing the complexity of the environment is difficult, the brain tends to reduce the complexity of the strategy used to make sense of it. People want to put these “groups,” such as QAnon, into the same box as more clearly defined organizations like the KKK or ecoterrorist groups. People crave objective and unambiguous claims such as:

  1. This is who they are.
  2. This is what they believe.
  3. This is what they want.
  4. This is who their leaders are.
  5. This is their origin story.

However, getting used to the discomfort of not knowing and being cautious about accepting craved, unambiguous explanations are necessary to avoid the risk of blurring the space further through oversimplification—after all, the people in these groups are not the only ones provided with multiple incentives for advancing popular narratives.

Further, these groups won’t be the last of their kind, as the GameStop swarm event, January 6, and ongoing mayhem in Portland have demonstrated, emergent collectives online, extreme or not, will likely have as much of a role in defining the twenty-first century as nation-states, and, this being the case, there are actions and strategies that could be taken and implemented now to avoid being caught off guard by their materialization in the future:

  1. Talking about and researching conspiracy theories and radical narratives as a subset of more general, benign psychological phenomena can improve understanding of the space, avoid escalating tensions, and inform the creation of tools that help to maintain cognitive security.
  2. More than 60 percent of Western adults use social media as a primary source of news. It’s time to fund the development of new tools for navigating the information environment online to provide alternatives which optimize for metrics other than dwell time and moderate emotional engagement rather than incentivize it.
  3. Research on attitudinal change suggests that being aware of one’s own narrative is a basis for changing it. Tools that help monitor and aggregate flow and change of digital narratives may help moderate their impacts, especially if stakeholders in those narratives are allowed to contribute.
  4. Funding nonpartisan, interdisciplinary research on memetics (spread and adaptation of cultural artifacts), narrative, and serious games might help codify sometimes obscure topics and allow better monitoring and aggregation of influence online. Using a serious games approach to frame the underlying incentive structures behind the digital spread of narrative and co-construction of story and art could help enable monitoring and predicting radical shifts online and help prevent missteps in handling them.

Digital narratives cannot be censored out of existence. It would be like hitting mold with a hammer: fact-checking has been mostly ineffective at cooling tempers, and shaming or exiling believers often just drives individuals further into radical communities by creating both a common vacuum for community and a common object to bond over. Even if they could be removed entirely, doing so might be unwise. In all disruptive digital narratives there is a blur of truth, art, exaggeration, parody, and risk. This is the risk that comes with having a free society—or perhaps a risk that defines it. Nietzsche once wrote, “A superstitious society is one in which there are many individuals and more delight in individuality,” but inaction is not an option. It is past time to improve the ability to monitor, impact, and discuss the spread of narratives online.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Games with serious consequences: Extremist movements and kayfabe appeared first on Atlantic Council.

]]>
Event recap | Computing to win: Bridging the AI compute divide https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-computing-to-win-the-ai-compute-divide/ Wed, 19 May 2021 12:47:00 +0000 https://www.atlanticcouncil.org/?p=392772 On Wednesday, May 19, 2021 at 12:00 p.m. EDT, the Atlantic Council’s GeoTech Center hosted a panel of experts focused on the future of national AI strategy and the computational infrastructure needed to advance AI ambitions.

The post Event recap | Computing to win: Bridging the AI compute divide appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

There is a global policy imperative to leverage artificial intelligence (AI) to drive economic growth. While AI may generate substantial economic value over the next decade, this value will not be evenly distributed or easily captured. Early signs point to a lack of understanding and planning around domestic AI compute capacity that is fueling a “compute divide” that will strangle innovation across governments, academia, startups, and industry.

In order to address these problems, policymakers must include “domestic AI compute capacity” in strategic planning and budget priorities. Doing so is a challenge, however, due to a lack of standards and definitions. The OECD recently established the AI Compute Taskforce to address this policy gap. US and European government leaders and policy experts must take additional steps to address the following questions:

  • How much domestic AI compute capacity do we have?
  • How does this compare to other nations?
  • Do we have enough capacity to support our national AI ambitions?

Featuring

Dr. Divya Chander
Faculty Chair, Neuroscience
Singularity University

Charles Jennings
Founder
NeuralEye

Saurabh Mishra, PhD
Researcher and Manager, AI Index
Stanford Institute for Human-Centered Artificial Intelligence

Keith Strier
Vice President, Worldwide AI Initiatives
NVIDIA

Hosted by

Stephanie Wander
Deputy Director and Senior Fellow, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Jun 9, 2021

Event recap | The human dimensions of autonomous systems employing AI

By the GeoTech Center

A GeoTech Hour discussion exploring what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives.

Digital Policy Technology & Innovation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Computing to win: Bridging the AI compute divide appeared first on Atlantic Council.

]]>
How to modernize the global food system for an economy of trust https://www.atlanticcouncil.org/blogs/geotech-cues/how-to-modernize-the-global-food-system-for-an-economy-of-trust/ Thu, 13 May 2021 16:42:00 +0000 https://www.atlanticcouncil.org/?p=473284 The pandemic has revealed that the world is increasingly governed by complex food systems prone to cascading failure. Decision-makers must leverage new technologies and data analytics to build stronger, yet flexible, global networks to rely on.

The post How to modernize the global food system for an economy of trust appeared first on Atlantic Council.

]]>

Editorial

On March 11, 2020, the World Health Organization (WHO) declared COVID-19 a pandemic, but the virus’ impact quickly spread beyond public health. Indeed, just one month later, supply chain disruptions and job loss escalated the pandemic into a global economic crisis. Economic hardship, in turn, spawned social unrest, sparking waves of protests around the world. Taken together, the COVID-19 pandemic has revealed that the world is increasingly governed by complex systems, defined by Dr. Melanie Mitchell as “a system in which large networks of components with no central control and simple rules of operation give rise to complex collective behavior, sophisticated information processing, and adaptation via learning or evolution.” Such networks are prone to failure – when one node breaks down, because of the complexity of the entire system, others fall too. As the authors of a recent National Institute of Health paper put it, “the random events in a market in Wuhan, China have released a set of cascading consequences that have diffused across global networks.” 

The global food system demonstrates how cascading failures in complex systems result in economic hardships that are exponential in scale. As the GeoTech Center predicted in March 2020, the pandemic has exacerbated global food insecurity. In the pandemic’s early stages, food-exporting countries—including Russia, the world’s largest wheat exporter—moved to restrict or suspend their crop exports. Anticipating a worldwide food shortage as a result of such restrictions, net importers like Jordan began stockpiling food supplies, creating a feedback loop that further drove up prices and threatened to push low-income populations into food insecurity. Now, the UN World Food Program estimated that 270 million individuals are on the brink of starvation, double the pre-COVID figures. Cascading failures of this scale will only be more common in the future. Prior to the pandemic, GeoTech Center Nonresident Senior Fellow Dr. Marcus Ranney and Action Council Member Mr. Sahil Shah wrote that “climate change presents unprecedented challenges to agriculture,” noting how “the increasing incidence and severity of natural hazards, soil degradation, a decline in arable land, climate-related migration and conflict, all contribute to the challenges we are facing to food security.” 

Of course, while the threat of cascading failures may be increasing, it is not new. As GeoTech Commissioner Dr. Shirley Ann Jackson explained in a recent interview with the American Institute of Physics, the 2011 Fukushima nuclear disaster is an example of a complex-systems failure with cascading consequences. Russia’s 2017 NotPetya cyberattack is as well. Indeed, all of today’s complex systems are vulnerable to failure, for they were designed for efficiency over resilience. But as the risk of cascading failures increase, it is imperative that decision-makers create strong, yet flexible, global networks that governments, business, and citizens can rely on.

A successful strategy would leverage new technological developments and data analytics, building a more efficient and resilient decision-making process that better connects farmers and consumers. Moreover, COVID-19 lockdowns have accelerated the trend toward the use of online platforms for food purchases, and have proven the critical role of digital infrastructure in making food accessible and reducing the risk of food perishing. The Asian Development Bank however warns that “giving farmers access to e-commerce requires support to standardize production, organize the farmers, and build logistics capacity in remote areas.” Incentivizing the digitalization of the supply chain would also benefit consumers, who have become more educated and are increasingly demanding to know if the foods they consume are environmentally and socially sustainable or not. Experts highlight that, as a result, “food product traceability, safety, and sustainability issues have become crucial concerns to food retailers, distributors, processors, and farmers. Digitalization,” they argue,  would allow “food supply chains to be highly connected, efficient, and responsive to customer needs and regulation requirements.”

Ultimately, an economy of trust is an economy of efficiency, resilience, transparency, and accountability.

Sincerely,

Pascal Marmier
The Economy of Trust Foundation / SICPA
Dr. David Bray
Atlantic Council GeoTech Center
Borja Prado & Benjamin Schatz
Editors

Get the Economy of Trust newsletter

Sign up to learn about advances in technology and data activities that, through trust and more transparent frameworks, improve nations and sectors alike.

Latest Reseach & Analysis

The post How to modernize the global food system for an economy of trust appeared first on Atlantic Council.

]]>
Event recap | Exploring the future of data, human rights, speech, and privacy https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-data-human-rights-speech-and-privacy/ Wed, 12 May 2021 17:20:00 +0000 https://www.atlanticcouncil.org/?p=390220 On Wednesday, May 12, 2021 at 12:00 p.m. EDT, the Atlantic Council’s GeoTech Center hosted a panel of experts to explore “Exploring the future of data, human rights, speech, and privacy.” The panel included Alex Feerst, General Counsel at Neuralink; Chris Hazard, PhD, Co-founder and CTO at Diveplane; and Nathana Sharma, General Counsel at Labelbox.

The post Event recap | Exploring the future of data, human rights, speech, and privacy appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On Wednesday, May 12 from 12:00-1:00pm EST, GeoTech Center Director Dr. David Bray moderated an exciting GeoTech Hour discussion on the future of data, human rights, speech, and privacy as we look to upcoming possibilities.

Data capabilities and new technologies increasingly exacerbate social inequality and impact geopolitics, global competition, and international opportunities for collaboration. The “GeoTech Decade” must address the sophisticated but potentially fragile systems that connect people and nations while prioritizing resiliency as a foundational pillar. The rapidity of machines to understand large datasets and the speed of worldwide communications networks mean that events can escalate and cascade quickly across regions with the potential to exacerbate economic inequities, widen disparities in healthcare, and facilitate increased exploitation of the natural environment. The future can also present new avenues for bad actors to cause harm. Authoritarian nations will be able to increasingly monitor, control, and oppress their people, and diplomatic disputes can escalate to armed conflict across land, sea, air, space, and cyberspace.  

Domestically and internationally, the United States and like-minded nations and partners must promote strategic initiatives that employ data and new technologies to amplify the ingenuity of people, diversity of talent, strength of democratic values, innovation of companies, and the reach of global partnerships.

Featuring

Alex Feerst
General Counsel
Neuralink

Chris Hazard, PhD
Co-founder and Chief Technology Officer
DivePlane

Nathana Sharma
General Counsel
Labelbox

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Jun 9, 2021

Event recap | The human dimensions of autonomous systems employing AI

By the GeoTech Center

A GeoTech Hour discussion exploring what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives.

Digital Policy Technology & Innovation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Exploring the future of data, human rights, speech, and privacy appeared first on Atlantic Council.

]]>
Event recap | Achieving healthy communities and economic renewal https://www.atlanticcouncil.org/blogs/geotech-cues/event-how-we-can-achieve-both-healthy-communities-and-economic-renewal/ Wed, 05 May 2021 20:46:00 +0000 https://www.atlanticcouncil.org/?p=383901 On Thursday, April 30, 2020 at 8:00am EDT, the Atlantic Council’s GeoTech Center hosted a panel of experts to explore “How We Can Achieve Both Healthy Communities and Economic Renewal”. The panel included Mona Nemer, chief science advisor to Canada’s Prime Minister; Philippe Gillet, chief scientific officer with SICPA; Luukas Ilves, head of strategy with Guardtime; Daniella Taveau, principal of Bold Text Strategies; and Declan Kirrane, the managing director of ISC Intelligence in Science.

The post Event recap | Achieving healthy communities and economic renewal appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On Wednesday, May 5, the Atlantic Council’s GeoTech Center will be revisiting a panel of experts focused on how the international community can use vulnerabilities highlighted by COVID-19 to create resilience in the face of future crises.

The discussion, moderated by the GeoTech Center’s Director, Dr. David Bray, focused on how the international community can use vulnerabilities highlighted by COVID-19 to create resilience in the face of future crises. Though the pandemic revealed new fears surrounding emerging technologies, it also confirmed the public’s continued concern over issues of data storage, privacy, and security. However, much like the pandemic, data extends well beyond borders. Innovation and invention, informed by data, are necessary prerequisites to a swift, effective response to the crisis. As global stakeholders balance the efficacy of using data with ensuring data protection, they must do so under a more transparent framework. Global trust matters now more than ever.

As the international community explores how to achieve both healthy communities and economic renewal, it must also consider which standards and expectations may never again be the same. The businesses that have adapted to, and embraced, the “new normal” are finding the most success in navigating the pandemic. While COVID-19 has unveiled weaknesses in current systems and practices, it has also specified critical areas for governments to modernize and reform. Now is the time to acknowledge the unknowns of the “new normal,” and to fill in those gaps before the next crisis strikes.

Featuring

Philippe Gillet
Chief Scientific Officer
SICPA

Luukas Ilves
Head of Strategy
Guardtime

Declane Kirrane
Chairman
Global Science Collaboration Conference

Mona Nemer
Professor, Department of Biochemistry, Microbiology, and Immunology
University of Ottowa

Daniella Taveau
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Hosted by

David Bray
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Jun 9, 2021

Event recap | The human dimensions of autonomous systems employing AI

By the GeoTech Center

A GeoTech Hour discussion exploring what should be off-limits when it comes to autonomous systems paired with artificial intelligence, particularly when they have the ability to impact human lives.

Digital Policy Technology & Innovation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Achieving healthy communities and economic renewal appeared first on Atlantic Council.

]]>
Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 2 https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-countering-bot-swarms-part-2/ Wed, 28 Apr 2021 18:59:00 +0000 https://www.atlanticcouncil.org/?p=384179 An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

The post Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 2 appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description


The challenge of countering bot swarms (swarms of automated programs which could potentially draw away and deny system resources or attention from actual humans) and mass false accounts is not a new phenomenon – rather this challenge is something that only nation-states could do online since the start of the 2010 and, as tech has become democratized, individuals now in 2021 can do with ease. Other social media platforms face similar challenges. 

For example, the Center for Countering Digital Hate and Anti-Vax Watch for example finds that only 12 prominent anti-vaccine leaders are responsible for about two-thirds of anti-vaccine content on major social media sites. These challenges have been growing since the mid 2010s – when most of the general public were unaware of how bot swarms could amplify a few human individuals to look and appear like a much larger number of people. The public is growing increasingly aware of a reality where few people can recruit millions of human members of the public and indoctrinate them with fear and doubt. These bot-human hybrids (also known by researchers as “cyborgs”) can deny system resources or attention from actual humans and pose challenges for public and private organizations alike. 

This episode is the second part of a two-part special GeoTech Hour series. On Wednesday, April 28, we continued the conversation focusing on new data and technological solutions to identify bot swarms and mass false accounts, with special focus on human perceptions of media-mediated reality.

Featuring

Renee DiResta
Technical Research Manager
Stanford Internet Observatory

Jeff Frazier
Nonresident Fellow, GeoTech Center
Atlantic Council

Eric Sapp
President
Public Democracy

Hosted by

David Bray
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 2 appeared first on Atlantic Council.

]]>
Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1 https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-countering-bot-swarms/ Wed, 21 Apr 2021 19:17:00 +0000 https://www.atlanticcouncil.org/?p=381736 An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

The post Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1 appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

A former Facebook data scientist on the company’s integrity team, Sophie Zhang, recently identified that Facebook wasn’t paying enough attention to coordinated disinformation networks. This included a loophole in Facebook policies linked to creation of unlimited numbers of fake “pages,” which, unlike user profiles, don’t have to correspond to an actual person but could still like, comment on, react to, and share content. 

The challenge of countering bot swarms (automated programs that are not humans and can draw away and deny system resources or attention from actual humans) and mass false accounts is not a new phenomenon. Rather this tactic was once only available to nation-states, but as tech has become democratized, individuals can now also carry out such attacks, creating challenges for social media platforms. 

For example, the Center for Countering Digital Hate and Anti-Vax Watch found that only twelve prominent anti-vaccine leaders are responsible for about two-thirds of anti-vaccine content on major social media sites. Such challenges have been growing since the mid 2010s, when most of the general public were unaware of how bot swarms could make a voices look and appear like a much larger group. These bot-human hybrids (also known by researchers as “cyborgs”) can deny system resources from actual humans and pose challenges for public and private organizations alike. 

Join us for a two-part special GeoTech Hour series. On Wednesday, April 21, the first part of this special GeoTech Hour series will focus on the just how the democratization of technologies has created a systemic issue that requires whole of society solutions and strategies. The event will assemble individuals who have spent the last decade working on different parts of the challenges and on both the defensive and offensive sides of employing tech to counter coordinated inauthentic behavior. 

On Wednesday, April 28, the second part II of this special GeoTech Hour series will continue the conversation on new data and technological solutions to identify bot swarms and mass false accounts that are only now possible, as well as the importance of recognizing that these are challenges of human belief. The second GeoTech Hour also will consider novel strategies to counter what, up until now, has mostly been a defensive posture in the face of those who would spread coordinated inauthentic behavior.

Featuring

Pablo Breuer
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Alex Ruiz
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Sara-Jayne Terp
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Hosted by

David Bray
Director, GeoTech Center
Atlantic Council


Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1 appeared first on Atlantic Council.

]]>
How cybersecurity and citizen trust in digital vaccine certificates are inextricably linked https://www.atlanticcouncil.org/content-series/economy-of-trust-content-series/how-cybersecurity-and-citizen-trust-in-digital-vaccine-certificates-are-inextricably-linked/ Thu, 15 Apr 2021 13:30:00 +0000 https://www.atlanticcouncil.org/?p=473294 With the international rollout of vaccine certificates, both public and private sector actors are coming together to create reliable standards. But what are governments doing to ensure their security and integrity?

The post How cybersecurity and citizen trust in digital vaccine certificates are inextricably linked appeared first on Atlantic Council.

]]>

Editorial

On Saturday, April 3, the U.S. Center for Disease Control and Prevention (CDC) reported that a record four million coronavirus vaccines had been administered across the United States in a single day. Indeed, vaccine administration is accelerating around the globe, prompting governments and businesses to develop digital vaccine certificates. These “vaccine passports” store an individual’s COVID-19-related health data, including whether they have been vaccinated, tested negative, or shown proof of immunity to the virus. Vaccinated Israelis can use the government’s Green Pass mobile app, for instance, to return to theaters, sporting events, hotels, and gyms. Recently, The European Union (EU) proposed a similar Digital Green Certificate, and slides from the Office of the National Coordinator for Health Information Technology indicate the Biden administration is considering similar initiatives. In late March, Governor Andrew Cuomo announced that New York will launch its own digital certificate, Excelsior Pass, built on IMB’s Digital Health Pass blockchain technology. And SICPA, the leading Swiss company that provides security inks for currencies and sensitive documents worldwide, has developed CERTUS, a blockchain-based QR code solution compatible with the international efforts on securing vaccination certificates, and currently offered to several states around the globe.

With the rollout of so many passports, both public and private sector actors are coming together to create reliable standards. One such organization, the Vaccine Credential Initiative, which includes Microsoft, Salesforce, MITRE, and the Mayo Clinic, aims to promote transparency and include “Privacy by Design” principles into digital passports. Industry groups like the International Air Transport Association (IATA) have also undertaken efforts to standardize vaccine certification for international travel. 

Despite efforts to harmonize these passports, little has been done to ensure their security and integrity. On February, Europol warned on the “illicit sale of false negative COVID-19 test certificates” thanks to the “widespread technological means available, in the form of high-quality printers and different software.”  Researchers at the cyber-security company Check Point discovered that forged certificates can be obtained for as little as $250 on the dark web; negative COVID-19 test results are on sale for just $25. Further, the number of adverts for fraudulent certifications has tripled since January, adding urgency for technologies to be able to verify the certificates’ authenticity. To make matters worse, certificate platforms and apps remain insecure. An early version of the Israeli Green Pass, for instance, easily allowed individuals to forge the QR code displayed on the mobile app. While the Israeli government has since patched the issue, the app still uses an outdated encryption library that is prone to security breaches. Nevertheless, some of these passport technologies have made an effort to prioritize security. The AOKpass, IBM’s Digital Health Pass, and Guardtime’s VaccineGuard all use blockchain to safeguard the integrity of their passport. Meanwhile, GeoTech Center Action Council Member John Ackerly believes the encryption platform of his company, Virtru, can be harnessed for secure digital certificates. In a recent interview with Forbes, Mr. Ackerly argued that “these kinds of approaches can be super useful in giving the public the confidence to embrace these tools.” 

It is critical that policymakers adopt secure technologies to ensure citizens’ trust of public institutions. A 2017 Pew study found that 49 percent of Americans are not confident that the federal government can protect their data. If passports are compromised, it will further erode citizens’ faith—not only health organizations, but in all institutions, including elections. Ultimately, cybersecurity and citizen trust in institutions are inextricably linked. 

Sincerely,

Pascal Marmier
Economy of Trust Foundation
Christine Macqueen
SICPA
Dr. David Bray
Atlantic Council GeoTech Center
Borja Prado
Editor

Get the Economy of Trust newsletter

Sign up to learn about advances in technology and data activities that, through trust and more transparent frameworks, improve nations and sectors alike.

Latest Reseach & Analysis

The post How cybersecurity and citizen trust in digital vaccine certificates are inextricably linked appeared first on Atlantic Council.

]]>
Event recap | Agriculture technology: Opportunities and challenges for new farmers https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-agriculture-technology/ Wed, 14 Apr 2021 11:07:00 +0000 https://www.atlanticcouncil.org/?p=378102 An episode of the GeoTech Hour exploring the intersection of technology, policy, and the global food system.

The post Event recap | Agriculture technology: Opportunities and challenges for new farmers appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On this episode of the weekly GeoTech Hour, experts in agriculture and technology policy gather to discuss one of the most pressing issues in agriculture in North America today: the average age of farmers is steadily increasing (up to 57.5 years old in 2017) while the barriers to entry for young farmers continue to grow, from the increased burden of student loans to enormous upfront expenses for land, labor, and equipment. New farms tend to be smaller and less productive, which is precisely why agriculture technology, such as precision ag on the field and phone apps to connect directly to consumers, could benefit new farms attempting to break into this difficult business. However, the growing “Agriculture 4.0” movement comes with challenges and disparities of its own.

The leaders featured on the panel will discuss their work in researching and implementing tech-focused solutions that lower barriers to entry for farmers with a focus on economic benefits. Join us on Wednesday, April 14, at 12:00 p.m. EDT, as we continue to explore the intriguing intersection of technology, policy, and the global food system.

Featuring

Andrew Mack
CEO
Agromovil

Dr. Elaine Ingham
Founder
Soil Food Web School

Phil De Luna
Director, Materials for Clean Fuels Challenge Program
National Research Council Canada

Hosted by

Daniella Taveau
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Agriculture technology: Opportunities and challenges for new farmers appeared first on Atlantic Council.

]]>
Event recap | Digital identity https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-digital-idendity/ Wed, 07 Apr 2021 15:51:00 +0000 https://www.atlanticcouncil.org/?p=375523 An episode of the GeoTech Hour focusing on the concept of digital identity, and how it can eliminate barriers and promote inclusion.

The post Event recap | Digital identity appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On this episode of the weekly GeoTech Hour, the GeoTech Center is returning to the sixth episode of the Data Salon Series, hosted in partnership with Accenture. This episode focuses on the concept of digital identity, and how it can eliminate barriers and promote inclusion.

The leaders featured on the panel discuss their efforts in the digital identity space, particularly in the past year as the world has increasingly moved onto the digital space. The current movement is siloed as companies and groups focus on their individual digital identity platforms in the hopes of becoming the “first”, without paying close attention to interoperability and plans for scale-up. Additionally, while there is a growing number of solutions, there is still a need for national or international standards to define requirements and functional outcomes for digital identity. This nuanced discussion addresses not only the technology that makes this movement possible, but the ethical standards that must surround digital identity to make it equitable and successful.

Featuring

Dr. Dante A. Disparte
Vice Chairman and Head of Policy and Communications
Diem Association

Dakota Gruener
Executive Director
ID2020

David Treat
Senior Managing Director
Accenture

Sheila Warren
Head of Blockchain, Data,and Digital Assets, Member of the Executive Committee
World Economic Forum

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Digital identity appeared first on Atlantic Council.

]]>
Event recap | Indigenous data sovereignty: Opportunities and challenges https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-indigenous-data-sovereignty/ Wed, 31 Mar 2021 23:37:17 +0000 https://www.atlanticcouncil.org/?p=371861 On Thursday, October 22, the GeoTech Center hosted the fifth installment of the Data Salon Series in partnership with Accenture to discuss the challenges to achieving data sovereignty for indigenous communities. The panel featured Dr. Tahu Kukuthai, Professor of Population Studies and Demography at the University of Waikato, Dr. Ray Lovett, associate professor of Aboriginal and Torres Strait Islander Epidemiology for Policy and Practice at Australian National University, Dr. Desi Rodriguez-Lonebear, Assistant Professor of Sociology and American Indian Studies at UCLA, and Ms. Robyn Rowe, Research Associate and PhD Candidate at Laurentian University. GeoTech Center Director Dr. David Bray moderated the panel and the discussion that followed.

The post Event recap | Indigenous data sovereignty: Opportunities and challenges appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On this episode of the weekly GeoTech Hour, the GeoTech Center is returning to the fifth episode of the Data Salon Series, hosted in partnership with Accenture. This episode focuses on indigenous data sovereignty: what it means to indigenous populations, how it can be improved through re-thinking certain concepts of data ownership, and the challenges and opportunities in moving forward.

The leaders featured on the panel will discuss their work in deconstructing colonial approaches to data collection and governance. Typical Western-style data approaches not only fail to properly apply to the lives, cultures, and societies of indigenous peoples, but also misalign priorities in terms of indigenous understandings of communal identity and personal ownership. Each panelist is working within their own national and societal context to construct new structures for data governance, which both better serve indigenous communities and might be more effective for empowering the wider population.

Featuring

Dr. Tahu Kukuthai
Professor, Population Studies and Demography
University of Waikato

Dr. Ray Lovett
Associate Professor
Australian National University 

Dr. Desi Rodriguez-Lonebear
Assistant Professor, Sociology and American Indian Studies
University of California – Los Angeles

Ms. Robyn Rowe
Research Associate, PhD Candidate
Laurentian University

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Indigenous data sovereignty: Opportunities and challenges appeared first on Atlantic Council.

]]>
Middle skill jobs as a strategic imperative https://www.atlanticcouncil.org/blogs/geotech-cues/middle-skill-jobs-as-a-strategic-imperative/ Wed, 31 Mar 2021 22:16:31 +0000 https://www.atlanticcouncil.org/?p=371312 The U.S.' economic competitiveness depends on a deep base of manufacturing and service capabilities that enable cutting-edge technologies to proliferate. In this piece, the author argues that "strategic government spending must be translated into positive economic spillovers for 'middle-skill' workers. What’s needed," he writes "is a coordinated approach to funneling federal, state, and local resources to target sectors and jobs."

The post Middle skill jobs as a strategic imperative appeared first on Atlantic Council.

]]>

In the past decade, American policymakers have become increasingly wary of the balance of power with China. Under the Obama administration, American foreign policy pivoted towards East Asia with a series of bilateral and multilateral trade proposals and security deals with South Korea, Japan, Australia, Vietnam, and other potential regional counterweights to China. Domestically, President Obama launched manufacturing institutes and artificial intelligence strategies aimed at increasing R&D and coordinating national resources for high-end technological innovation, initiatives with clear implications for a burgeoning US-China rivalry. President Trump put an even finer point on the issue, introducing a series of tariffs to protect American manufacturing and enacting an executive order to maintain AI leadership in the face of Chinese competition. Given the growing bipartisan consensus on the strategic and economic challenges posed by China, Biden is likely to maintain the course or escalate trade, defense, and economic priorities set by his two predecessors.

While policymakers have rightly called attention to several “winner takes all” technologies – quantum computing, artificial intelligence, and biotechnology, to name a few – economic competitiveness is broader than just the top-end innovation generated by leading tech companies and highly educated workers. National vitality depends on a deep base of manufacturing and service capabilities that enable cutting edge technologies to proliferate into the broader economy. A well-coordinated “strategic industries” policy can combine broadly shared economic prosperity and national competitiveness by creating good-paying jobs while strengthening industries and capabilities essential to American global leadership. Despite these linkages, the United States has not yet developed a clear, unified approach to industrial policy, national security, and workforce development. This is a missed opportunity to translate strategic government spending into positive economic spillovers for “middle-skill” workers: bipartisan support already exists for each of those three pillars; what’s needed is a coordinated approach to funneling federal, state, and local resources to target sectors and jobs.

Policy framework for strategic jobs

To identify areas of policy synergy, consider the following criteria for jobs that should attract government funding and policy support:

  1. Essential to economic growth: roles that are frequently employed in high-growth industries, or else required to improve the future general productivity of businesses.
  2. Necessary to protect American national interests: jobs that have clear national security implications, especially as it pertains to economic competition.
  3. Middle-skilled but high-paying roles that do not require college degrees: while higher educational attainment is generally desirable, it is not a suitable nor affordable option for all individuals, and many roles can or should support workers who have alternative credentials.
  4. High current job shortages: demand for roles far exceeds current labor supply.

Over the last few years, employers consistently struggled to fill high-skilled roles in healthcare and STEM fields, a trend that has only become more acute due to the pandemic and immigration restrictions. However, the United States also has significant middle-skill job shortages: in 2016, the Society of Human Resources Management (SHRM), an industry association for human resources, reported significant shortages in skilled trades, telecom, healthcare, and environmental technicians, and IT and computer specialists. These gaps continue to persist during the pandemic, and shortages are especially acute for mid-size businesses between 50 – 250 employees that form the backbone of the American economy.

Source: “The New Talent Landscape” (SHRM)

Given these talent shortages and the “strategic jobs” framework laid out above, the federal government should support large-scale expansion of three critical job families:

  1. Cybersecurity: In 2016, the Department of Homeland Security estimated that cyber theft costs the United States upwards of $100 billion every year. Attackers include state actors such as Iran, Russia, North Korea, and China, who is thought to have accelerated the development of its J-31 stealth strike fighter by stealing IP from the F-35. Despite the clear national imperative for additional cybersecurity talent, America suffers from a major talent shortage: according to ISC(2), a leading certification of cybersecurity skills, the cybersecurity talent gap in the United States is over 350,000 professionals. Cyber jobs pay $83,000 on average with higher pay for individuals holding cyber security certifications such as the CISSP. Moreover, these roles do not necessarily require a college degree, making it an appealing pathway to economic mobility.
  2. IT and Software development: Digitization is essential to every business today, from consumer goods companies selling customized products online to healthcare companies providing telehealth services and banks giving their customers a smoother experience. It is also increasingly essential to providing government services as well as defense: the present and future of combat is predicated on quick dissemination of information and precision strikes, often in the digital space, that minimize damage and keep troops out of harms’ way. Nevertheless, the United States has an IT and software development shortfall of 1 million professionals with a particularly acute shortage in the federal government, where half the IT workforce will be over the retirement age of 61 by 2030. Part of the problem is that many postings require bachelor’s degrees in computer science, even though many of these roles could be more than adequately filled by individuals with alternative certifications such as a coding bootcamp program. While graduating additional STEM and CS degrees is highly desirable, non-degree programs and apprenticeships can provide excellent training for high-paying jobs that address the urgent shortfall. 
  3. Skilled trades: A broad bucket that includes technicians, electricians, and carpenters, these occupations have generally been sidelined as more Americans have sought higher education. Nevertheless, these trades are essential to improving America’s physical and digital infrastructure, operating the factories of the future, and ensuring the integrity of the country’s supply chain. Traditionally considered “middle-skill” jobs, skilled trades have been historically served by vocational programs but are experiencing massive shortages: for example, 70% of construction companies are experiencing difficulty hiring even though salaries start in the $55,000 range. Moreover, these roles are increasingly prone to “degree inflation”: according to a Harvard Business School report, 67% of openings for production worker supervisors require a college degree even though just 16% of current supervisors hold one. Additional support for alternative, work-based training programs will provide excellent signals of ability, promote access to middle-class jobs, and develop a highly trained workforce for strategic sectors.     

The value of apprenticeships

While recent discourse around higher education has been chiefly concerned with cancelling student debt, the key underlying driver is the spiraling cost of college: tuition at four-year universities has risen by 37% in the last decade alone, far outpacing inflation and leaving students with an average debt load of $27,000 by graduation. To alleviate the strain, policymakers have increasingly recognized the potential of non-degree training, particularly apprenticeships, which mix on-the-job training with targeted academic skills acquisition. Apprenticeships, which typically last between a few months and 2 years, enable an individual in a high school or tertiary education program to work with an employer, earning a wage while developing skills that may lead to a permanent position or enhance future employability. President Obama spent $260 million on apprenticeship training, while the Trump administration spent $1 billion. Biden’s campaign platform was even more ambitious, calling for $50 billion to support programs that lead directly to “ready to be filled” jobs

Nevertheless, apprenticeships in America remain vastly underutilized compared to some of our peer economies. In Germany, 1.3 million adults are enrolled in apprenticeship programs across 330 occupations. By contrast, the U.S. has roughly half as many apprentices despite enrolling 7.5 times as many college students. Unlike their European counterparts, American families and high schools historically rejected apprenticeships as second-tier options, while American employers have been reticent to invest in individuals who might depart for competitors. 

However, attitudes seem to be warming towards apprenticeships. Staffing agency Addeco found that 89% of employers believe that corporate apprenticeship programs would alleviate skills gaps. Companies are starting to take action: recently, a consortium of blue-chip employers led by Accenture and Aon launched a technology-focused apprenticeship program in Chicago with the goal of reaching 1,000 apprentices by the end of 2020. Apprenticeships have clear momentum and strong applicability to critical, strategic jobs, and federal, state, and local officials should capitalize on the opportunity to create a coherent strategy.

Policy recommendations

In order to maximize the potential of apprenticeship programs in key strategic areas, the Biden administration should focus on funding and coordinating resources, defining standards, and convening employers and higher education providers, including private sector providers who demonstrate strong outcomes. To achieve this, the Biden administration should focus on the following policies: 

Policy 1: The Department of Education should lead the creation of a national strategy for increasing apprenticeships and blended work-learn programs in key roles and industries, signaling to state and local governments that apprenticeships, especially in key functions and industries, will be a high priority.

  • Although the Department of Education will lead the “Strategic Apprenticeships” task force, it should adopt a “whole of government” approach and cooperate with the Department of Labor and the Department of Defense to create a unified strategy. Coordinating policy across several departments will ensure that appropriate standards are put into place (including those for jobs directly related to defense and security) and that individuals going through apprenticeship programs have pipelines to meaningful, well-paying jobs after graduating.
  • Where necessary, the task force should create standards for apprenticeship programs that qualify for federal funding. The Registered Apprenticeship Program provides a repository of federally or state validated apprenticeships. However, occupations in cybersecurity and software development have not yet been approved for apprenticeship programs. The task force should work with industry certifications and associations, such as the ISC(2) and ISSA, to develop skill acquisition standards that will form the backbone of new apprenticeship programs.
  • To ensure that students have multiple pathways to acquire additional education and credentials, the federal government should create a set of competency-based work and learning standards that equate on-the-job activities with classroom time, creating clear requirements for students in apprenticeships who want to later receive an associate’s or bachelor’s degree. While this applies to all apprenticeships (and is a defining feature of the very successful Swiss and German systems), creating federal learning standards for technology and manufacturing roles will improve the appeal of apprenticeship programs in strategic sectors while giving individuals a path to higher credentials and higher pay roles.
  • At the state and local levels, elected officials should work with local chambers of commerce, community colleges, universities, and alternative education providers such as coding bootcamps to translate learning standards into apprenticeship opportunities, course credit, and pathways to an associate’s or bachelor’s degree. Where possible, local officials should also engage with nonprofits to provide wrap-around supports such as career coaching, which have been shown to improve persistence and outcomes.

Policy 2: Congress should commit federal funds for apprenticeships in cyber, software engineering, and advanced trades (“apprenticeships for the future”). These funds can be part of a matching program or can be targeted towards specific hubs of talent (e.g., Tampa).

  • President Obama’s and President Trump’s increased funding for apprenticeship programs demonstrates broad bipartisan appeal for apprenticeships. This can be paired with the increasing bipartisan consensus on China, thereby linking job creation in key industries with national security. The Jumpstart Our Businesses by Supporting Students Act of 2019, which was eventually brought to the Senate floor by Tim Kaine (D-VA) and Rob Portman (R-OH), called for Pell Grants to be used for certain short-term learning programs. New legislation can go one step further by adding funding for short-term programs in “strategic roles.”  
  • In addition, funding policies can take into account other economic and social and racial justice priorities. For instance, funds could be earmarked for historically black colleges and universities (HBCUs) to work with employers to create apprenticeship programs in target industries. Expanding apprenticeships with HBCUs would advance Biden’s racial justice agenda while providing a strong economic vehicle that meaningfully raises living standards and supports local businesses, laying the foundation for strong bipartisan support.

Policy 3: In addition to providing further funding and guidance for universities and community colleges that support apprenticeship programs, the federal government should also facilitate and regulate new forms of short-form education, including bootcamp programs and certificates, that provide high-quality and affordable training options for strategic jobs.

  • The Department of Education can set up an innovation fund dedicated to alternative higher education, with a particular eye towards cyber, information technology, and skilled trades. This creates a win-win situation: in addition to developing education providers that solve key job shortages, the Department of Education will be in a position to capture essential learnings about the efficacy of different forms of instruction and delivery, enabling them to innovate on a policy level at a much quicker pace.
  • To ensure that funding is only provided to high-quality education providers, the Department of Education should reinstate the Gainful Employment Rule, which stipulated that any programs whose typical graduates’ debts exceeded 8 percent of their total income of 20 percent of their discretionary income would lose access to federal financial aid. While the specific amounts could be adjusted, the Gainful Employment Rule will help protect students from predatory practices while ensuring that federal funds are not wasted on high-cost programs.
  • In addition, the federal government should require all alternative education providers to publish “outcomes reports” which detail graduation rates, job-finding rates, and average starting salaries. The Council on Integrity in Results Reporting (CIRR) has published a framework that is considered the gold standard in the coding bootcamp space. Similar to the new College Scorecard for Title IV institutions, the Department of Education should go one step further and mandate that all alternative education providers adopt a reporting structure for outcomes. 

Conclusion

The rise of China, along with COVID-19 and domestic issues such as income inequality, are beginning to push America’s limits. While there is no silver bullet solution, a well-designed strategic jobs policy that focuses on roles of the future with national security implications – cyber, IT and software development, and skilled trades – can strengthen our economy, provide quality jobs, and protect American interests. The Biden administration would be wise to work with a growing bipartisan consensus on the intersection of defense and workforce development, providing hope and opportunity for a country desperate for both.

The post Middle skill jobs as a strategic imperative appeared first on Atlantic Council.

]]>
Reimagining a just society pt. 4: New maps for a world disrupted by climate change https://www.atlanticcouncil.org/blogs/geotech-cues/reimagining-a-just-society-pt-4-new-maps/ Wed, 31 Mar 2021 20:56:23 +0000 https://www.atlanticcouncil.org/?p=370803 On a radically transformed planet, different conceptual maps are necessary for understanding what today’s priorities must be. These maps, or mental models, inform the framing that policy and decision makers use to weigh their options. Limitations in our conceptual frames can drastically limit the scope of considered futures.

The post Reimagining a just society pt. 4: New maps for a world disrupted by climate change appeared first on Atlantic Council.

]]>
NASA’s Perseverance Rover (Source: NASA, https://mars.nasa.gov/mars2020/mission/overview/)

Last month, NASA’s Perseverance Rover landed on Mars to chart its environment and search for signs of ancient microbial life. It sent back thousands of images and was soon preparing to move out on “the unpaved road ahead,” according to NASA. How does the Rover know where to go? It turns out that the Rover is steering itself on the basis of onboard maps that enable it to know exactly where it is and to avoid hazards. Because Mars is so far away — some 130 million miles at the time of the vehicle’s landing — radio signals take too long to travel from Earth to Mars. Therefore, Perseverance’s travel couldn’t be managed manually by NASA. But  Perseverance’s movements are based on the most precise terrain maps of Mars ever created, thanks to the work of the US Geological Survey’s (USGS) Astrogeology Science Center. “When you’re planning to explore someplace new, it’s always a good idea to bring a map so you can avoid dangerous terrain,” explains the USGS on its webpage about Perseverance.

The advice from the USGS is even more relevant on Earth these days, where rapidly changing climatic conditions have created an unfamiliar geophysical context with alarming implications for humanity’s common future. On a radically transformed planet, different conceptual maps are necessary for understanding what today’s priorities must be. Unlike the Perseverance Rover now exploring Mars, humankind lacks the updated maps or mental models needed for traversing the uncharted geopolitical, geo-economic, and geo-environmental terrain of a climate change-disrupted Earth system. These maps, or mental models, inform the framing that policy and decision makers use to weigh their options. Limitations in our conceptual frames can drastically limit the scope of considered futures.  One such needed update is of the new “terrain” in which humanity can co-exist sustainably within the natural world. In general, such “maps” are composed of assumptions held about what systems or orthodoxies best explain how the world works, such as economic ideologies, belief in the continued value of standing armies and sophisticated weapons systems, or decisions about how economic value is determined. Maps influence everything from decision makers’ priorities to the design of university curricula, and they can lead society to prefer certain pathways over others.

The science that enabled the mapping of Mars’ surface and the search there for signs of ancient life also has been sounding alarms for decades about mankind’s destructive impacts on the Earth’s biosphere. In fact, nearly everything we know today about the dangers of human-induced climate change was effectively known by 1979

The ongoing COVID-19 pandemic can be seen as a warning of the unsuitability of current maps as guides in the new Earth-system reality. Unfortunately, humanity lacks the geopolitical equivalent of an Astrogeology Science Center to update our frameworks to address the future of human society on a radically transformed Earth. While the science behind the current climate-change situation is largely settled, and there is global agreement on the need to sharply reduce greenhouse gas emissions, there is disorientation on how to proceed. New maps would show, with the urgency they are due, the dangers of continuing “off-road” in a way that undercuts shared goals of curbing emissions and living sustainably with the natural world.

Whatever our individual beliefs, there are trail markers in this new world. The last time carbon dioxide concentrations in Earth’s atmosphere were as high as now was the Late Pliocene Era, three million years ago, according to scientists who have analyzed ice cores and ocean sediments in the coldest place on Earth. At that time, temperatures were several degrees Celsius higher and the oceans at least fifteen meters deeper than today. In recent years, moreover, the annual rate of carbon emissions due to human economic activities has been more than five times the rate of CO2 emissions during the late Pliocene era, as presented in the PBS video, “The Last Time the Globe Warmed.” Martin Siegert, co-director of the Grantham Institute at Imperial College London, said, “We’ve done in a little more than fifty years what the Earth naturally took ten thousand years to do.” 

Identifying the Closest Paleoclimatic Analogs for Near-Future Earth. Source: K. D. Burke, J. W. Williams, M. A. Chandler, A. M. Haywood, D. J. Lunt, B. L. Otto-Bliesner, Pliocene and Eocene provide best analogs for near-future climates, Proceedings of the National Academy of Sciences, December 2018, 115 (52) 13288-13293; DOI: 10.1073/pnas.1809600115, https://www.pnas.org/content/115/52/13288.

The ripple effects of a warming world are occurring so rapidly, and sometimes so silently, that crisis preparedness and response systems have trouble making sense of them in time. The Washington Post has reported, for example, that scientists, while examining soil in ice cores collected more than fifty years ago from the bottom of the Greenland ice sheet, recently discovered plant fossils beneath the mile high ice mass. Their excitement was quickly tempered by their realization that, if plants once grew on the surface of Greenland approximately one million years ago—when greenhouse gas concentrations in the atmosphere were far lower than current levels, and the Earth itself was rarely as hot as it is now—the Greenland ice sheet could collapse due to relatively small increases in temperature with dire implications for the world. As this case demonstrates, small changes can have large ripple effects that must be foreseen in order to take appropriate action to forestall them. Faulty “maps” that ignore or fail to identify these risks will cost human society precious time.

The demonstrated vulnerability of the Greenland ice sheet to climate change is just one indicator of the breakdown of a transformed Earth, and the web of life that enables all human economic and societal activity. In 2019, for example, a UN environmental report found that around one million species are at risk of extinction, many within decades, and more than ever before in human history. The report identifies the main drivers of biodiversity loss, in descending order of importance: (1) changes in land and sea use; (2) direct exploitation of organisms; (3) climate change; (4) pollution; and (5) invasive alien species. In addition, the “uncontrolled encroachment of humans into new habitats” has been tied to the increasing risk of zoonotic events involving disease transmission from animal species to humans, according to the Lancet COVID-19 Commission Statement on the occasion of the seventy-fifth session of the UN General Assembly. A mere 23 percent of terrestrial ecosystems remain intact.

The triple crises of biodiversity loss, climate change, and the increasing risks of emerging pandemic diseases are all interrelated, as a recent article, “An Urgent Call for a New Relationship with Nature,” in the Scientific American  noted. They stem from human activity that, in the article, United Nations Secretary -General António Guterres equated to “waging war on nature.” Guterres warns, “This is suicidal. Nature always strikes back — and it is already doing so with growing force and fury.”

The period from 1945 to the present is sometimes known as the “Great Acceleration,” when global institutions created in the aftermath of World War II largely succeeded in spreading, globally, fossil fuel-based improvements to economic productivity, health, and material prosperity. Paradoxically the period is also increasingly recognized as a dangerous experiment affecting the future of civilization. As emphasized in previous posts in this series, the COVID-19 crisis has provided a global “dashboard” indicating where existing conceptual maps rooted in the post-World War II period have failed modern society.  It also reveals where they have succeeded, as in the case of unprecedentedly rapid development of safe and effective vaccines. These lessons present an opportunity to prevent a reoccurrence of disaster on this scale, or worse.  The lessons will indicate needed shifts that are unlikely to be accomplished by mere policy tinkering at the edges; indeed, they point to the need for systemic transformation on a still largely unimagined scale.

A coming shift in intellectual paradigms

The cognitive shifts needed for this transformation require rethinking our concepts of security across economic, health, ecological, and even national and international security arenas.  Broader thinking about security ramifications would include such concerns but also would embrace a whole-of-society approach centered on the root causes of the global climate crisis.

Rediscovering the reciprocal connections between human and non-human life, and between society and nature, is necessary for new thinking. It will require overcoming an intellectual tradition that, at least in Western society, viewed the natural world as an outside “other”—separate, passive, and inexhaustible, existing for no other purpose than economic benefit. Overcoming this tradition requires learning from those who see the world and humankind’s place in it as an interdependent ecosystem. It also would benefit from openness to projects and people proposing new intellectual frameworks better matched to emerging realities. Recent examples include Bruno Latour’s Down to Earth: Politics in the New Climatic Regime and John A. Dryzek and Jonathan Pickering’s The Politics of the Anthropocene

Latour notes, for example, “We must face up to what is literally a problem of dimension, scale, and lodging: the planet is much too narrow and limited for the globe of globalization; at the same time it is too big, infinitely too large, too active, too complex, to remain within the narrow and limited borders of any locality whatsoever.” Although Latour’s observations pre-dated the pandemic, its effects have underscored the need for a new model of human co-existence within the limits of the natural world. These ideas are further developed in a “thought experiment” and exhibition now underway at the ZKM (Center for Art and Media Karlsruhe) in Karlsruhe, Germany, called “Critical Zones: Observatories for Earthly Politics,” a program Latour co-developed with ZKM Director Peter Weibel. Similarly, Dryzek and Pickering note that governance in a climate change-disrupted epoch requires institutions “to embody foresight…and a recognition that what worked in the past will not necessarily work in the future. Foresight must involve a capacity to anticipate human-caused state shifts and act before the shift occurs. This is a demanding criterion. It means embodying responsiveness to early warnings of the sort that only science seems capable of providing.”

Such efforts to raise public awareness about the natural world, first and foremost, and human impacts on it, secondarily, also were the lifelong pursuit of Alexander Humboldt, born in 1769. Humboldt was a Prussian naturalist, explorer, and author of Cosmos, who sought to increase cognizance of the world as a living whole and warned that agricultural activities were causing climate changes.

Prussian naturalist Alexander von Humboldt saw human society as an integral part of the natural world. Source: Humboldt-Bonpland Chimborazo, Wikimedia Commons, Public Domain.

Over two hundred years ago, a young Humboldt embarked on his five-year exploration of the wilds of South America, where he observed that colonial agricultural practices involving irrigation and deforestation were causing erosion and changes in soil conditions.  In his diary and years later, when writing Cosmos, Humboldt extrapolated his observations to a global level. He  warned, “Man can only act upon nature, and appropriate her forces to his use, by comprehending her laws.” As Andrea Wulf recounts in The Invention of Nature, Humboldt warned that humankind had the potential to destroy the environment and that the consequences could be catastrophic. His warnings proved prescient: “We humans are not just influencing the present. For the first time in the Earth’s 4.5 billion year history, a single species is increasingly dictating its future,” write Simon L. Lewis and Mark A. Maslin, professors of global change science and earth system science, respectively, in The Human Planet: How We Created the Anthropocene.

The need for new conceptual maps for the future of organized society is urgently indicated by the latest scientific findings of the United Nations’ Intergovernmental Panel on Climate Change (IPCC). In a Special Report issued in late 2018, IPCC scientists warned that by 2030 the world has to have reduced global emissions of heat-trapping gases by 45 percent to avoid still more severe emissions reductions scenarios. The report’s authors emphasized that this would “require rapid, far-reaching and unprecedented changes in all aspects of society.” 

Montreal Climate Strike, September 27, 2019. Source: Welfact, Unsplash.

New thinking about security

The latest science motivates young people around the world today to participate in school strikes, inspired by the example of Swede Greta Thunberg since 2018, to draw attention to the urgent need for action on climate change. Movements such as Thunberg’s “Fridays for the Future,” as well as that of Extinction Rebellion, aim to accelerate governments’ efforts to mitigate the effects of climate change, but they are sometimes denounced as radical or extremist.

Such criticisms “get things precisely backwards,” according to Simon Dalby, a professor of geography and environmental studies at Wilfried Laurier University in Canada. Dalby writes that what the current generation of school children is demanding “is in fact very conservative — a chance at a decent life in broadly predictable and relatively safe circumstances guaranteed by social arrangements focused on continuity of family, state, and society.” What school children demand is simply their “future national security,” he says.

The example of the young climate activists’ movements around the world is just one of many underlining the need for novel thinking about security in the new Earth-system context. Elevating climate change as a high-level national security priority, as the new US administration has done, is a step in the right direction. There’s a need, however, to question whether the maps, or mental models, through which traditional state-centered US national and international security frameworks evolved are well suited to securing the future national security of young people today. These post-World War II institutions evolved within an internationally competitive geopolitical, fossil-fuel-based economic worldview. Traditional US national security framing raises attention to climate change as a security issue, but it also can delegitimize alternative climate security discourses and actors without experience in national security policy, as Franziskus von Lucke, a researcher in international relations at the University of Tübingen in Germany, observes in a new book, Securitisation of Climate Change and Governmentalisation of Security comparing US, German, and Mexican approaches to climate change and security.

The Anthropocene context

Mankind’s harmful impact on earth systems, essential to the web of life on which human society depends, is recognized in the concept of the “Anthropocene.” The Anthropocene term, coined by the late Nobel Laureate chemist Paul Crutzen and limnologist Eugene F. Stoermer in 2000, refers to a new phase in planetary history in which humanity has become a force of nature that is changing the dynamics and function of Earth itself. The proposed era, the Anthropocene, marks a departure from the previous geologic epoch, the Holocene, which, for about twelve thousand years, saw relative climate stability conducive to the evolution of human civilization. The Anthropocene is generally accepted to have begun in the mid-twentieth century, while others maintain it began with European colonialization of the Americas.

Forests of all kinds are disappearing rapidly. Source: gryffyn m, Unsplash

This Anthropocene framing clarifies the need for new “geopolitical imagination.” New security thinking would embrace the experiences and voices of a wider array of stakeholders than traditional security thinking. It would involve a shift to Earth-centered values acknowledging the interdependencies of human security with the health of natural ecosystems.

While climate change affects traditional national security concerns such as military operational readiness at home and abroad, a narrow focus on such issues can distort the range of policy questions and options that must be considered.   Fortunately, the Biden Administration’s recently released Interim National Security Guidance explicitly calls for a “new and broader understanding of national security.”  Such a broader understanding would usefully entail creating and sustaining globally cooperative foresight systems that link citizens more directly with Earth-system science expertise, as Dryzek and Pickering also recommend in their aforementioned work.

Collective intelligence

In the uncharted territory of the Anthropocene, a new and broader understanding of national security would prioritize global cooperation in cocreating shared awareness of changing Earth-system risks and potential responses.  It would invite and elevate different ideas for problem-solving better suited to the complex, interdependent, and time-sensitive systemic challenges facing humanity in the twenty-first century. In addition, joint sponsorship of a global systemic crisis preparedness system would emphasize methods for global foresight, including through data visualizations and syntheses of the behaviors and interdependencies of manmade and Earth systems. Such a networked platform might be seen as an Earth-bound “Rover” with regularly updated maps “on board.”

Instead of a physical vehicle, the Rover would be a global network of networks with one purpose: synthesizing and rapidly sharing insights to accelerate needed climate change mitigation actions. Through easy-to-use dashboard displays, such a foresight system would foster opportunities to cocreate shared awareness of the urgency of climate change; accelerate discovery and learning across nations and disciplines, including those not related to climate change per se; highlight danger thresholds as seen in the example of the Greenland ice sheet discovery; and prompt innovations accessible by all. Through this open-access nexus of global cooperation, governments, non-government organizations, academia, and subnational communities—including marginalized and indigenous communities not typically engaged in security-related discourse—would participate. Their aim would be creating and implementing new methods of thinking about economic values, public health, societal priorities, and managing global risks. Concepts of security and priorities in science that have evolved in a nation-centric framework since the mid-twentieth century would be refocused on identifying and implementing more sustainable means of human coexistence within the Earth’s natural systems. This will require reimagining almost everything about modern life. 

The COVID-19 pandemic has highlighted how structural racism, widening inequality, poor healthcare, outdated infrastructure, destroyed habitats, fragile supply chains and energy systems, threats to democracy, and social media-fueled disinformation undermine human security everywhere. As climate change impacts are similarly global, a lesson of the pandemic is that new thinking must reconsider what it means to be secure, who defines security, who benefits from security, and what it means to secure the future of organized society. Naturalist David Attenborough has called climate change “the biggest threat to security that modern humans have ever faced.” Such a global crisis requires global cooperation on an unprecedented scale. And it requires creating new concepts, frameworks, and mental models — or maps — to guide us on the unpaved road ahead.

Previous installment

GeoTech Cues

Sep 7, 2021

Reimagining a just society pt. 5: “Is this working as intended?” — Global trends amid contested futures

By Carol Dumaine

The question of ‘is this working as intended’ is applicable to contemporary concepts of national and international security as well as of economic value, growth, and development. Given how our world is being reshaped by new technologies, data capabilities, and geopolitics, leaders in both the public and private sector need to pause and consider if governance and geopolitics in today’s world are actually working – or not.

Climate Change & Climate Action Security & Defense

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Reimagining a just society pt. 4: New maps for a world disrupted by climate change appeared first on Atlantic Council.

]]>
The Ventilator To Africa Project and why it matters https://www.atlanticcouncil.org/blogs/geotech-cues/ventilator-to-africa-project/ Wed, 31 Mar 2021 20:47:00 +0000 https://www.atlanticcouncil.org/?p=373663 In mid-2020, a multidisciplinary team of North Americans and African expatriates at The Mentor Project assembled to bring MVM ventilators to Sub-Saharan Africa. Coordinating with hospitals in seven West African countries, they identified over 500 intensive care beds with the infrastructure to operate ventilators. Here is their story.

The post The Ventilator To Africa Project and why it matters appeared first on Atlantic Council.

]]>

The Breath of Life Africa aims to deliver life-saving innovations where they’re most needed during the COVID-19 pandemic while establishing a foundation to foster global health resilience. 

Hundreds of scientists from some of the greatest research institutions in the world came together to invent the MVM ventilator in response to the Covid 19. Conventional ventilators cost tens of thousands of dollars. The MVM costs thousands, runs on open-source software, uses readily available parts and can be assembled quickly. It went from concept to FDA approval in six weeks. 

After projections in early 2020 that there wouldn’t be enough ventilators for Covid patients, the United States was over-supplied by August. In Africa, by contrast, there are fewer than 2,000 working ventilators to support hundreds of millions of people.

In Mid 2020, a multidisciplinary team of North Americans and African expatriates at The Mentor Project assembled to bring MVM ventilators to Sub-Saharan Africa. Coordinating with hospitals in seven West African countries, they identified over 500 intensive care beds with the infrastructure to operate ventilators.

While ventilators are counter-indicated for most Covid patients, they are essential for the most severely ill, estimated at 8%. Given the infrastructure required to  operate ventilators, the ability to scale capacity to treat the sickest patients is constrained in most African countries. Currently, these human beings are left to die. We aim to: 

  • Make ventilators available where they are needed and can be operated.
  • Train healthcare workers who will operate them.
  • Maximize the usability of each ventilator by establishing supply chains for ancillary equipment, supplies, and repair and work to lower the lifetime costs.
  • Minimize equipment loss through innovation and extraordinary partnerships.
  • Proactively identify unanticipated challenges by establishing information channels.
  • Actively address unanticipated challenges by engaging multi-disciplinary, global teams of experts: The Global Breath of Life Africa Network. 

Their mission

Deliver life-saving innovations where they’re most needed during the Covid pandemic while establishing a foundation to foster global health resilience. 

History of the Breath of Life Africa initiative

In late April 2020, one of our team members, a native of Timbuktu Mali, noted a spike in death among her community of origin. Trying to understand what was happening she reached out to the Regional hospital team and discovered the reality about COVID-19 in Africa. All COVID-19 tests performed at the regional hospital of Timbuktu at that time came back positive. The hospital was already over capacity and patients were placed under tents in the hospital yard with no possibility of medical evacuation to a more furnished establishment in the capital town Bamako. Travel by road being very risky due to armed rebels and no commercial planes deserving the town. The Timbuktu Regional hospital is not equipped to manage complicated cases of COVID-19. With a population of 56,000 souls Timbuktu has no ventilators available for the entire region. The entire country with a population of more than 20 millions inhabitants at the time of our assessment is disposing of 60 ventilators only. Becoming aware of this situation and several of our team members being from Africa we decided to act now to save as much lives as possible. We first contacted The Mentor Project (TMP) who helped us get started and introduced us with several low cost ventilators manufacturers. The Breath of Life Africa initiative was born.

The post The Ventilator To Africa Project and why it matters appeared first on Atlantic Council.

]]>
Event recap | Data science and social entrepreneurship https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-coordinating-data-privacy-public-interest/ Wed, 24 Mar 2021 19:04:00 +0000 https://www.atlanticcouncil.org/?p=370517 An episode of the GeoTech Hour featuring data scientists and entrepreneurs who discuss how to employ tech for good.

The post Event recap | Data science and social entrepreneurship appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On this episode of the weekly GeoTech Hour, the GeoTech Center is returning to the fourth episode of the Data Salon Series, hosted in partnership with Accenture. This episode focuses on the challenges and opportunities of employing data for social good, and how entrepreneurship can fill a unique gap to ensure sound business practices and ethics concerning how data is used.

Around the world, scores of individuals and organizations work to create a better reality for their communities, their nations, and the world. Yet, with so many players in the field, it is often difficult to coordinate between different streams of public, private, and nongovernmental data seeking to combat overlapping problems. During this episode, panelists discuss their efforts and outlined methods to connect data with the organizations who need it without exposing personal information of anyone involved.

Featuring


Valeria Budinich

Scholar-in-Residence, Legatum Center
MIT’s Sloan School of Management

Derry Goberdhansingh
CEO
Harper Paige

Bevon Moore
CEO
Elevate U

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Data science and social entrepreneurship appeared first on Atlantic Council.

]]>
Event recap | Coordinating data privacy and the public interest https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-coordinating-data-privacy/ Wed, 17 Mar 2021 19:48:00 +0000 https://www.atlanticcouncil.org/?p=363793 Data usage and the employment of data trusts has maximized individual privacy and private sector benefits. Both the government and the private sector are working towards developing strategies that emphasize individual privacy more than ever before, as the public continues to express greater interest in protecting their data. However, few institutions have landed upon successful solutions in practice that can protect user privacy while allowing for the high levels of analysis they have come to expect. As our digital landscape continues to evolve, panelists in this episode of the GeoTech Hour discuss intentional policy and design choices that could allow for greater data ownership within people-centered structures.

The post Event recap | Coordinating data privacy and the public interest appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On this episode of the weekly GeoTech Hour, the GeoTech Center is returning to the third episode of the Data Salon Series, hosted in partnership with Accenture. This episode focuses on data usage and employing data trusts to maximize individual privacy and private sector benefits.

The panelists discuss how governments and the private sector alike are working to develop strategies that emphasize individual privacy more than ever before, as the public continues to express greater interest in protecting their data. However, few institutions have landed upon successful solutions in practice that can protect user privacy while allowing for the high levels of analysis (including machine or AI-enabled learning) they have come to expect. As our digital landscape continues to evolve, it is time to consider what intentional policy and design choices could allow for greater data ownership within people-centered structures.

This recording will be available here, on the Atlantic Council’s YouTube channel, or on the GeoTech Center’s Twitter.

Featuring

Dr. Divya Chander, MD, PhD
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Krista Pawley
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Coordinating data privacy and the public interest appeared first on Atlantic Council.

]]>
Event recap | The GeoTech Decade ahead https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-geotech-decade-ahead/ Thu, 11 Mar 2021 20:01:56 +0000 https://www.atlanticcouncil.org/?p=364125 On Thursday, March 11, panelists gathered to celebrate the first anniversary of the Atlantic Council GeoTech Center and discuss the “GeoTech Decade” ahead: what it means, what it could look like, and how it will affect us all. This date also represents the one-year anniversary of the COVID-19 pandemic, an event that has demonstrated to the world that choices regarding data and tech infrastructure and digital literacy significantly change our communities’ preparedness, resilience, and recovery from such an event.

The post Event recap | The GeoTech Decade ahead appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

On Thursday, March 11, at 12:00 p.m. ET, panelists gathered to celebrate the first anniversary of the Atlantic Council GeoTech Center and discuss the “GeoTech Decade” ahead: what it means, what it could look like, and how it will affect us all. This date also represents the one-year anniversary of the COVID-19 pandemic, an event that has demonstrated to the world that choices regarding data and tech infrastructure and digital literacy significantly change our communities’ preparedness, resilience, and recovery from such an event. 

In the year since COVID-19 was declared an official pandemic, the GeoTech Center has focused on connecting tech and data efforts across sectors and nations to ensure we emerge from the pandemic stronger and united. This has included in-depth analyses and actions associated with the defining contours and choices for the GeoTech Decade ahead. Specifically, if 2001-2011 was the Decade of Counter-Terrorism activities globally, and 2011-2021 was the Decade of Disillusionment where the decade began with hope and public trust in both the U.S. government and big tech companies and the decade ended with the exact opposite, then 2021-2031 is the GeoTech Decade where tech and new data capabilities will have disproportionate impacts on geopolitics, competition, and collaborations globally. Areas of significant importance include: 

  • Global scientific and technology leadership
  • Secure data and communications
  • Enhanced trust and confidence in the digital economy
  • Assured supply chains
  • Continuous global health protection
  • Assured space operations for public benefit

Featuring

Vinton G. Cerf
Chief Evangelist
Google

Melissa Flagg
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Shirley Ann Jackson
President
Rensselaer Polytechnic Institute

Michael J. Rogers
National Security Contributor and Host
CNN

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | The GeoTech Decade ahead appeared first on Atlantic Council.

]]>
Event recap | Artificial intelligence, the internet, and the future of data: Where will we be in 2045? https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-artificial-intelligence-2045/ Wed, 10 Mar 2021 21:02:00 +0000 https://www.atlanticcouncil.org/?p=362871 An episode of the GeoTech Hour where panelists
look towards the future of artificial intelligence, discussing the GeoTech Decade ahead and beyond to 2045.

The post Event recap | Artificial intelligence, the internet, and the future of data: Where will we be in 2045? appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

This special edition of the GeoTech Hour pulls from a keynote address originally provided as an AI World Society Distinguished Lecture at the United Nations Headquarters on United Nations Charter Day June 26th, 2019 by Dr. David Bray, the current inaugural director for the GeoTech Center.

The address looks towards 2045: rapid technological change, global questions of governance, and the future of human co-existence. Made relevant even more by the events of 2020, this video will set the stage for a second special GeoTech Hour segment celebrating our first anniversary on March 11, 12:00 – 1:00 p.m.

Featuring

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Artificial intelligence, the internet, and the future of data: Where will we be in 2045? appeared first on Atlantic Council.

]]>
Successful new COVID-19 testing methods will require public trust https://www.atlanticcouncil.org/content-series/economy-of-trust-content-series/successful-new-covid-19-testing-methods-will-require-public-trust/ Mon, 08 Mar 2021 14:37:00 +0000 https://www.atlanticcouncil.org/?p=473304 The scientific community and policymakers keep exploring new ways of testing for COVID-19 infections. Among the most promising, wastewater testing and saliva testing stand out. Ultimately, the success of these new methods will be measured by the test’s accuracy, speed, cost, lack of pain,... and the public's trust in them.

The post Successful new COVID-19 testing methods will require public trust appeared first on Atlantic Council.

]]>

Editorial

According to the U.S. Center for Disease Control and Prevention (CDC), “reported… COVID-19 cases likely represent only a fraction of all SARS-CoV-2”, the virus that causes COVID-19 infections. They argue this may be because of “an unknown proportion of people” that either 1) have mild or no symptoms, 2) do not seek medical care, or 3) do not get tested when they sought medical care. To address this third cause, the scientific community and policymakers keep exploring new ways of testing for COVID-19 infections that go beyond the already well-known nose-swab PCRs, or the rapid diagnostic antigen tests (RDT). Among the most promising new COVID-19 testing means, wastewater testing and saliva testing stand out.

Wastewater testing is viewed as a “cost-effective way to survey transmission dynamics of entire communities,” avoiding the biases of other epidemiological indicators, and collecting data from people who lack access to healthcare. At the same time, it allows for a testing method that does not invade individual’s privacy and allows for higher levels of trust in the communities where it is applied. On February 2020, SARS-CoV-2 was detected in the sewage of five sites a week after the first COVID-19 case in the Netherlands. This event discovered that, even at low COVID-19 prevalence, sewage surveillance could be a sensitive tool to monitor the viral circulation.

A Stanford University study published last December 7, 2020 identified a wastewater testing approach capable of better detecting viral infection patterns in communities and tracking “whether the infection rates are trending up or down.” Stanford’s Michelle Horton informed that “testing wastewater – a robust source of COVID-19 as those infected shed the virus in their stool – could be used for more responsive tracking and supplementing information public health officials rely on when evaluating efforts to contain the virus, such as enhanced public health measures and even vaccines when they become available. The test,” she said, “works by identifying and measuring genetic material in the form of RNA from SARS-COV-2.” One of the report’s senior authors, Alexandria Boehm, explained that, through this mechanism, “wastewater data complements the data from clinical testing and may provide additional insight into COVID-19 infections within communities.” What is more, the researchers found “the settled solid samples had higher concentrations and better detection of SARS-CoV-2 compared to the liquid versions,” confirming early thinking that “targeting the solids in wastewater would lead to sensitive and reproducible measurements of COVID-19 in a community,” and eventually “tracking upward trends when cases are still relatively low.”

Using this same method, three leading European operators have recently joined forces to offer a “complete real-time decision management system”  tested in France. They say it is “highly operational and can be immediately deployed within a few days” anywhere in France and abroad.

Starting October 2020, an interdisciplinary team of scientists partnered with Syracuse University and the New York State’s Department of Health to monitor 14 counties and 12 universities with a wastewater surveillance platform. After their testing method proofed to have successful results, this group “identified the virus in samples [on campus and at various colleges], and students were saliva-tested and quarantined until their individual results were known.” Indeed, saliva-based testing has come forward as an attractive, low-cost alternative. It also offers an improvement over the standard nasopharyngeal swab because people can collect their own samples with minimal discomfort. The UK government, for example, recently partnered with the molecular diagnostics company Optigene to develop a pilot study involving more than 14,000 people to test the efficacy of its saliva test. In the U.S., the country’s Food and Drug Administration issued an emergency use authorization (EUA) to Yale School of Public Health last August 15th for its SalivaDirect COVID-19 diagnostic test, which uses a new method of processing saliva samples. The organization described this method as “yet another testing innovation game changer that will reduce the demand for scarce testing resources,” and encouraged test developers “to work with the agency to create innovative, effective products to help address the COVID-19 pandemic and to increase capacity and efficiency in testing.”

A year after the WHO characterized the spread of COVID-19 as a pandemic, scientists, researchers, and policymakers around the globe keep exploring new and more effective methods to help reduce the spread of the disease. Ultimately, the success of these new methods will be measured by the test’s accuracy, speed, cost, and lack of pain. This success will only be possible if individuals and communities place their trust in them first. 


Sincerely,

Christine Macqueen
Economy of Trust Foundation / SICPA
Dr. David Bray
Atlantic Council GeoTech Center
Borja Prado
Editor

Get the Economy of Trust newsletter

Sign up to learn about advances in technology and data activities that, through trust and more transparent frameworks, improve nations and sectors alike.

Latest Reseach & Analysis

The post Successful new COVID-19 testing methods will require public trust appeared first on Atlantic Council.

]]>
Event recap | Women’s leadership in the GeoTech Decade https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-women-in-tech/ Mon, 01 Mar 2021 21:48:41 +0000 https://www.atlanticcouncil.org/?p=357417 The start of the Geotech Decade has had disproportionate impacts on women and has shown the need for women’s leadership worldwide. Women however currently only make up 26 percent of workers in data and AI roles, 15 percent in engineering, and 12 percent in cloud computing. In this episode of the GeoTech Hour, Deputy Director and Senior Fellow Stephanie Wander discuss leadership in the GeoTech Decade with the four women leading the GeoTech Commission.

The post Event recap | Women’s leadership in the GeoTech Decade appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

If 2020 revealed anything to the world, it is the need for great leadership, especially in times of crisis. In the upcoming GeoTech decade, leaders of governments, corporations, and non-profits will need to react swiftly and effectively to emerging challenges, including climate change, increasing political polarization, the impacts of new technologies on work, inequity and poverty, and the long-term effects of COVID-19.

In order for this decade to be truly prosperous and peaceful for all, an increase in women leadership, especially in tech, policy, and entrepreneurship is imperative. In this special edition of the GeoTech Hour, Deputy Director and Senior Fellow Stephanie Wander discuss leadership in the GeoTech Decade with four incredible women that are leading our GeoTech Commission.

The start of the Geotech Decade has had disproportionate impacts on women and has shown the need for women’s leadership worldwide. The main path to parity for women is through the most promising economic sectors where women can make the greatest gains, one of which is the tech sector. Women currently only make up 26 percent of workers in data and AI roles, 15 percent in engineering, and 12 percent in cloud computing. The barriers to employment in these sectors for women include bias, lack of access to STEM education, and lack of funding.

Women leaders are integral to the GeoTech Center mission of championing positive paths forward that societies can pursue to ensure tech and data empower people. In this episode, four women will discuss this critical mission, and how we as a society make sure that women not only maintain participation in the workforce and the economy, but also flourish and lead through it.

Featuring

Shirley Ann Jackson
President
Rensselaer Polytechnic Institute

Susan M. Gordon
Former Principal Deputy Director of National Intelligence
Director
CACI International Inc.

Suzan DelBene
US Congresswoman (WA-1)
House of Representatives

Teresa Carlson
Vice President, Worldwide Public Sector
Amazon Web Services

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Stephanie Wander
Deputy Director and Senior Fellow, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Women’s leadership in the GeoTech Decade appeared first on Atlantic Council.

]]>
Event recap | Synthetic data, privacy, and the future of trust https://www.atlanticcouncil.org/blogs/geotech-cues/synthetic-data-privacy-trust/ Wed, 24 Feb 2021 20:29:18 +0000 https://www.atlanticcouncil.org/?p=357495 A live GeoTech Hour where panelists discussedartificial intelligence and how to address the legal and ethical privacy concerns associated with synthetic data.

The post Event recap | Synthetic data, privacy, and the future of trust appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

Over the last decade, the business of data has disrupted nearly every business category with its promise of technological, industrial, and human advancement. Data continues to captivate our interest as entrepreneurs, executives, and policymakers for its potential to democratize the next wave of productivity with artificial intelligence and machine-to-machine advancements. To advance this wave of productivity, new models of data have been invented: Synthetic Data 

As it suggests, synthetic data is completely artificial and offers the promise of both usefulness and privacy.  Artificial intelligence that is trained on real-life information often contains a baked-in bias: algorithmic decision-making in fields such as criminal justice and credit scoring shows evidence of racial discrimination. The promise of synthetic data allows organizations and governments to overcome geographical, resource, and political barriers. It can be applied to solving some of the world’s biggest problems, from international medical research, fairness in lending, to reducing fraud and money laundering.  By 2022, Gartner estimates over 25% of training data for AI will be synthetically generated.  It is already being used in healthcare, banking, crime detection, manufacturing, telecom, retail, and several other fast-moving industries to accelerate learning. 

However, its usefulness hinges on privacy: that anybody utilizing synthetic data could make the same statistical decisions as they would from the true data — without being able to identify individual contributions.

On this episode of the GeoTech Hour that took place on Wednesday, February 24, at 12:00 p.m. ET, experts discussed how if the privacy thresholds can be legally and ethically addressed, synthetic data can be the best way to safely unlock the potential of the data economy.

Featuring

Jacqueline Musiitwa
Research Associate, China, Law & Development Project
University of Oxford

Krista Pawley
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Michael Capps
CEO
Diveplane

Steven Tiell
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Stuart Brotman
Howard Distinguished Endowed Professor of Media Management and Law, University of Tennessee, Knoxville; International Advisory Council member, APCO Worldwide

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Feb 17, 2021

Event recap | Data-informed nutrition policy and practices

By the GeoTech Center

An episode of the GeoTech Hour where panelists discuss challenges and opportunities for data-informed nutrition solutions, and how a multisectoral approach can improve global health.

Climate Change & Climate Action Inclusive Growth

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Synthetic data, privacy, and the future of trust appeared first on Atlantic Council.

]]>
Event recap | Information warfare: An all-domain military and civil deception, from today to 2030 https://www.atlanticcouncil.org/blogs/geotech-cues/information-warfare-military-civil-deception-today-2030/ Tue, 23 Feb 2021 20:23:09 +0000 https://www.atlanticcouncil.org/?p=356826 A discussion hosted by NDIA where panelists debate the evolving nature of "information warfare" and how advanced technologies are blurring the lines between military and civilian deception operations.

The post Event recap | Information warfare: An all-domain military and civil deception, from today to 2030 appeared first on Atlantic Council.

]]>
In The Art of War, Sun Tzu writes: “all warfare is based on deception.” As the world becomes more connected, our threats are increasingly merging at the intersection of military, economic, social, and diplomatic efforts. Now with advanced communications, satellite, and computing technologies, deception and information warfare (IW) are starting to bleed out of purely military operations. On February 3, director of the GeoTech Center, Dr David Bray, contributed to a panel discussion at a Virtual Expeditionary Warfare Conference that examined this phenomenon. The panel was hosted by the National Defense Industry Association’s (NDIA) Expeditionary Warfare Division and was moderated by GeoTech fellow and Vice Chair of NDIA’s Information Warfare subdivision, Mr. Richard J. Cordes. The other panelists included Mr. Alex Ruiz of Phaedrus LLC, an advisor to the Office of the Secretary of Defense; and Ms. Dana Hudson, Chair of NDIA’s Division on Special Operations and Low Intensity Conflict (SO/LIC). 

Gray zone and information warfare

The event began with a discussion on how IW is evolving. Dr Bray noted  that weaponized misinformation is not new. What has changed, however, is the ability to amplify and swiftly spread misinformation on a wide scale. Today, more than half of the world’s population has access to the internet; not only do those 3.6 billion people have influence, there are now even bots that can pose as humans, adding further layers of complexity to IW. Acknowledging that the internet is also optimized for engagement,  Dr Bray forecasted that all future conflicts will involve misinformation operations in which each side seeks to control its adversary’s “perceptions of reality”. This, Dr Bray noted, could result in the emergence of a new “Cognitive Cold War,” where states compete to build weapons designed to mislead foreign publics. Modern military theory and practice has already begun to form around these changes, both in terms of IW campaigns and defense. Key examples include: Systems Warfare Theory, concepts built on Gerasimov Doctrine, and Cognitive Security.

Destabilization and polarization

Ms Hudson continued the discussion by highlighting how political polarization could exacerbate the dangers of IW. She outlined how one tactic for future adversaries may be to get Americans to mistakenly perceive other Americans as the true enemy. Such attempts have already occurred. Russian interference efforts in 2016 were prime examples. While those attempts occurred in peace time, Ms Hudson predicted misinformation would be even more devastating during war. It will thus be imperative to create the social mechanisms to counter foreign interference attempts. Mr Ruiz added that such necessities warrant a rethinking of our institutions as technology is accelerating faster toward interconnectivity than our institutions are.

Civil deception impacts on expeditionary warfare

The discussion then shifted to forecasting how expeditionary operations may change with new IW tactics. Dr. Bray wondered if “any future general or admiral [will] have the confidence that their plans won’t be taken out of context the moment they try to execute them.” It was a provocative question as there are now private companies that can launch cheap, high-quality satellites capable of mapping the world at 25cm resolution. Dr. Bray wondered whether or not adversaries would become capable of flooding US airwaves with disinformation in attempts to portray valid military operations as war crimes. Mr. Cordes noted that such tactics are already common; he cited Rand Waltzman’s testimony to the Senate Armed Services Committee in 2017, which described an incident where the corpses of terrorists were rearranged so that it looked as if they had been praying rather than fighting. Dr. Bray warned such deceptions could drastically hinder operational effectiveness. He recalled that during his time in Afghanistan, “90 percent of Afghanis thought the United States was there to extract Afghanistan’s heroin and opium,” undermining trust and cooperation with locals.

https://www.youtube.com/watch?v=ojFzHeastaA&feature=youtu.be

The future impacts of AI

Having analyzed the past and present states of IW, Mr. Cordes then shifted the discussion to discussing how future technologies may make combating IW even harder. Citing Admiral Sawyer, he noted that future wars will largely be “unscripted.” Mr. Ruiz continued to argue kinetic conflicts will probably decrease and shift toward the cyber realm. Thus, the tactical and operational advantages will rest with the side that can faster identify and respond to threats. Mr. Ruiz called this phenomenon a potential information arms race. However, he also warned of its potential downsides as service members may begin to look to AI decision-making systems like the Greeks did to oracles–infallible sages rather than battlefield tools.

A way to mitigate such dangers, the panel argued, was to enhance human-data partnerships and promote interoperability. Mr. Ruiz argued that the defense industry should prioritize promising technologies that could help integrate data systems into battlefields. Ms. Hudson then added institutions need to help accelerate this push. She suggested that the public sector communicate with industry so that the civil, military, and commercial sectors can all push toward the same goals.

Whole-of-industry, whole-of-nation, whole-of-society

This led the panel to discuss other core themes of the day: whole of nation approaches and interdisciplinarity. Mr. Cordes argued that change will not happen through one entity or institution but rather through a national effort or whole-of-nation approach. He cited General John Allen who argued in an earlier session that a strong national identity will be required to stay competitive in the future. Dr. Bray concurred and suggested expanding the pool of contributors under a “whole-of-society” approach. He believed it would be important to engage our own public to see how actors who heretofore have been excluded from national security operations can contribute.

Dr. Bray also argued that military doctrine must change; he suggested that not only should officers’ promotions be tied to their abilities to ensure their warfighters be fit to fight, they should also be required to ensure their battlefield data is fit to fight. That way, not only would strong officers be competent leaders, they would also be incentivized to reach out to the commercial sector and strengthen their ties to tech industries. He also stressed that with IW, states may do everything right kinetically but lose overall wars if they do not secure the information realm. Thus, Dr. Bray also suggested the DoD work with organizations like NDIA to design ‘red team’ scenarios with IW.

Ms. Hudson then added to the discussion by noting Congress must also play a role in this effort. She argued for funds specifically dedicated to next-generation capabilities and wished for the creation of a “program of record” to prioritize those capabilities. She even stressed that Russia and China have already institutionalized such programs. Mr. Ruiz further added to the discussion by reiterating the need to adopt new doctrines. While he pointed out the United States cannot abandon legacy platforms, he cited the Solarwinds hacking attack as an example where cyber attacks will attack all aspects of our society and thus must be addressed as such.

In the final minutes of the event, each panelist reemphasized how critical it will be to counter misinformation on future battlefields. Ms. Hudson began by citing Congress’ need to prioritize processes that will enable the US to have anti-IW systems. Mr. Ruiz followed stating the necessity for military institutions to further utilize “white card” techniques – one of many exercises techniques designed to train servicemen in simulated, degraded information environments. Finally, Dr. Bray concluded the panel by summarizing the scope of future IW risks. He argued that not only will IW entail false misleading militaries, adversaries will attempt to confuse US citizens on what they themselves will be doing. To counter such measures, Dr. Bray stated that he would like to see the creation of satellite networks that may be able to, in real-time, disprove attempts at misinformation. 

Conclusion

IW will be one of the most fundamental pillars of future conflicts. As kinetic ‘hot wars’ increasingly shift toward ‘gray zone’ and cyber wars, it will be increasingly more important to create institutions to counter potential adversaries. Neither the civilian nor military sectors will be able to counter future challenges alone. Threats will continue to merge among civil and military lines, and it will be up to leaders to create the mechanisms to address and ultimately prevail in this evolving battlespace. 

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Information warfare: An all-domain military and civil deception, from today to 2030 appeared first on Atlantic Council.

]]>
Event recap | Data-informed nutrition policy and practices https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-data-informed-nutrition-policy-and-initiatives/ Wed, 17 Feb 2021 21:21:22 +0000 https://www.atlanticcouncil.org/?p=354237 An episode of the GeoTech Hour where panelists discuss challenges and opportunities for data-informed nutrition solutions, and how a multisectoral approach can improve global health.

The post Event recap | Data-informed nutrition policy and practices appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

Across the world, one in five deaths are currently associated with poor diet, which contributes to a variety of chronic and deadly diseases such as cardiovascular disease, diabetes, and cancer. Population level interventions that are evidence-based are crucial to lowering this burden, and can be made even more effective through innovative public-private partnerships. Solutions can include adopting current methods that incentivize environmental and social impacts of a company or group, such as the ESG criteria, which can benefit from an additional factor that includes the impact of that group on human health. 

This factor must be informed by the most current data on the relationship between diet and disease. Developing modern datasets from multiple sources must also overcome the hurdle of interoperability between public, private, and research agencies to ensure that all groups can make the best evidence-informed policies and programs. They must also be communicated successfully to the population so that consumers can make the best possible choices for their health.

Featuring

Julie Meyer
Founder and Co-CEO
Eat Well Global

Dr. Dariush Mozaffarian
Dean, Friedman School of Nutrition Science and Policy
Tufts University

Joshua Smith
Director
Manna Tree Partners/VIGR

Tambra Raye Stevenson
Founder
Women Advancing Nutrition Dietetics and Agriculture

Taylor Wallace
Principal and CEO
Think Healthy Group

Hosted by

Daniella Taveau
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Apr 21, 2021

Event recap | Countering bot swarms, mass false accounts, and deep fakes, Part 1

By the GeoTech Center

An episode of the GeoTech Hour exploring the impacts of tech on propagating misinformation and the innovative solutions that can follow.

Americas Disinformation

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Data-informed nutrition policy and practices appeared first on Atlantic Council.

]]>
Flemming Awards: Celebrating exceptional public service https://www.atlanticcouncil.org/blogs/geotech-cues/celebrating-public-service/ Mon, 15 Feb 2021 16:55:00 +0000 https://www.atlanticcouncil.org/?p=349822 Established in 1948, the Flemming Awards honor outstanding federal employees. Recognized by the president of the United States, agency heads, and the private sector, the winners are selected from all areas of the federal service.

The post Flemming Awards: Celebrating exceptional public service appeared first on Atlantic Council.

]]>
The Atlantic Council GeoTech Center joins the Arthur S. Flemming Awards in recognizing exceptional public service amid the COVID-19 pandemic. Established in 1948, the Flemming Awards honor outstanding federal employees. Recognized by the president of the United States, agency heads, and the private sector, the winners are selected from all areas of the federal service. The Awards aimed to:

  • to recognize outstanding service;
  • to attract and recruit outstanding talent to the public service; and
  • to retain the “best of the best” in government service, for the benefit of the Nation at large.

The Arthur S. Flemming Award stands out among the more than 40 awards associated with government service.  It has always been run entirely by the private sector, with financial support from major corporations.  Apart from nominating candidates for the Award, government agencies have no involvement whatsoever.  The Award brings no financial consideration.  Its prestige is considered to be reward enough in and of itself. 

The Flemming Awards alumni include many whose names are well-known. To name a few, past award recipients include Daniel Patrick Moynihan, Paul Volcker, Jr., John Chancellor, Neil Armstrong, Mary Elizabeth Hanford (now Elizabeth Dole), Robert Gates, Dr. Anthony Fauci, and William Phillips (Nobel laureate in 1997). More than 700 individuals have received the award to date.

After a one-year hiatus for 1996 out of respect for Dr. Flemming’s passing, The George Washington University (GWU) assumed sponsorship and overall responsibility for the program in 1997. The Trachtenberg School of Public Policy and Public Administration at GWU has been the home and has managed the Flemming awards ever since.  The Arthur S. Flemming Awards Commission, the Atlantic Council, Federal Management Systems, National Academy of Public Administration, and the Trachtenberg School of Public Policy & Public Administration (George Washington University) would like to celebrate some of the award winners.

Celebrating medical research with Andrea Apolo

In this video we recognize 2020 Arthur S. Flemming award winner Dr. Andrea Apolo who is interviewed by GeoTech Center inaugural director Dr. David Bray regarding their career and efforts in public service. Dr. Apolo is a Lasker Clinical Research Scholar, Tenure-Track Investigator, and Chief of the Bladder Cancer Section of the Genitourinary Malignancies Branch of the National Cancer Institute.

Celebrating public health with Dr. Duncan MacCannell

In this video we recognize 2020 Arthur S. Flemming award winner Dr. Duncan MacCannell who is interviewed by GeoTech Center inaugural director Dr. David Bray regarding their career and efforts in public service. Dr. MacCannell is the chief science officer for the CDC’s Office of Advanced Molecular Detection (OAMD). 

Celebrating labor relations with Samantha Thomas

In this video we recognize 2020 Arthur S. Flemming award winner Samantha Thomas who is interviewed by GeoTech Center inaugural director Dr. David Bray regarding their career and efforts in public service. Samantha Thomas is an Associate Regional Solicitor (Region 3) at the US Department of Labor.

gtc silhouettes of people gathered together

Event Recap

May 6, 2020

Video: Public service remains the essential undertaking that brings communities together

By GeoTech Center

Here at the Atlantic Council, we recognize that working to benefit people, prosperity, and peace for all globally requires committed public servants. Being a committed public servant can be especially hard in today’s world.

Civil Society Resilience & Society

Arthur S. Flemming

The Award bears the name of the quintessential civil servant Arthur S. Flemming. He served more Presidents in an official capacity than any other person, before or since.  In 1939 President Franklin Roosevelt appointed him to the US Civil Service Commission.  At 34 Flemming was the youngest person ever to have been appointed to such an office.  His career was non-partisan – Presidents of both parties retained his services – and he served in a significant capacity under every President from Roosevelt to Clinton, except for Reagan, who dismissed him from his chairmanship of the US Commission for Civil Rights for being too outspoken in his views. He was Secretary of Health, Education and Welfare in the second Eisenhower administration 1958-61.  President Clinton awarded him the Presidential Medal of Freedom in 1994.  He was still working at the age of 91, as a member of the Commission on Aging and as co-chair of the S.O.S. (Save Our Security) Coalition, when he died in September 1996.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Flemming Awards: Celebrating exceptional public service appeared first on Atlantic Council.

]]>
Event recap | The geopolitics of emerging tech during the pandemic https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-geopolitics-of-emerging-tech-pandemic/ Wed, 10 Feb 2021 18:36:00 +0000 https://www.atlanticcouncil.org/?p=352680 A GeoTech Hour with panelists sharing insights on lessons learned, ongoing challenges, and requisite next steps to be taken when considering the intersection of geopolitics, modern technologies, and COVID-19 pandemic.

The post Event recap | The geopolitics of emerging tech during the pandemic appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

Corporations and governments alike continue to struggle with technology policy, especially under the strain of a global pandemic that struck at a moment when the internet was mature enough to alleviate many of COVID’s harms despite facing novel geopolitical challenges to its design and use at the same time. These turbulences have not just tested every facet of the digital world and its ability to handle crises but also altered international relations significantly. Many countries are at a crossroad when it comes to emerging technologies and security during a global pandemic.

On this episode of the GeoTech Hour on Wednesday, February 10, experts shared insights on lessons learned, ongoing challenges, and requisite next steps that can be taken when considering the intersection of geopolitics, modern technologies, and COVID-19 pandemic.

To take a closer look at previous work on transatlantic tech relationships, check out the recap of the partially public, partially private virtual event  held by the Embassy of Finland and the Atlantic Council GeoTech Center in December.

Featuring

Divya Chander, PhD MD
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Charina Chou, PhD
Global Policy Lead for Emerging Technologies
Google

Andrea Little Limbago, PhD
Vice President, Research and Analysis 
Interos Inc.  

Antti Niemela
Head of Section for Sustainable Growth and Commerce
Embassy of Finland in Washington, D.C.  

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Feb 17, 2021

Event recap | Data-informed nutrition policy and practices

By the GeoTech Center

An episode of the GeoTech Hour where panelists discuss challenges and opportunities for data-informed nutrition solutions, and how a multisectoral approach can improve global health.

Climate Change & Climate Action Inclusive Growth

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | The geopolitics of emerging tech during the pandemic appeared first on Atlantic Council.

]]>
Event recap | Data salon episode 6: Digital identity https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-data-salon-episode-6-digital-identity/ Mon, 08 Feb 2021 10:59:00 +0000 https://www.atlanticcouncil.org/?p=350539 On Thursday, December 17, the GeoTech Center hosted the fifth installment of the Data Salon Series in partnership with Accenture to discuss the future of digital identity and the potential benefits of and hurdles to its widespread adoption.

The post Event recap | Data salon episode 6: Digital identity appeared first on Atlantic Council.

]]>

View the full series here.

Event description

On Thursday, December 17, the GeoTech Center hosted the sixth installment of the Data Salon Series in partnership with Accenture to discuss the future of digital identity. The panel featured Mr. Dante A. Disparte, Vice Chairman and Head of Policy and Communications at the Diem Association, Ms. Dakota Gruener, Executive Director at ID2020, Mr. David Treat, Senior Managing Director at Accenture, and Ms. Sheila Warren, Head of Blockchain, Data, and Digital Assets, and Member of the Executive Committee at the World Economic Forum. GeoTech Center Director Dr. David Bray co-hosted the Data Salon with Mr. Steven Tiell, Senior Principal, Responsible Innovation and Data Ethics at Accenture. 

The discussion focused on both the possibilities and problems that digital identity poses in the COVID-19 recovery and beyond. Panelists and participants discussed the role of standards and the need for private companies to create solutions that are interoperable with systems provided by other vendors. However, many of the difficulties with digital identity are issues of public policy rather than underlying technologies. 

Given the desire to restart international travel, vaccine passports have received a lot of attention for tracking vaccination. However, some panelists felt that it could be counterproductive, as a significant proportion of the population is not likely to travel abroad soon or even have a traditional passport. While vaccine passports are critical to restarting international travel, if used to regulate access to public services and workplaces, they could end up causing exclusion. Over the coming months, people might end up living in one of two worlds as the lack of a digital identity, a mask, or a recent COVID-19 test could lead to freedoms being curtailed. One panelist noted that this would further complicate the “freedom vs. security” tradeoff that has long existed and exacerbate existing inequalities. 

Data security was discussed at length, particularly how the existing system often pays for the cost of breach notification to the company, rather than the harm to the individuals whose data was exposed. Policy decisions about how to value data could include methods for ascribing financial cost to damages, thus pricing in the harms to individuals of data breaches. One participant wondered both whether personal data might be more like “radioactive material” than “oil” given the risks of its exposure, and whether greater interoperability might create a larger target for attackers.  

Later in the conversation, panelists discussed the scope of an alternative internet architecture that encrypts data at source and of models under which organizations only hold parts of information. Users could then control their data relationships by remotely revoking access to portions of the data collected about them as they see fit. Over the course of the discussion, conversations about gauging population health through digital identity were intertwined with a broader focus on technology, mobility, privacy, data governance, individual rights, financial liability, law enforcement, and public policy, among other issues. 

Arjun Mehrotra is a Young Global Professional at the GeoTech Center and holds a Bachelor’s degree from Georgetown University’s School of Foreign Service, where he majored in Regional and Comparative Studies focusing on geoeconomics in Asia, along with a certificate in International Business Diplomacy. With previous internship experience at the Center for Strategic and International Studies, the Brookings Institution, Invest India, and the Government of Maharashtra Chief Minister’s Office, he is interested in the role that technology will play in geopolitics, the structure of international trade and investment, and changes in domestic politics and political economy.

Previous episode

gtc mountains at night

Event Recap

Oct 22, 2020

Event recap | Data salon episode 5: Indigenous data sovereignty: opportunities and challenges

By Henry Westerman

On Thursday, October 22, the GeoTech Center hosted the fifth installment of the Data Salon Series in partnership with Accenture to discuss the challenges to achieving data sovereignty for indigenous communities. The panel featured Dr. Tahu Kukuthai, Professor of Population Studies and Demography at the University of Waikato, Dr. Ray Lovett, associate professor of Aboriginal and Torres Strait Islander Epidemiology for Policy and Practice at Australian National University, Dr. Desi Rodriguez-Lonebear, Assistant Professor of Sociology and American Indian Studies at UCLA, and Ms. Robyn Rowe, Research Associate and PhD Candidate at Laurentian University. GeoTech Center Director Dr. David Bray moderated the panel and the discussion that followed.

Digital Policy Economy & Business

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Data salon episode 6: Digital identity appeared first on Atlantic Council.

]]>
Event recap | Tech-enabled dis- and misinformation, social platforms, and geopolitics https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-tech-enabled-dis-and-misinformation/ Wed, 03 Feb 2021 18:22:35 +0000 https://www.atlanticcouncil.org/?p=347709 A wide-ranging discussion exploring the human, business, and technological incentives that have driven the growth of mis- and dis-information globally, and what a weaponized information space means for the world, jointly hosted by the Atlantic Council's GeoTech Center and DFRLab.

The post Event recap | Tech-enabled dis- and misinformation, social platforms, and geopolitics appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

In this special episode of the GeoTech Hour, cohosted with the Digital Forensics Research Lab on Wednesday, February 3, from 12:00 – 1:00 p.m. EST, panelists examined the influence of new technologies on dis- and misinformation via social media platformsThey covered the various challenges caused by the era of the “free internet” and social media’s ability to provide a mass audience with unchecked, unregulated content.   

Panelists first explored increased internet access worldwide and the caveats on its expansion, which has helped propagate dis- and misinformation. The lack of regulation of online communities and content creation has created massive echo chambers, shifting the way society operates. Experts touched on the role of the free internet, specifically through the growth of targeted advertisements in the social media business model. Panelists identified the model’s financial incentives and their role in the expanding reach and harm of misinformation. They concluded that social media users must be informed of how much of their data is actually collected and what it is used for. Further, panelists agreed that social media in the West is treated much like the news media and should consequently be held up to the same regulations journalistic outlets are held to in order to ensure truthful information.  

Panelists also discussed the future of privacy and its newfound placement as a luxury product. Companies like Apple and ProtonMail have begun selling privacy and security as a feature to set themselves apart in an era of mass data collection. Experts spoke on the relationships among privacy, democracy, and disinformation and on how increased security could drastically reduce content targeting. Panelists discussed constructive efforts to combat disinformation by educating users, taking down botnets, and emphasizing transparency. In addition, acknowledging the presence of information deserts and working to eliminate them could prevent disinformation from filling the gap. Sophisticated techniques, such as utilizing advertisements in disinformation spaces to provide a diversified range of views, could also prove effective in altering radicalized echo chambers. Panelists mentioned the US government’s reputation for creating laws and regulating only after an incident has occurred. To get ahead of growing challenges, experts recommended the introduction of a US information agency through public-private partnership and the creation of more applicable constraints and regulations. With technology rapidly improving and accelerating, achieving digital literacy is imperative for society. Overall, counter misinformation efforts are going in the right direction – they must simply improve faster and continue to provide effective outcomes.  

Sana Moazzam is a recent graduate from American University’s School of International Service, where she majored in International Studies with a concentration in Global Economy and minored in Finance. Sana has previously interned for Congress, the US Department of the Treasury, and other organizations. Sana’s research interests include international trade, data privacy, and ethics in artificial intelligence. 

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

In conversation with

Pablo Breuer
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Rose Jackson
Director, Policy Initiative, Digital Forensic Research Lab
Atlantic Council

Bevon Moore
CEO
Elevate U

Sara-Jayne Terp
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Feb 17, 2021

Event recap | Data-informed nutrition policy and practices

By the GeoTech Center

An episode of the GeoTech Hour where panelists discuss challenges and opportunities for data-informed nutrition solutions, and how a multisectoral approach can improve global health.

Climate Change & Climate Action Inclusive Growth

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Tech-enabled dis- and misinformation, social platforms, and geopolitics appeared first on Atlantic Council.

]]>
Why trust is vital to meet COVID-19 vaccination challenges https://www.atlanticcouncil.org/content-series/economy-of-trust-content-series/why-trust-is-vital-to-meet-covid-19-vaccination-challenges/ Sun, 31 Jan 2021 14:05:00 +0000 https://www.atlanticcouncil.org/?p=473318 The rush to vaccination against the COVID-19 virus has highlighted numerous supply and logistic complexities but also the risks of fraud. More than ever, trust is crucial. The situation is evolving and new information on the virus and impact of the vaccine is constantly emerging. The public needs to be sure that information available is trustworthy, authorities and vaccine manufacturers are transparent, and that the vaccines with which they are being injected are genuine.

The post Why trust is vital to meet COVID-19 vaccination challenges appeared first on Atlantic Council.

]]>

Editorial

The rush to vaccination against the COVID-19 virus has highlighted numerous supply and logistic complexities but also the risks of fraud. In developed countries, this comes against a background of doubts sown by the anti-vaxxer movement. In developing countries, desperation at the prospect of inadequate supplies in the longer term encourages parallel markets. Vaccines have arrived in full force, and with them come a host of initiatives aiming to pave the way for digital health passports, a credential tied to an individual’s COVID-19 vaccination status.

More than ever, trust is crucial. The situation is evolving and new information on the virus and impact of the vaccine is constantly emerging. The public needs to be sure that information available is trustworthy, authorities and vaccine manufacturers are transparent, and that the vaccines with which they are being injected are genuine. To open up the economy, travel, and essential activities we all need to be sure that test and vaccine status can be reliably verified. As President Biden said in his inaugural address, we face unprecedented challenges. We need to make best use of the technologies that can deliver the trust we need so acutely.
 
Across the world, numerous startups are developing technologies for ‘track and trace’ for the pharma industry. They range from holograms provided by the Irish Optrace, contactless chips by the Pakistani company Pharma TRAX, edible labels by America’s TruTag. International institutions have entered the game with system architectures being proposed by the International Air Travel Association and the WEF. The WHO also is piloting an initiative to produce a digital international vaccination certificate. New digital solutions, interoperably combined, are a key to opening doors for the global economy as fake COVID-19 vaccine advertisements increase exponentially. If a digital passport or “wallet” app also includes past vaccinations and other essential health information, this could provide an opportunity for individuals to have increased ownership over all of their health data and autonomy to decide when and how it gets used in clinical trials.    
 
While digital identities are often positioned as a method of inclusion, where the ID holder is granted access to the aspects of life we have lost in the past year (dine-in eating, concerts, regular air travel, etc.), some fear they risk replicating the current racial and geographic inequities in society given access to healthcare which remains in certain countries unbalanced. In the United States, 68 percent of rural counties lack testing sites and are facing similar shortages in personnel to administer the vaccine. The ethical implications of mandating vaccine passports as a prerequisite for travel or employment merits serious consideration. Rolling out a global digital identity strategy without significantly scaling up testing and vaccine distribution threatens to further these inequities. Digital health passports are well on their way to becoming the norm. Care should be taken to ensure that the data provided by these passports is secure, empowers individuals, and does not create a new underclass by default. Universal testing, vaccine distribution, and continued education of the importance of vaccines must be elevated alongside these digital solutions to rationally confront both our current situation and improve the resiliency of communities against any future pandemic.

Sincerely,

Christine Macqueen
Economy of Trust Foundation / SICPA
Dr. David Bray
Atlantic Council GeoTech Center
Borja Prado
Editors

Get the Economy of Trust newsletter

Sign up to learn about advances in technology and data activities that, through trust and more transparent frameworks, improve nations and sectors alike.

Latest Reseach & Analysis

Research & Analysis

From our GeoTech fellows & friends

The post Why trust is vital to meet COVID-19 vaccination challenges appeared first on Atlantic Council.

]]>
Event recap | An immune system for the planet https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-immune-system-for-the-planet/ Thu, 28 Jan 2021 02:50:00 +0000 https://www.atlanticcouncil.org/?p=343504 In this special edition of the GeoTech Hour, held weekly on Wednesdays from 12:00 - 1:00 p.m. EST, the GeoTech Center airs a recording of the most recent installment of its Immune System for the Planet private roundtables.

The post Event recap | An immune system for the planet appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

In this special episode of the GeoTech Hour, the GeoTech Center aired a recording of the most recent installment in its Immune System for the Planet private roundtable series, where experts from the medical, technological, and pharmaceutical fields discuss the various challenges in and benefits of creating a planetary network to monitor and predict future health crises. This episode’s discussion highlighted the ongoing challenges in combatting COVID-19 and outlined what a planetary immune system might look like.  

Panelists focused on the obstacles that continue to impede the pandemic response. Access to tests and vaccines are the most immediate hurdles, but with respect to vaccines, supply is no longer a limiting factor. Rather, clinics’ limited capacity slows vaccine rollout. As a remedy, panelists advocated for further partnerships among hospitals, large pharmacies such as CVS or Walgreens, and local clinics, as well as increased funding for local clinics. In the long term, however, panelists identified misinformation and the public’s distrust of vaccines as the most significant challenges. They recommended that the US government offer counter-messaging through trusted figures and partner visibly with local, community organizations to restore citizens’ trust in government.  

Experts also envisioned a system able to predict and prevent future pandemics. By analyzing sensor data, internet traffic, and other non-traditional metrics, such global monitoring could identify, detect, and track outbreaks in real-time. With the necessary technologies still a few years away, policymakers must lay the groundwork now in preparation for their arrival. Panelists pointed specifically to the urgent need for data integration policies to connect disparate systems. Government must act quickly—it cannot afford to squander the opportunity to implement transformative public health initiatives.

Ben Schatz is a junior at Georgetown University’s School of Foreign Service where he studies Science, Technology, and International Affairs (STIA), concentrating on security. He also minors in Latin American Studies and Computer Science. His coursework focuses on the intersection of technology and international development, and he intends to continue learning about how new technologies can solve global issues. 

Hosted by

David Bray
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Feb 17, 2021

Event recap | Data-informed nutrition policy and practices

By the GeoTech Center

An episode of the GeoTech Hour where panelists discuss challenges and opportunities for data-informed nutrition solutions, and how a multisectoral approach can improve global health.

Climate Change & Climate Action Inclusive Growth

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | An immune system for the planet appeared first on Atlantic Council.

]]>
Reimagining a just society pt. 3 | A coming shift in perspective https://www.atlanticcouncil.org/blogs/geotech-cues/reimagining-a-just-society-pt-3-a-coming-shift-in-perspective/ Thu, 21 Jan 2021 16:54:59 +0000 https://www.atlanticcouncil.org/?p=343013 In retrospect, the COVID-19 pandemic may mark a paradigm shift in global society if governments and their citizens worldwide today embrace its lessons, including many still emerging. One of these lessons concerns the dangers of ignoring knowledge we already had about interconnections between global public health, economic and national security, and ecological degradation.

The post Reimagining a just society pt. 3 | A coming shift in perspective appeared first on Atlantic Council.

]]>
In retrospect, the COVID-19 pandemic may mark a paradigm shift in global society if governments and their citizens worldwide today embrace its lessons, including many still emerging. One of these lessons concerns the dangers of ignoring knowledge we already had about interconnections between global public health, economic and national security, and ecological degradation. As Dr. M. Sanjayan, CEO of Conservation international, observed, “2020 has shown our complete interdependence with nature.” Resistance to change is natural, even if sometimes illogical, and has been manifest throughout the crisis in opposition, particularly in the United States, to following (belated) public health guidelines to wear masks, avoid crowds, and limit time in enclosed poorly ventilated spaces.  But what would differentiate an “anti-masker’s” stance from those who, on a national or global scale, call for a return to “normal” in the face of evidence that it was business-as-usual that contributed to the pandemic disaster?  Anti-maskers promoted disinformation about the public health dangers of COVID-19; moving on from this disaster without acknowledging and acting on new knowledge we have now gained about pandemics would be a similarly deadly form of reality avoidance.

Disregarding knowledge we already had about pandemic preparedness has proven to be costly in terms of many thousands of lives lost to COVID-19, lost economic opportunity for billions of people, and trillions of dollars in economic damage. Newly acquired knowledge about the coronavirus’s origins and dangers imply new responsibilities on individual, local, state, national, and global levels.  Since the novel coronavirus itself emerged from a natural environment known to host many more pathogens potentially at least as dangerous to humankind, science-based policymaking with an eye on averting, or at least mitigating, future calamities needs to drive global changes. This will entail an integration of science, including public health and environmental sciences, with public policy, diplomacy, international economics, and national and international security arenas, including in professional training settings and curricula.  In addition, enhanced emphasis in policymaking and education on the systemic interconnections between disciplines will be necessary.  Fundamental changes in how we see how world will naturally spill over into other areas, including legal regimes, criminal justice reform, combatting disinformation, immigration policies, childcare and education, food security, managing of wildlife habitats, architecture, and agricultural practices, to name a few. 

Embracing a return to “normal” is equivalent to averting our eyes from the societal and economic pathologies, including politicization of the virus, that have made the pandemic so deadly. In this new global context, even assuming successful vaccination of the entire global population, it is becoming clear that returning to business-as-usual will be as dangerous on a global scale as anti-mask attitudes have been in this crisis. As COVID-19 is a harbinger of still greater threats to humankind in a rapidly changing and climate-disrupted environment, it is incumbent upon policymakers, influencers, and informed citizens to emphasize the needed changes, and to be open to new ideas.

Fortunately there is much work already focused on what a new normal might look like, including for building ventilation, work spaces, and urban design, and architecture generally.  Similarly, a new “architecture” that emphasizes biodiversity is necessary for modern life in the 21st century, one that seeks not only to prevent pandemics but build for resilience and long-term public and global health, as the leaders of an architectural initiative to build healthy hospital environments in Africa have advocated since beginning their work more than a decade ago. Heeding the recommendations of experts on how to avoid the next pandemic, including their calls for enhanced multilateral cooperation, is a needed first step, as will be inclusive approaches to economic and public health. Concerted global cooperation and investment in ensuring conditions conducive to innovation and breakthrough thinking also will be necessarily.  In addition, we can look for inspiration to those countries whose pandemic response has been more successful to date than others.

Although the culprit of the calamity is thought to be an infected bat, science tells us that manmade environmental conditions, including the destruction of wildlife habitats and a growing reliance on factory farming, are mainly to blame. As if to emphasize the point, coronavirus outbreaks detected since early November among mink farm populations in Denmark and, more recently in Poland, have been traced to humans and then, in a reverse phenomenon known as “spillback,” from humans to minks. Scientists suspected that the coronavirus mink-associated variant could undermine the effectiveness of human vaccines, leading the Danish government to order a controversial nationwide culling of all minks.  The many unintended consequences emerging from this government policy aiming to protect people provide a useful case study in the increasing complexity that public health challenges will present for policymakers in the coming years.

As the Danish experience shows, it is difficult to overstate the immensity of the challenges now facing policymakers.  Leaders across the spectrum of expertise and professional responsibility independently warn that a return to normal would be unwise even if possible. World-renowned naturalist and primatologist Dr. Jane Goodall warns, for instance, that “humanity is finished” if it should fail to drastically change its food systems in response to the pandemic and the climate crisis, which both originate in mankind’s destruction of the natural environment. The Lancet COVID-19 Commission has similarly highlighted the need for new precautions, such as ending deforestation and protecting conservation areas and endangered species, as means to curb the transmission of pathogens from animals to humans. In November 2020, the World Health Organization (WHO) launched an investigation to improve understanding of the coronavirus’s transmission pathway to humans. Such understanding is necessary to forestall future viral outbreaks. Similarly, earlier this month, a bipartisan group of US lawmakers met with conservation experts, including Dr. Goodall, at a “Conservation and National Security” event sponsored by The Hill; speakers and participants emphasized that the relationship between national security, economic interests and environmental protection needs to be “rethought and reformed”.

A coming shift in perspective is poised to emphasize the pathologies of the modern human condition. Mankind’s priorities are exposed as upside down by leading global institutions, such as the World Health Organization. “It would take 500 years to spend as much on investing in preparedness as the world is losing due to COVID-19,” according to the latest report, “A World in Disorder,” of its Global Preparedness Monitoring Board. “It is hard to stare directly at the biggest problems of our age,” writes Ed Yong in The Atlantic on “How the Pandemic Defeated America.” He explains, “Pandemics, climate change, the sixth extinction of wildlife, food and water shortages — their scope is planetary, and their stakes are overwhelming. We have no choice, though, but to grapple with them. It is now abundantly clear what happens when global disasters collide with historical negligence.”

The facts are difficult to conceal or distort. Since the start of the pandemic in early 2020, over eight million additional people have been pushed into poverty in the United States—the richest country in the world—and worldwide over one million people have died due to COVID-19. The havoc of the pandemic has caused the world economy to contract by 4.4 percent in 2020 and is estimated, by the International Monetary Fund, to strip $11 trillion of output by next year. Given the human and economic costs already incurred, how well positioned are the United States and the world for future challenges?

“How will the U.S. fare when ‘we can’t even deal with a starter pandemic,’” asked Zeynep Tufekei, a sociologist at the University of North Carolina in an interview with Yong. What will a “just society” look like in the coming era of inevitably greater internal and international migration pressures due to more frequent and intense wildfires, floods and droughts as well as recurrent public health emergencies including pandemics?

Implications for human, national, and global security 

National and international crisis preparedness policymaking will need to take a systemically integrated and forward-looking view of global challenges such as pandemics and climate change. When it’s unmistakably clear, as it is now, that the health of societies determines the health of the world economy, can the realization be far behind that the health of the natural environment similarly impacts everything? Recognition of this interdependence has sweeping implications for educational curricula and conventional measures of growth and productivity, as well as for nation-centric Cold War era practices of requiring nearly all security-related issues to be handled secretly. 

In the world as it is becoming, with tens of millions of people compelled to migrate to escape hunger and poverty exacerbated by climate change, conflict, and disease, what will concepts of peace and a “just society” entail? Will public engagement in open and transparent knowledge-sharing networks be a feature of the needed new security arrangements? Can we simply proceed with the metrics and institutional norms established in the pre-pandemic era despite our new, greater awareness?

Previous installment:

GeoTech Cues

Dec 18, 2020

Reimagining a just society pt. 2 | The end of an era

By Carol Dumaine

This blog post series will explore the meaning of a “just society” through multiple lenses and in the context of today’s challenges, including but not limited to the coronavirus pandemic. With contributions from multiple authors, it aims to stimulate thinking and questions that distill the prerequisites and responsibilities for “just societies” in our times. COVID-19 spotlights […]

Coronavirus Inclusive Growth

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Reimagining a just society pt. 3 | A coming shift in perspective appeared first on Atlantic Council.

]]>
Event recap | Government and tech improvements for the delivery of public services https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-improvements-to-public-services/ Wed, 20 Jan 2021 05:00:00 +0000 https://www.atlanticcouncil.org/?p=342260 In this episode of the GeoTech Hour, experts discuss strategies and examples of leadership in a period of rapid technological change.

The post Event recap | Government and tech improvements for the delivery of public services appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

This episode of the GeoTech Hour examines how the US government may quickly adopt new technology, foster innovation, and effectively deliver public services. Panelists offered novel solutions to create a modern, competitive, and nimble US government.  

In the immediate future, the new administration must address gaps in technology adoption and innovation. Neglected legacy systems remain the norm for government agencies. To address this accumulation of technical debt, decisionmakers should pursue two policy tracks: foster public-private partnerships and seek greater funding from Congress. Public-private partnerships offer a well-known path to technology adoption in government, but the government can and should demand more from its private partners (e.g., create a USPS for email, or Gmail for the public sector). Of course, these partnerships, along with updating outdated government systems, must be financed, so the administration must prioritize securing funding for technology adoption and innovation.  

New technology cannot streamline government services alone. Ultimately, it comes down to people and culture. Agencies must be realistic about their goals and capabilities and can no longer view technology as a silver bullet to their problems. Instead, leaders ought to create cultures where decision makers are encouraged to “fail fast” and quickly pivot to better policies. In the digital age, the road to effective governance starts with effective leadership.  

Most important, the United States must rekindle trust from its citizens in new technologies and government programs. Fruitless initiatives and a deluge of misinformation has eroded citizens’ trust in institutions and government. The panelists suggested, however, that this narrative can be reversed by first focusing on the end-users (ordinary citizens). Public services ought to be easy to use and meet expectations, and taxpayers are the ultimate stakeholder, after all. Delivering effective digital services has the secondary effect of improving the government’s public image. The US government’s bureaucracy has a messaging problem—it is synonymous with waste and unreliability. Securing policy wins at the local level will give citizens the impression that their government has changed, and with renewed faith in government, the new administration can use technology to implement policy and promote good governance with popular support.

Ben Schatz is a junior at Georgetown University’s School of Foreign Service where he studies Science, Technology, and International Affairs (STIA), concentrating on security. He also minors in Latin American Studies and Computer Science. His coursework focuses on the intersection of technology and international development, and he intends to continue learning about how new technologies can solve global issues.   

Featuring

Byron Caswell
Vice President
ICF

Brittany Galli
Chief Success Officer
mobohubb

Derry Goberdhansingh
CEO
Harper Paige

Evanna Hu
Nonresident Senior Fellow, Scowcroft Center for Strategy and Security 
Atlantic Council

Dustin Laun
CEO 
mobohubb

Hosted by

David Bray
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Feb 17, 2021

Event recap | Data-informed nutrition policy and practices

By the GeoTech Center

An episode of the GeoTech Hour where panelists discuss challenges and opportunities for data-informed nutrition solutions, and how a multisectoral approach can improve global health.

Climate Change & Climate Action Inclusive Growth

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Government and tech improvements for the delivery of public services appeared first on Atlantic Council.

]]>
Event recap | AI, China, and the global quest for digital sovereignty – Report launch https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-the-global-quest-for-digital-sovereignty-report-launch/ Wed, 13 Jan 2021 18:30:28 +0000 https://www.atlanticcouncil.org/?p=340009 In this episode of the GeoTech Hour, hosted January 13, 2021, we launch the report “Smart Partnerships amid Great Power Competition,” hold a conversation about AI, China, and the global quest for digital sovereignty, and gather experts to discuss regional specifics and the report authors’ alternative futures for global technology cooperation.

The post Event recap | AI, China, and the global quest for digital sovereignty – Report launch appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

Over the past year and half, experts from the Atlantic Council’s GeoTech Center organized meetings in ParisBrussels, and Berlin; traveled to Beijing and Shanghai; and held virtual conferences with Indian and African experts, all to find answers to one question: how can countries successfully collaborate on data, AI, and other modern technologies amid the widening geopolitical gyre?

The resultant report Smart Partnerships amid Great Power Competition” captures key takeaways from the conversations, identifies the challenges and opportunities that different regions of the world face when dealing with emerging technologies, and evaluates China’s role as a global citizen. In times of economic decoupling and growing geopolitical bipolarity, it highlights opportunities for smart partnerships, describes how data and AI applications can be harnessed for good, and develops future scenarios, forecasting where an AI-powered world might be headed. Given the experimental nature of emerging technologies, it will come as no surprise that the emphasis of the report is thereby on the need for regulatory cooperation, even as AI development has become the next playing field for great power competition.

Join us for a conversation about AI, China, and the global quest for digital sovereignty as previous roundtable participants discuss regional specifics and the report authors’ alternative futures for global technology cooperation.

Opening remarks by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Kevin O’Neil, PhD
DirectorData and Technology
The Rockefeller Foundation

Report summary by

Matthew Burrows, PhD
Director, Foresight, Strategy, and Risks Initiative, Scowcroft Center for Strategy and Security
Atlantic Council

Julian Mueller-Kaler
Resident Fellow, GeoTech Center and Foresight, Strategy, and Risks InitiativeScowcroft Center for Strategy and Security
Atlantic Council

Featuring

Luis Viegas Cardoso
Digital, Technology, and Innovation Advisor to the Presidency of Ursula von der Leyen, I.D.E.A. Advisory Service
European Commission

Eniola Mafe
Lead, 2030 Vision, Technology and Sustainable Development
World Economic Forum

Ambassador Latha Reddy
Co-Chair
Global Commission on the Stability of Cyberspace

Kaan Sahin
Research Fellow for Technology and Foreign Policy, German Council on Foreign Relations; Strategic Advisor for Cyber Diplomacy and the EU Presidency, Auswärtiges Amt (Federal Foreign Office of) Germany

Hosted by

Edward Luce
US National Editor and Columnist
Financial Times

Previous episode

Event Recap

Feb 17, 2021

Event recap | Data-informed nutrition policy and practices

By the GeoTech Center

An episode of the GeoTech Hour where panelists discuss challenges and opportunities for data-informed nutrition solutions, and how a multisectoral approach can improve global health.

Climate Change & Climate Action Inclusive Growth

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | AI, China, and the global quest for digital sovereignty – Report launch appeared first on Atlantic Council.

]]>
Cooperation in a bipolar world https://www.atlanticcouncil.org/blogs/geotech-cues/cooperation-in-a-bipolar-world/ Tue, 12 Jan 2021 21:00:54 +0000 https://www.atlanticcouncil.org/?p=337723 Taking into account China’s growing influence around the world, discussions often alluded to an uncomfortable truth: In order to avoid catastrophe, even rivals must cooperate, which is why participants, particularly at roundtables in Europe, were keen to identify a number of areas that could lower the tensions and help build trust among antagonistic stakeholders.

The post Cooperation in a bipolar world appeared first on Atlantic Council.

]]>
Taking into account China’s growing influence around the world, discussions often alluded to an uncomfortable truth: In order to avoid catastrophe, even rivals must cooperate, which is why participants, particularly at roundtables in Europe, were keen to identify a number of areas that could lower the tensions and help build trust among antagonistic stakeholders. By emphasizing the global nature of the challenges at hand, French leaders pointed to lessons learned from the United Nations Framework Convention on Climate Change (UNFCCC) process. Allegedly, consultations at the expert level could help establish a universally agreed baseline on the harms versus the benefits of the AI revolution. Such an acknowledged picture of the total effects from modern technologies might then inform policy makers as to the needed regulatory steps to minimize negative externalities, while maximizing potential benefits. Individual countries and multilateral organizations such as the Group of Twenty (G20), the International Monetary Fund (IMF) and the World Bank, or regional organizations like the European Union could then all start from the same set of agreed facts concerning AI and the various aspects of the emergence of modern technologies—and coordinate on needed social, economic, data, and ethical protections. 

Cooperation, however, needs to begin at the domestic level by building trust and confidence first between governments, companies, and consumers on AI and related technologies. In many cases, the public trust does not exist, due to concerns over job insecurity, privacy, and the future of work. To avoid such negative public perceptions, governments and private companies should share their failures as much as their successes in employing AI. Regulatory efforts to build public trust will require experimentation, and lessons learned would certainly benefit from comparisons with attempts elsewhere. Such sharing, across multiple efforts, could then help establish international guidelines to define the rules of the game, prevent escalating conflicts, and enable reconciling social needs with uses of the new technologies. 

With the enactment of binding rules for all players, collaboration could further help erase fears of falling behind in the global AI race. Such an approach was advocated particularly by European roundtable participants, while Chinese and US discussants highlighted a level playing field as more important for tempering the ongoing competition. Interestingly enough, Chinese officials that contributed to this project were open to developing regulatory frameworks, though many Western counterparts believed that they could stifle innovation and hamper economic growth.

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Cooperation in a bipolar world appeared first on Atlantic Council.

]]>
An unequal world https://www.atlanticcouncil.org/blogs/geotech-cues/an-unequal-world/ Tue, 12 Jan 2021 21:00:50 +0000 https://www.atlanticcouncil.org/?p=338037 An unequal world is probably the base case, exacerbated by the social and economic effects of the ongoing pandemic. In this future, emerging technologies have deepened divisions and inequalities instead of leveling the playing field domestically and internationally.

The post An unequal world appeared first on Atlantic Council.

]]>
An unequal world is probably the base case, exacerbated by the social and economic effects of the ongoing pandemic. In this future, emerging technologies have deepened divisions and inequalities instead of leveling the playing field domestically and internationally. With governments struggling to understand the social impacts of the new technologies, there have not been enough initiatives to counter the invidious effects of technological advances. The economic slowdown due to Covid-19 is likely to have further incapacitated governmental efforts, as they are starved of the resources needed to invest in raising education and skill levels, for example. With opportunities drying up at home, more of India’s AI developers have emigrated to the United States and Europe, where there is increased demand for their expertise, irrespective of tightening immigration policies. Those that remain at home build applications for Western firms, have only their wealthy customers in mind, and create a two-level economy and society. Given the aftermath of combined health and economic crises, governments do not have the bandwidth to move ahead on data-sharing regulations that would boost responsible AI use and development.

With Covid-19, low-skilled workers have been hit the hardest and their overall wealth has declined as income inequality worsens and businesses try to automate further to recapture profit margins. At the same time, AI-based automation is moving up the value chain and more skilled professions see increasing disruption and fears of job insecurity. For the lucky ones, comprehensive algorithms will add to human-machine partnerships, but many will see their professions disappear—a process accelerated with increased digitalization efforts due to the pandemic.

In this world, the United States and China are still in an AI race, but not to the point of eliminating all cooperation with each other. Consumed by growing domestic instabilities, there’s an incentive for both to cooperate minimally. There is more norm-setting led by the European Union, which builds on its privacy standards (GDPR), and the EU Commission and member states push for international regulation of dual-use AI-based technologies, such as autonomous weapons. The G20 develops benchmarks for AI safety and security at the front end, with the hope of preventing future cybersecurity problems that occurred in earlier internet days. Because of the provisions for norm-setting, standards on e-commerce, and increasingly AI-based technologies, more countries, even outside of the Pacific region, are joining the Comprehensive and Progressive Agreement for Trans-Pacific Partnerships (CPTPP). There remains competition nevertheless and the United States and Europe worry about the expansion of Chinese 5G technology to Belt and Road countries. Once the US-developed ORAN software becomes competitive, Huawei’s attractiveness is diminished for many countries outside of the Chinese orbit and the United States further increases its investments in AI technologies, consolidating its traditional leadership role.

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post An unequal world appeared first on Atlantic Council.

]]>
India’s quest for digital sovereignty https://www.atlanticcouncil.org/blogs/geotech-cues/indias-quest-for-digital-sovereignty/ Tue, 12 Jan 2021 21:00:40 +0000 https://www.atlanticcouncil.org/?p=337980 Similar to Europe’s “Third Way Approach,” and in order to navigate between the US and the Chinese models, India is also trying to develop a concept of digital sovereignty, all the while mitigating negative externalities of great power competition.

The post India’s quest for digital sovereignty appeared first on Atlantic Council.

]]>
Similar to Europe’s “Third Way Approach,” and in order to navigate between the US and the Chinese models, India is also trying to develop a concept of digital sovereignty, all the while mitigating negative externalities of great power competition. While some argued that the time is right to take sides in the geopolitical contest, many Indian experts dislike the idea that investment decisions are going to be binary choices in the future. Skepticism towards the PRC, however, is rising: while Chinese money was welcomed until recently, there are growing security concerns in light of increased Indo-Chinese tensions, as well as worry over too much influence from India’s biggest neighbor. Chinese companies already have a large say in India’s digital space, and the balance between security and economic interests has yet to be struck—a similar situation to other places in the world. 

Another thought-provoking concept brought forward by participants at the India roundtable, was the suggestion to alter international law and adjust respective jurisdictions for private data ownership. Similar to the EU’s GDPR, Indian participants spoke about the desirability of the universal individual right to privacy being upheld, with secondary data ownership still allowed—irrespective of the data’s geographical location and a country’s sovereignty and jurisdiction. It would guarantee that consumers have primary ownership of their personal information, while acknowledging the respective government’s secondary ownership. 

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post India’s quest for digital sovereignty appeared first on Atlantic Council.

]]>
Worries about AI externalities https://www.atlanticcouncil.org/blogs/geotech-cues/worries-about-ai-externalities/ Tue, 12 Jan 2021 21:00:32 +0000 https://www.atlanticcouncil.org/?p=337732 There is no doubt that emerging technologies have gained significant importance over the last couple of years, but a sense of caution is required when it comes to the hype surrounding AI. Technologies have so far remained a tool and their applications won’t be solving all of humanity’s problems anytime soon.

The post Worries about AI externalities appeared first on Atlantic Council.

]]>
There is no doubt that emerging technologies have gained significant importance over the last couple of years, but a sense of caution is required when it comes to the hype surrounding AI. Technologies have so far remained a tool and their applications won’t be solving all of humanity’s problems anytime soon. Of course, underestimating the tech revolution is not the right way forward either, as speakers at roundtables in China suggested that AI applications will have very similar effects to the internet— disrupting societies on the one hand, but creating huge markets on the other. Mitigating risks along with efforts to exploit opportunities will be the challenge of the coming decades because it is only a question of time until social tensions arise. The Chinese government already creates around 16 million jobs annually—many of them without commercial purpose. In order to keep the social peace, that number will likely have to grow as unskilled labor becomes automated.

Irrespective of social externalities, the greater accessibility of big data, which is needed to train smart algorithms, puts China at an important advantage. In the West, the publics’ desire for privacy, democratic accountability, and a clear differentiation between the private and public sectors hamper the availability of big data for tech entrepreneurs. Due to the lack of infrastructure and data regulation in India, for example, software engineers have to train their algorithms with European or American data sets, making it rather difficult to adapt AI applications to local conditions. Health experts at the India roundtable also talked about the lack of financial incentives for AI development and use in their sector. In advanced economies, market conditions, such as the high cost of labor, have been a spur to develop automated systems using AI. In developing countries where labor is cheap and widely available, the same incentives don’t apply and lead to different effects. Without the market pull, Indian state authorities need to find ways to boost AI in order to improve services and ensure India’s ability to plug its extensive software industry into the global economy.

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Worries about AI externalities appeared first on Atlantic Council.

]]>
Technology for good https://www.atlanticcouncil.org/blogs/geotech-cues/technology-for-good/ Tue, 12 Jan 2021 21:00:22 +0000 https://www.atlanticcouncil.org/?p=337984 By focusing on healthcare, food security and agriculture, education, or infrastructure, global AI competition could be given a very different spin, mitigating the rivalry aspect of politics. How modern technologies should be centered on serving those broader global interests was at the core of the discussions in the roundtable focused on Africa.

The post Technology for good appeared first on Atlantic Council.

]]>
By focusing on healthcare, food security and agriculture, education, or infrastructure, global AI competition could be given a very different spin, mitigating the rivalry aspect of politics. How modern technologies should be centered on serving those broader global interests was at the core of the discussions in the roundtable focused on Africa. Participants underlined that AI applications are not yet constrained by extensive legal systems, presenting many opportunities, but also raising challenges. The fact that African countries provide a good testing bed for AI applications is exactly the reason why governments need to be careful. If there’s no framework, digital infrastructure, or laws and regulations, it is an open playing field without security measures and necessary consumer protections.

Missing regulatory frameworks are already a challenge in Western countries, which highlights the fact that African states are experiencing even further difficulties with developing laws and regulations. Similar to the lessons learned from India, capacity building will be essential for the development of modern technologies and their potential application to developmental challenges. Across the continent, Africa will need to invest much more to educate tech practitioners for the dynamic environment and the future of broader AI usage. There is overall confidence, however, that African societies are well-positioned to leverage their strengths, taking into consideration favorable demographics and the fact that the consequences of the ongoing pandemic do not seem to be as devastating in Africa as they are elsewhere. 

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Technology for good appeared first on Atlantic Council.

]]>
A bipolar world https://www.atlanticcouncil.org/blogs/geotech-cues/a-bipolar-world/ Tue, 12 Jan 2021 21:00:07 +0000 https://www.atlanticcouncil.org/?p=338044 A Bipolar World is where Sino-US competition edges out any possibility of cooperation—not just on data and AI. Countries in Europe and Asia are forced to choose between Washington and Beijing while desperately trying to develop their own digital sovereignty.

The post A bipolar world appeared first on Atlantic Council.

]]>
A Bipolar World is where Sino-US competition edges out any possibility of cooperation—not just on data and AI. Countries in Europe and Asia are forced to choose between Washington and Beijing while desperately trying to develop their own digital sovereignty. The United States announces publicly, as well as behind closed doors, that the adoption of Chinese 5G by other countries means a loss not only of US intelligence assistance but also potentially economic or security partnerships. European, Japanese, South Korean, Middle Eastern, and Indian tech firms are further threatened with (secondary) sanctions if they do not end their collaboration with Chinese and Russian counterparts. For economic reasons, Southeast Asian countries refuse US strictures and lean more towards Beijing, while the EU tries to push back but has mixed success in protecting its businesses from US punitive measures. As the Gulf countries now export the bulk of their oil to East Asia, they are also pushing back against Washington, despite their reliance on US security protection. A Biden administration continues the United States’ decoupling efforts and tries to isolate China on the global stage—the consequence of which is an intensification of great power competition.

The PRC boosts its tech and other assistance to Belt-and- Road countries, and most remain loyal to Beijing. Others want to be neutral and stay out of the Sino-US fight, but risk falling behind technologically if they cannot get tech assistance from either the United States or China. The free flow of knowledge is hampered by new firewalls erected not only by the PRC but also by the United States. Amongst growing security concerns, Chinese students are pushed out of Western universities and innovation slows down globally. AI development becomes more focused on military uses and quantum, and each side vows to be first. Multilateral institutions lose even more power, and a sophisticated tech reform remains a distant hope in a divided world. De-globalization is the new normal and the likelihood of conflict increases significantly over time.

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post A bipolar world appeared first on Atlantic Council.

]]>
A multilateral resurgence https://www.atlanticcouncil.org/blogs/geotech-cues/a-multilateral-resurgence/ Tue, 12 Jan 2021 21:00:04 +0000 https://www.atlanticcouncil.org/?p=338046 A multilateral resurgence is a world that evolves after significant Sino-US confrontations occur on the scale of the 1963 Cuban Missile Crisis. Post-pandemic, both the United States and China step back from the precipice, realizing that their unrestrained, full-spectrum competition with one another could lead to disaster and mutual destruction.

The post A multilateral resurgence appeared first on Atlantic Council.

]]>
A multilateral resurgence is a world that evolves after significant Sino-US confrontations occur on the scale of the 1963 Cuban Missile Crisis. Post-pandemic, both the United States and China step back from the precipice, realizing that their unrestrained, full-spectrum competition with one another could lead to disaster and mutual destruction. Technology becomes an area for gradually increased cooperation, and trust is developed with the help of confidence-building measures such as mutual high-level delegation visits. Multilateral agreements are renegotiated, the United States and China cooperate on sophisticated World Trade Organization (WTO) reform, and international frameworks for AI regulations are passed. There is increased transparency between the two superpowers on technology development. Chinese researchers are welcomed back into the United States, and China allows US academics to work in some of their institutes, too. Similar to arms control agreements with the Soviets, Washington and Beijing enter into negotiations with each other on standards for autonomous weapon systems plus ethical, safety, and privacy guidelines for the deployment of modern tech—later, additional partners also ascribe to them. These agreed rules and regulation standards boost research and development and the diffusion of new technologies, including to the developing world.

The years of protectionism, competition, and confrontation following the pandemic have taken a toll, ushering in a long economic recession for the developing world, an era of the impoverishment of the middle classes, and widespread political upheaval. A new phase of globalization begins slowly, yet thoroughly. Rules and fair regulations increase global trade, and the taxation of big multinational corporations enables growing state capacity. China and the United States back an effort for ensuring universal 5G for the whole world, enabling developing countries to leapfrog into a new age, sharing in the advantages of the Internet of Things (IoT). Steps are taken to mitigate resource scarcities, all the while engendering safer and more secure urbanization. Green technology becomes more the norm and biological breakthroughs, enabled by AI, facilitate increased food supplies and better healthcare, including protections against diseases. Tech researchers in emerging markets have access to international data and expertise, allowing them to develop applications that are tailored to their countries’ special needs and contexts.

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post A multilateral resurgence appeared first on Atlantic Council.

]]>
Third parties don’t want to choose sides https://www.atlanticcouncil.org/blogs/geotech-cues/third-parties-dont-want-to-choose-sides/ Tue, 12 Jan 2021 21:00:02 +0000 https://www.atlanticcouncil.org/?p=337425 Many worry about what could follow Pax Americana, especially since providing global security has always been a costly endeavor. A European Union (EU) approach was that Europe could serve as a bridge between the United States and China, somehow mitigating the ever-intensifying rivalry.

The post Third parties don’t want to choose sides appeared first on Atlantic Council.

]]>
Shocked by the warm dispute, particularly at the Paris meeting, international scientists, academics, business, and think tank representatives not only stressed the importance of cooperation when it comes to AI, but also worried about the negative externalities of wide-ranging competition. German experts at the Berlin workshop, for instance, went so far as to almost agree with the Chinese view that, for the last four years, unpredictability in the global system did not come from China, but more from the United States and the highly erratic Trump administration. 

For Europe, a continent that has benefited from the liberal international order like no other, the trajectory of economic decoupling could not be more concerning. Germany in particular exhibits a growing panic in its decision-making circles about being put in a position where the country is forced to choose sides. Already today, China is starting to create guidelines that are incompatible with international standards. European-made computers sold to the Chinese market, for example, have to include Chinese produced control programs (CPM), which exemplifies the difficult trade-offs between national security concerns and a desire for market access. 

Furthermore, many worried about what could follow Pax Americana, especially since providing global security has always been a costly endeavor. A European Union (EU) approach talked about in detail at the Brussels and Berlin roundtables was that Europe could serve as a bridge between the United States and China, somehow mitigating the ever-intensifying rivalry. The perceived success of the EU’s privacy law, also known as the GDPR, encouraged some to believe that Brussels could use Europe’s market power to set norms that others would have to follow, if they were to continue doing business in the world’s largest and wealthiest marketplace. Additionally, the countries on the continent have the expertise and infrastructure (talent, universities, and regulations) to develop what many call “a Third Way,” separate from China’s state-focused and the US’ free market development of technologies. 

Experts indicated that the PRC was a complex partner for Europe, which has encountered cooperation, competition, and sometimes confrontation in dealing with China. Not too long ago, the EU named the People’s Republic a “systemic rival”  and, similar to the United States, European member states worry about IP theft as well as Chinese acquisitions of Western firms with sensitive technology. But there is no black-and-white approach, particularly due to some member states’ economic dependence. Europe’s default would always be cooperation, even if some restrictions on economic ties need to be put in place. China might be destined to become the largest economic power in the world, and it continues to hold sway over export-oriented economies, but the majority of discussants still saw Germany and the EU fully embedded in the Western system. In order to manage that difficult balancing act, some supported the notion of a three “M-approach” for Europe in dealing with China: multilateral, non-militaristic, and Machiavellian.

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Third parties don’t want to choose sides appeared first on Atlantic Council.

]]>
Europe’s hurdles https://www.atlanticcouncil.org/blogs/geotech-cues/europes-hurdles/ Tue, 12 Jan 2021 21:00:00 +0000 https://www.atlanticcouncil.org/?p=337719 Economists and technologists worried about Europe’s ability to reconcile privacy restrictions with a thriving tech economy. The logic is simple: In order to keep up, companies must be able to train AI systems with accessible data, which is why the EU has become more attuned to the need to facilitate data flows.

The post Europe’s hurdles appeared first on Atlantic Council.

]]>
There was little disagreement over the fact that the systematic collection of data is more difficult for private companies in the West than for China’s tech giants. For that reason, economists and technologists worried about Europe’s ability to reconcile privacy restrictions with a thriving tech economy. The logic is simple: In order to keep up, companies must be able to train AI systems with accessible data, which is why the EU has become more attuned to the need to facilitate data flows, as exemplified by its recent free trade and investment treaty with Japan. 

At the Berlin roundtable, which included more private sector representation, there was even greater concern that Europe is falling behind in the global AI race. For German entrepreneurs in Europe’s leading economy, the lack of essential EU funding, nonexistent unity among member states, and a difficult environment for the collection and application of data are all indications that Europe is not living up to its full potential. Examining proficiency in emerging technologies from a foreign policy perspective has, unlike in the United States, never had strong traction in Europe, and it is only slowly starting to change. But many agreed that the EU risks becoming even more dependent on external players if it does not develop a stronger policy stance on emerging technologies altogether. 

Divisions among EU member states, however, make this a very difficult endeavor, with regards to both a coordinated tech and China policy. It is no surprise that southern and eastern EU member states want to be more accommodating to the PRC, given the fact that their economies have benefitted greatly from Chinese investments, adding to their recovery from the 2008 financial crisis. Alongside the geographical splits, there’s an ideological one, too. While some believe that Europe should look at China through more cooperative lenses, understanding the relationship as a healthy competition; others were more critical and urged caution, highlighting the importance of infusing algorithms with democratic and liberal norms.

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Europe’s hurdles appeared first on Atlantic Council.

]]>
China’s ambiguity https://www.atlanticcouncil.org/blogs/geotech-cues/chinas-ambiguity/ Tue, 12 Jan 2021 21:00:00 +0000 https://www.atlanticcouncil.org/?p=337730 Speaking more broadly, interlocutors in Beijing emphasized that international cooperation has always been important to China’s economic development, alluding to the fact that the most successful innovations and AI advances often come from international research collaborations.

The post China’s ambiguity appeared first on Atlantic Council.

]]>
Speaking more broadly, interlocutors in Beijing emphasized that international cooperation has always been important to China’s economic development, alluding to the fact that the most successful innovations and AI advances often come from international research collaborations. At least on paper, the PRC’s eight AI principles emphasize collaboration, knowledge sharing, and a reliance on open source methods. One might question the sincerity of such proclamations, but the issuance of similar AI statements by the United States, the EU, and other countries are a sign of hope that a potential baseline could one day be established. In that regard, the Chinese viewed the G20 meeting in 2019 as a milestone, since it at least signaled global agreement on the guiding principles for AI.

Pre-pandemic, Chinese experts suggested that irrespective of the growing bilateral tensions, there are indeed shared views between the United States and China that could enable cooperation. Allegedly, both countries put emphasis on talent and research, which is why contributors to this project thought that both governments could undertake joint investments in digital infrastructure and/or develop binding political guidelines for the use of AI in order to ensure the improvement of applications for the general public. People in the tech world continue to emphasize the importance of an open source community and many Chinese organizations remain keen on cooperating with international and American entities such as think tanks or universities—channels that must be kept open to lay the groundwork for government-to-government talks in the future. Many agreed that dialogue between civil organizations can enable government cooperation in the long run, as decentralized governance will be key anyway, given the fact that modern technologies have already surpassed the regulatory capacity of most national and international entities. Even though no governance needs to be mutually exclusive, good and reliable frameworks are getting more complicated from year to year, due to the growing dual-use capabilities of the new technologies and the chaotic state of global cyber regulations. To put it bluntly, the world is running out of time. 

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post China’s ambiguity appeared first on Atlantic Council.

]]>
Smart partnerships for global challenges https://www.atlanticcouncil.org/blogs/geotech-cues/smart-partnerships-for-global-challenges/ Tue, 12 Jan 2021 21:00:00 +0000 https://www.atlanticcouncil.org/?p=338016 In order to give the global AI competition a different spin and emphasize the “technology for good” approach, it would be wise to highlight organizations that focus on AI applications in healthcare, education, food security and agriculture, or infrastructure endeavors, particularly in a post-Covid-19 recovery.

The post Smart partnerships for global challenges appeared first on Atlantic Council.

]]>
Smart partnerships on the international and domestic levels, particularly between governments and private sectors, could play an essential role in ensuring AI is geared towards solving global challenges. African scientists in the field of AI, for instance, use game theory models to help stakeholders find contextual policies for dealing with emerging technologies. Other attempts include efforts to localize or regionalize data collections. African contributors were proud to point to examples of modern technologies already working hand-in-hand with infrastructure and human capital investments. Together with Zipline, a drone delivery company that specializes in providing access to vital medical supplies, the Rwandan government, for instance, administers drug and blood testing through drones; Zindi, the first data science competition platform in Africa, offers opportunities to solve specific challenges identified by companies, civil society organizations, and governments, based on best practices; a company named Lydia bridges the credit gap in many African markets by helping small businesses access credit within short periods of time, using trained algorithms instead of traditionally onerous financial screening; and all over the continent, modern technologies are also used in the fight against the novel Coronavirus. Closing the gap between expectation and reality, of course, remains the biggest challenge but there is reason to be hopeful that with the right incentives and government policies, African countries can move quickly to exploit emerging technologies, accelerate economic development, and host an increasing number of tech hubs in the future. 

Key Areas for Cooperation

  1. Governments must establish universally agreed baselines on the harms versus the benefits of the AI revolution, which could inform multilateral and national institutions as to the needed regulatory steps to minimize negative externalities, while maximizing potential benefits. An effort that could be modeled on the United Nations Framework Convention on Climate Change (UNFCCC) process that is broadly recognized as providing the objective and factual basis for considering necessary climate change policies. Using the broad agreements on AI principles completed by the United States, the European Union, China, and others can be a first step towards developing such common guidelines on AI implementation. 
  2. We call for a mechanism for sharing failures as much as successes in the employment of AI. Such sharing across multiple efforts could help establish international guidelines to define the rules of the game, prevent escalating conflicts, and enable the reconciliation of social needs with new technologies. International organizations and non-governmental bodies could help develop such platforms of exchange while simultaneously providing for a regional emphasis. Some African and Indian technologists thought they could learn more from other developing countries and their experiences in employing technologies than they would from advanced economies. 
  3. With countries at odds with one another, non- governmental track-two exchanges, particularly between the United States and China, on governing approaches towards emerging technologies are key for building trust, developing effective policies, and laying the groundwork for future government-to- government negotiations. 
  4. Bringing together multi-stakeholder groups within countries to lay the groundwork for governments to develop capacity-enabling regulations is essential, too, as technologies develop faster than governments can absorb. Hence, decision makers are slow every- where to help in optimizing the benefits of emerging technologies and leave populations vulnerable to negative externalities. 
  5. In order to give the global AI competition a different spin and emphasize the “technology for good” approach, it would be wise to highlight organizations that focus on AI applications in healthcare, education, food security and agriculture, or infrastructure endeavors, particularly in a post-Covid-19 recovery. 

The full text of this report is split across a collection of articles to give readers the opportunity to browse in any order. To return to the main page click here.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Smart partnerships for global challenges appeared first on Atlantic Council.

]]>
Event recap | Tech and data recommendations for the new administration https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-tech-and-data-recommendations-2/ Wed, 06 Jan 2021 18:00:00 +0000 https://www.atlanticcouncil.org/?p=339083 In this episode of the GeoTech Hour, hosted Wednesday, January 6, from 12:00 – 1:00 p.m. EST, panelists make recommendations on how the new administration can prioritize data and tech applications.

The post Event recap | Tech and data recommendations for the new administration appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

In this episode of the GeoTech Hour, hosted on Wednesday, January 6, from 12:00 to 1:00 p.m. EST, panelists make recommendations on how the new administration can prioritize data and tech applications.

Many challenges await in the realms of data and tech, including ensuring the security of the government’s and nation’s software and hardware, improving cloud computing and machine learning abilities, and advancing biotechnology as the COVID-19 pandemic continues to unfold.

Government interventions to ensure a firm recovery from the pandemic will be essential. Creating working groups, councils, and alliances to develop and distribute vaccines and guarantee food security is essential to promoting security and peace. Such coalitions would be best initiated at the local and state level by creating groups that represent people in an authentic way. Philanthropic donors and organizations should be incorporated into solutions to these challenges.

Featuring

Joseph T. Bonivel Jr., PhD
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Dr. Divya Chander
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Melissa Flagg, PhD
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Bob Gourley
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council

Previous episode

Event Recap

Feb 17, 2021

Event recap | Data-informed nutrition policy and practices

By the GeoTech Center

An episode of the GeoTech Hour where panelists discuss challenges and opportunities for data-informed nutrition solutions, and how a multisectoral approach can improve global health.

Climate Change & Climate Action Inclusive Growth

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Tech and data recommendations for the new administration appeared first on Atlantic Council.

]]>
Event recap | Practical steps forward: Improving global efforts to advance digital content safety https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-digital-content-safety/ Wed, 23 Dec 2020 02:34:00 +0000 https://www.atlanticcouncil.org/?p=334753 On Thursday, December 3, the Atlantic Council's GeoTech Center and the World Economic Forum partnered to host a private roundtable under the Chatham House Rule to discuss the possible practical policies for improving digital content safety. The following notes summarize the event's discussion.

The post Event recap | Practical steps forward: Improving global efforts to advance digital content safety appeared first on Atlantic Council.

]]>
On Thursday, December 3, the Atlantic Council’s GeoTech Center and the World Economic Forum partnered to host a private roundtable under the Chatham House Rule to discuss the possible practical policies for improving digital content safety. The following notes summarize the event’s discussion.

Key topics

  1. How policymakers can best regulate companies to more effectively reduce harmful content online, considering various goals of safety, innovation, competition, privacy, and free expression;
  2. How regulatory frameworks requiring increased transparency in and consistency of content curation practices can protect users and improve trust; and
  3. How new methods of collaboration, governance, and measurement can improve safety of spaces online.

Key industry and expert insights to prevent and counter the spread of harmful content online:

 1.    Regulation and competition should work hand in hand to improve online safety. Enabling consumers to understand the choices they are making by using specific platforms, including through regulation requiring transparency, would help incentivize positive change

  • From an anti-trust perspective, promoting competition in the online world means protecting consumer choices. With consumers unable to make realistic choices between social media platforms, competition cannot exist.
  • In order for consumers to make educated choices about which platforms to use, platforms must disseminate information regarding their content curation practices so that users can understand what goes on behind the scenes of the content they see (and don’t see).
  • Without transparency and consistency in curation practices, consumers cannot fully trust what they are seeing because they cannot trust the platform. Consumers want to feel safe online, but they also want to see and take advantage of free expression and an unrestricted flow of ideas.
  • Regulation, in this case, can have positive or negative impacts on competition. Section 230 has long promoted entry into the market for companies of any size. Abrupt regulatory changes to that policy would likely hamper competition as a result. That said, change of some kind is clearly warranted to mitigate the cycle of harm that currently exists.
  • Through enhanced competition, users will be empowered to demand features that they want from a platform, including improved curation tools to prevent the spread of harmful or fake content.
  • User-originated movements to break down the dangers of an ad-based revenue model could yield substantial progress. However, in the current market, true competition is insufficient, if it exists at all, for users to have enough sway over company practices.
  • Differences in regulatory frameworks across countries largely reflects the different priorities of distinct cultures when it comes to expression and online behaviors. But from the standpoint of a company or platform, these differences make it difficult to operate in a truly global manner.
  • Therefore, convergence and synchronization across countries, for example through multilateral organizations like the OECD, is essential to any efforts to change the digital landscape.
  • Considering best practices for content curation from around the social media space can provide a model for future recommendations to be applied more ubiquitously
  • In the current market, each platform maintains its own system of curation and moderation almost entirely distinct from its competitors.
  • Companies such as Reddit have demonstrated the effectiveness of a multi-layered, community led moderation system, through which user-moderators are empowered to restrict content in their community to keep their peers safe. Moderators create their own set of rules for specific communities, which include guidelines preventing harmful content or curating the type of content permissible. Reddit reports that 99 percent of moderation activities are made by these volunteer user-moderators. Severe cases, especially of criminal behavior or moderator misconduct or neglect, can then be moved up the chain to Reddit-employed administrators, and in some cases to government authorities or law enforcement.
  • As far as curation, Reddit also operates on a democratic model of up- and down-votes, which organically promotes higher quality content while sorting out harmful, negative, or low-quality posts. Reddit’s algorithms for curation and moderation are also open-sourced and available to researchers and investigators alike.
  • However, Reddit’s model would not work on all platforms, especially those that do not rely on community or group structures.  Nonetheless, recognizing that such models exist and are successful can give users and regulators alike the leverage to demand more and higher quality moderation from other platforms.
  • Already, platforms are beginning to emerge that are specifically designed around principles of people-centered curation and moderation so it is up to incumbents to adapt

3.     At the same time, the current information ecosystem is clearly broken, especially in terms of the enforcement mechanisms and their capacity to generate real change by platform providers. Regulators and social media companies alike must recognize this failure and act.

  • Governments must move away from content-based rules for online activities that attempt to address each type of illegal activity separately. Instead, a flexible framework approach can help to protect consumers online, by ensuring that platforms clearly divulge how they moderate content, where they apply curation methods, how they communicate moderation activities with users, and how users can appeal decisions quickly and efficiently.
  • When reporting on moderation activities, companies must take care to disclose not only the impact, but also the source code behind their moderation and curation algorithms to researchers/regulators/auditors. Using this information, independent auditors can verify these reports, and note places where companies are discriminating illegally or encouraging (intentionally or otherwise) greater access to hateful, false, or dangerous content.
  • Companies must also be required to report on moderation trends, so that outsiders, including policymakers and researchers, can understand changes within the information ecosystem.
  • With all this in mind, governments must be careful to establish a regulatory regime that does not overstep the bounds of censorship and idea manipulation. Worldwide, many cases exist where social media regulation has been used to mandate which ideas can and cannot be shared online.
  • Countries around the world must come together to discern what content must be restricted, and what lines must be drawn to prevent repression.

4.    Content moderation has become a professionalized global industry. Incidents worldwide have revealed the danger of these human-based systems breaking down. Without proper steps taken to empower curators and reduce vulnerabilities and blind-spots, real-world damage will follow.

  • Commercial content moderation involves people in tandem with computational tools and has emerged as a growing industry in the wake of the explosive expansion of social media. Workers are often outsourced to other countries where employees moderate the content of communities across a particular country or region. Previously, companies did not even disclose the existence of these workers. Though transparency has improved somewhat, much still remains hidden regarding the specific practices and governance specific to each platform’s moderation.
  • Due to the COVID-19 crisis, outside observers have noted the dangerous outcome of removing these humans from the moderation equation. In the Philippines, mandated quarantine required social media moderation to fall wholly onto computational tools, which then resulted in a lag in enforcement, failing to restrict the spread of dangerous content for a time. In other cases, algorithms employed to unilaterally enforce moderation have become overzealous, lacking human input to govern properly.
  • Overall, the public lacks a good sense of how and where content moderation takes place, while also remaining blind to the human actors within the system. Increased transparency will benefit all and empower content moderators to take bolder action where necessary.
  • In the area of child safety, certain regulations means that reports of CSAM and other illegal material not responded to within ninety days are required to be purged, lest the company retain illegal content past the legal limit. Legislators are already considering realistic changes to these regulations that would boost law enforcement capacity to respond to reports, while also lessening burdens on reporting authorities.

In partnership with

This image has an empty alt attribute; its file name is WEF-1-1024x893.jpg

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Event recap | Practical steps forward: Improving global efforts to advance digital content safety appeared first on Atlantic Council.

]]>
Vaccine hesitancy part 2: Effective strategies for a human-centered health campaign https://www.atlanticcouncil.org/blogs/geotech-cues/vaccine-hesitancy-part-2/ Tue, 22 Dec 2020 17:29:39 +0000 https://www.atlanticcouncil.org/?p=334715 Dr. Tiffany Vora continues her insights on vaccine hesitancy by laying out a human-centered health campaign approach that multiple sectors, from public health to social media companies, can consider.

The post Vaccine hesitancy part 2: Effective strategies for a human-centered health campaign appeared first on Atlantic Council.

]]>
In my previous article, we examined how quantitative analyses empowered by online networks can both reveal and implement effective human-centered strategies for health interventions. But once we’ve identified a strategy to test, refine, and deploy, how do we sustain an intervention’s effectiveness?

Trust is health’s most valuable player

COVID-19 has made clear that the politicization of health is incredibly dangerous; the health of billions of individuals is at stake, but so too is trust in pharmaceutical companies, in the scientific process, and in government and our regulatory agencies. For example, the White House’s efforts to block updated FDA requirements—which pushed regulatory approval beyond November’s election—and the subsequent reversing of that position shook the public’s trust in the safety of any potential COVID-19 vaccine. That trust was already teetering. Notably, the Reagan-Udall Foundation for the FDA recently reported major public concerns about potential vaccines expressed by members of historically underrepresented groups as well as frontline healthcare workers.

The overall goal of a human-centered public health campaign is to produce effective messaging and counter-messaging to support specific behaviors. Influencing (even manipulating) emotions to promote behaviors must be done with care because it has been prone to abuse, as seen in neuromarketing and political interference. On the other hand, by refusing to use the tools and strategies that successfully amplify disinformation, we may deny ourselves the opportunity to craft and deliver messages that saves lives—while leaving spaces for mis- and disinformation to flourish. Notably, the quantitative analysis of vaccination views on Facebook by Johnson et al., explored in my previous article, failed to uncover evidence of a dominant, top-down, deliberate disinformation campaign around vaccination. Nonetheless, decentralized anti-vaccination efforts continue to threaten campaigns against COVID-19 and other diseases.

The good news is that we can exploit lessons learned from disinformation while keeping people at the center of our ethical efforts. For example, to allay public fears about vaccination, we can use human stories, shared language, and narrative diversity to message reliable health information in concert with meta-messaging about how data transparency makes it harder for mistakes to persist and for lies to be spread. Both the scientific literature and the lay press could amplify such messaging from public health agencies. Similarly, meta-messaging around fact-checking is becoming prominent in many contexts; such analyses have the added benefit of unearthing insights about where misinformation arises and how it spreads (here is a notable Canadian effort focused on COVID-19).

Overall, evidence-based thinking isn’t just about what we know: it’s crucial to think about how we know what we (think we) know. Resource-strapped public-health messaging may overlook that aspect of digital health and science literacy. But by revealing the very human processes that drive science and medicine, we create ongoing opportunities to nurture public trust.

As Michael Caulfield wrote in his extensive guide for student fact checkers, “The truth is in the network.” Therefore, digital literacy is paramount, for both individuals and society. It is uncertain whether the landscape of reliable information will improve or worsen as time goes on, but individuals, parents, educators, and officials should all contribute to making digital literacy a cornerstone concept of “citizenship.” Here is one example of a free online curriculum in digital literacy being developed by the Stanford History Education Group. Digital health literacy and digital science literacy are useful additions to that toolbox. While some unreliable information is deliberately formulated and spread by disinformers, much amplification seems to come from well-intentioned sharing to members of one’s own tribe—particularly during times of crisis. Society-level efforts to increase digital literacy should support the oft-repeated recommendations that individuals should slow down, check our emotions, scrutinize sources, and make an informed judgment about sharing.

Social media constitutes an avenue to build trust based on values and behaviors that can be—but are not necessarily—decoupled from geography or historical descent. In my Athens vignette, because my colleague and I were ostensibly members of the same ideological (pro-technology, pro-data, future forward) and physically proximate tribe, it simply had never occurred to me that we could hold radically different positions on vaccination. How many other opportunities are missed to earn and maintain trust?

Fortunately, several social media giants are responding to the COVID-19 infodemic through initiatives that support reputable global health agencies, including prominent placement of reliable information, free advertising, and financial support (e.g. Facebook, Twitter, and Google), as well as algorithmic updates and labeling of misleading information from other sources. Many of these initiatives dovetail with similar initiatives around political elections (e.g. Twitter and Google). Independent oversight of these efforts, as well as transparent reporting of datasets about interventions and their outcomes, could be crucial to identifying effective interventions and building the public’s trust in the messaging that seeks the common goal of safeguarding health.

Overall, we must emphasize human connection, trust, empathy, and ongoing evaluation to avoid complacency as the information terrain shifts over time. No matter how heavily we rely on data and technology, we must keep people in the center of our efforts. Data is the means to effective health interventions, not the goal. Our common goal is healthy citizens supported by trusted health systems.

Effective online health messaging must be actively maintained

Today we have a crucial opportunity to craft effective interventions that support public health, acknowledge individuals and their dignity, and nurture trust in government, experts, and each other. Such strategies and tactics will not only aid in the fight against COVID-19 but also position all of us for success in future pandemics and health crises.

Perhaps most importantly, successful health-focused interventions will help establish pipelines for delivering trustworthy and actionable information for other crises, such as climate change, where science, politics, identity, and trust intersect and often clash. At the highest level, there are no “sides” in these “battles.” Humanity needs a home. We all need to be healthy.

For vaccination, the sharing of large anonymized datasets from social networks would empower researchers and organizations to investigate network ecology and online sentiment. For example, based on the insights from the study from Johnson et al.  explored in the previous post, we could design automated early-warning systems that watch meta-data for changes suggestive of concerted, top-down disinformation campaigns, such as a sudden and/or statistically significant transition from inactivity to sharing of anti-vaccination messaging across previously unaffected network domains (particularly as COVID-19 vaccines begin to come on the market). Dynamic data would also enable hypothesis-driven testing of the effectiveness of pro-health interventions while assessing (and potentially controlling for) parameters such as geographic location, age, and educational status.

Health agencies, universities, NGOs, and other organizations should provide explicit training, support, and rewards systems for narrative diversity and resource sharing, with the goal of facts-based yet human-centered communication. For example, just as graduate students currently supported by an NIH training grant must take a dedicated course on scientific ethics, so too should effective communication be integrated into curricula. Private foundations could consider providing such support as an element of fellowships for graduate students, postdoctoral researchers, and faculty. Notably, Johnson et al.’s mathematical analysis suggested that including pro-vaccine messaging in online conversations that are tilting or are heavily anti-vaccination (increasing the “heterogeneity” of these conversations) can slow the rate of linkage of anti-vaccination clusters and therefore the spread of unreliable information.

Similarly, all sectors should make explicit commitments to digital communication with the general public in their review and promotion systems, establishing rewards systems for activities that forge human-centered, evidence-based links between online communities (such as between the pro-vaccination and the undecided communities). These commitments should include basic training in digital literacy and empathetic communication to empower all generations to contribute.

The time to empower health interventions is now

This article seeks to distill many years of interdisciplinary research into blog-sized bites. It is an opening conversation and an invitation to dive deeper into these crucial areas of investigation rather than the final word on a complex and urgent problem that requires complex and validated solutions.

Admittedly, the recommendations described here, which explore only a small corner of the space of possible solutions, impose financial and time burdens at every level, from individuals to organizations to societies. They require fundamental shifts in our views of trust, responsibility, identity, and the future. They raise important questions about the ethics of intervening in speech and influencing emotions, particularly in liberal democracies.

Nonetheless, there is real urgency around the anti-vaccination crisis. As Tom Nichols has observed, “The Internet creates a false sense that the opinions of many people are tantamount to a ‘fact’.” The most alarming insight from Johnson et al. is their prediction that at today’s network trajectory, anti-vaccination views will outnumber pro-vaccination views on Facebook in 2030—only ten years from now. We will need to grapple with the effects of anti-vaccine sentiment far sooner as the world rapidly approaches broadly available COVID-19 vaccines.

As seen here and in my previous post, evidence is critical for scientifically grounded health interventions as well as for effective messaging strategies. Further, crafting health messaging that is based on evidence is not a “versus” campaign. It is a campaign to safeguard the health of all people while acknowledging and fulfilling the emotional needs that drive them toward unreliable information, both well-intentioned and ill-intentioned.

The scientific method still sorts valid from invalid conclusions over time. However, the time horizon over which this process can play out, particularly under intense public scrutiny, has notably shortened (for better and for worse). The stakes have gotten higher in today’s densely connected world, where information has been weaponized and tribalism increasingly threatens to undermine civilization’s foundational institutions.

While data and digital platforms underlie both problems and solutions, we must never lose sight of the humans at the center of these issues. That night in Athens, I realized that my colleague was seeking the very thing that I want for my family: good health, supported by trusted partners. As the first COVID-19 vaccines move toward distribution even as infections and deaths continue to rise, we face an important choice: to talk with each other instead of about each other. Fortunately, the choices we make now have the potential to serve as invaluable templates for our responses to crises yet to come.

Read part I

GeoTech Cues

Dec 14, 2020

Vaccine hesitancy part 1: Using connections to drive human-centered approaches for health

By Tiffany Vora

Dr. Tiffany Vora shares insights from a recent peer-reviewed investigation of online interactions about vaccination and integrates these insights with an orthogonal approach to understanding vaccine hesitancy.

Coronavirus Disinformation

About the author

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Vaccine hesitancy part 2: Effective strategies for a human-centered health campaign appeared first on Atlantic Council.

]]>
The future of food: Imagining our food system in the decades to come https://www.atlanticcouncil.org/blogs/geotech-cues/imagining-our-food-system-in-the-decades-to-come/ Tue, 22 Dec 2020 16:51:14 +0000 https://www.atlanticcouncil.org/?p=333089 Our global food system is complex, with trade-offs existing between efficiency, equity, and human and environmental health. Managing a transition, even without cultural factors and vested interests is highly challenging.

The post The future of food: Imagining our food system in the decades to come appeared first on Atlantic Council.

]]>
Our global food system is complex, with trade-offs existing between efficiency, equity, and human and environmental health. Managing a transition, even without cultural factors and vested interests involved in food systems, is highly challenging.

However, a number of promising technologies, including hydroponics, cultured meat, innovations across aquaculture and the use of bacteria in nutrient production are critical in the future of food.

Challenges for the future of food

COVID-19 has exacerbated a dire global malnutrition crisis, where calories have been prioritized over nutrients, and a small minority of people have balanced and healthy diets. Vitamin and mineral deficiencies exist throughout the world, and leading to both obesity and emaciation. The number of people malnourished globally in 2019 prior to the COVID-19 pandemic was 820 million, making United Nations’ Sustainable Development Goal 3 of zero hunger by 2030 especially challenging.

Climate change presents unprecedented challenges to agriculture, whilst agriculture and the food distribution system at large contribute a significant portion of the world’s greenhouse gas emissions, either directly through livestock and plant production or indirectly through changes to land structures and wildlife habitats.  The increasing incidence and severity of natural hazards, soil degradation, a decline in arable land, climate-related migration and conflict, all contribute to the challenges we are facing to food security, whilst the need to reduce emissions to limit global warming provides challenges for agriculture.

Public health implications for the future of food

The last year has shown the stark deficiencies in every country’s healthcare infrastructure. For too long rising costs and other priorities eroded at government budgets set aside for public health and the effects have made themselves felt.

In many nations, rich and poor alike, simple preventable acts like wearing a mask, washing our hands and maintaining physical distance have proved much more effective than expensive therapies or repeated tests. Moving towards simple and preventable interventions, and integrating our knowledge of food and nutrition into these strategies, is a fundamental variable in which small investments can lead to large dividends and significantly improve our overall health and wellbeing.

The global population is set to grow from 7.8 billion today to 9.7 billion by 2050, with urbanization increasing from 55.7% today to 68% by 2050, driven partially by climate change induced migration. This will undoubtedly increase pressures of production and distribution. However, given both the emissions produced by agriculture and associated vulnerabilities to natural and manmade hazards, alternative food production could be the future.

What is a food system beyond agriculture?

Alternative proteins

71% of the earth is covered in water, and using more of it for food can relieve pressure on land. However, globally declining fish stocks leave aquaculture as the main option to scale ocean-based food production systems.

Seaweed is a highly nutritious food source and doesn’t need any land, fresh water or fertilizer to grow. Using just 9% of the ocean to grow seaweed produces enough food for the world, enough biomass for global energy demand and would absorb global total CO2 emissions.

“Scaling seaweed production could both ensure global food security whilst making a major contribution to nature based carbon capture and storage”

Moreover, alternative proteins that can be ‘manufactured’ and don’t require sunlight are already being used for animal feed. Solar Foods produces food from CO2, air and electricity, fully disconnected from agriculture, and according to them  “It’s 100 times more climate-friendly than meat and 10 times better than plants”. Novonutrients is another startup that uses bacteria and waste CO2 emissions (CO2 + hydrogen) to produce protein-based fishmeal. According to them, “The annual CO2 emissions from a large cement plant would create 3 billion dollars of our protein flour, worth the same as the entire annual soy production of the state of Nebraska — 330 million bushels a year”.

Both natural hazards, such as volcanoes and man-made hazards such as nuclear weapons can cause solar shading, and inhibit sunlight, significantly reducing agricultural yields. Technologies such as converting bacteria to protein and scaling seaweed production would allow the food production gap to be met in such scenarios.

The recent news of Singapore becoming the first country in the world to allow the sale of lab grown meat has highlighted how quickly this industry is evolving. In a 2019 report, Barclays predicted that alternative meat could capture 10% of the $1.4-trillion global meat market over the next decade. According to a Nielsen report from May 2020, the sale of plant-based meats, which have been available in retail outlets and restaurants since 2018, grew by 264% in the US. Well-funded startups the world over, like Memphis Meats, Mosa Meat, BlueNalu, Finless Foods, Aleph Farms and more are hoping to capitalize on this growing consumer trend and change the way we eat.

Although alternative foods are highly promising, they are unlikely to replace agriculture entirely. Changes in bio-tech, gene editing, and GMO applications are guaranteed to affect current agriculture practices and must be taken into account.

Gene editing via CRISPR-Cas-9

CRISPR- Cas9 has quickly become an essential plant breeding tool, which has been reflected in the level of interest generated in the plant breeding community. Part of its popularity is due to its ability to edit multiple gene loci simultaneously by introducing multiple DNA strand breaks, while still being relatively easy to use. Recent advancements in the biotechnological techniques using this tool has led towards the augmentation of various foods to enhance the macro and micronutrient aspects of these items.

The primary concern currently is consumer acceptance to these commercially available edited food products, specifically due to the varying regulatory provisions in different countries. A global scientific consensus and uniform regulatory measures across countries may provide the necessary  catalyst this industry needs to move beyond the research setting.

These tools could be a potential means to helping solve issues of under or poor nutrition, particularly for low income settings. It is conceivable to expect such techniques to become more mainstream and alter the way food is grown, harvested, prepared and consumed in the developed nations as well over this coming decade.

Hydroponics

Hydroponics is the process of growing plants in a nutrient-mineral solution without using soil. It uses up to 90% less water, can produce 3-10x in the same space and many crops can be produced twice as fast. As unit costs of the required hardware fall, and IoT devices enable fully automated hydroponic farms, the cost of plant production is likely to decrease dramatically in upcoming years.

It is clear that the challenge to feed our planet in upcoming decades is not a technical one, but a human one. The power of the “agriculture lobby” in the USA, EU and elsewhere make food innovation challenging, as exhibited in the lawsuit won by dairy farmers against plant based dairy alternatives in the EU in October 2020.

It has been historically accepted that whilst the advent and widespread adoption of agriculture allowed for the most efficient means of creating consumable food, its practice has narrowed the variety of nutrients we consume. As technology has now advanced over the past few decades, we are reaching a potential stage and must consider that the way we view agriculture can change drastically, away from farms and land parcels, towards labs and oceans.

Over one in four people on the planet work in agriculture, and transitioning to a more efficient system could cause significant unemployment, especially if non-agricultural components play a major part. Alternative livelihoods must be provided if this shift is to be managed smoothly, potentially providing farmers with economic incentives towards more climate-friendly activities, such as reforestation.

Finally, it is important to mention that food systems are equally demand led as supply led. Cultural factors involving food preferences, and palatability are crucial for the future of food, with consumer adoption of taste and texture a necessary prerequisite to transform food systems.

Event Recap

Dec 9, 2020

Event Recap | AgriTechAction 2020

By Borja Prado, Claire Branley

On Tuesday, November 17 the GeoTech Center hosted AgriTechAction 2020, a three-day conference that explored the relationship between agriculture and technology, with the goal for future solutions to food security challenges to be accessible and sustainable for all. In the conference, experts and leaders in agriculture, technology, and national security came together to discuss and help guide the further deployment of data and technology in agriculture; specifically in food production, processing, distribution, security, efficiency, and sustainability

Climate Change & Climate Action Inclusive Growth

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post The future of food: Imagining our food system in the decades to come appeared first on Atlantic Council.

]]>
Reimagining a just society pt. 2 | The end of an era https://www.atlanticcouncil.org/blogs/geotech-cues/the-end-of-an-era/ Fri, 18 Dec 2020 18:06:07 +0000 https://www.atlanticcouncil.org/?p=332912 This blog post series will explore the meaning of a “just society” through multiple lenses and in the context of today’s challenges, including but not limited to the coronavirus pandemic. With contributions from multiple authors, it aims to stimulate thinking and questions that distill the prerequisites and responsibilities for “just societies” in our times. COVID-19 spotlights […]

The post Reimagining a just society pt. 2 | The end of an era appeared first on Atlantic Council.

]]>

This blog post series will explore the meaning of a “just society” through multiple lenses and in the context of today’s challenges, including but not limited to the coronavirus pandemic. With contributions from multiple authors, it aims to stimulate thinking and questions that distill the prerequisites and responsibilities for “just societies” in our times.

COVID-19 spotlights the need for people everywhere to insist on collective action to create a better future. Specifically, the impacts of COVID-19 so far underscore the need for bold new policies grounded in novel thinking better matched to the enduring twin challenges of pandemics and climate change. The COVID-19 disease has hit the most neglected communities worldwide the hardest, as disasters tend to do. As the virus causes death, destruction, and tragedy around the world, human society has gained a sort of pandemic intelligence dashboard about the hot spots of modernity’s failures. In many ways, the pandemic offers us a chance to learn and test new responses to the ever-more challenging future disasters that are inevitably bearing down upon mankind in the 21st century.

Amid a contracting global economy, fraying international ties, and the urgency of discovering a medical solution, it’s easy to miss how the ongoing catastrophe has marked the end of an era.

Preparing overflow graves for COVID-19 victims: a work by Behzad Alipour from https://www.farsnews.ir/photo/13990110000751/%D8%AE%D8%A7%D9%86%D9%87-%D8%A7%D8%A8%D8%AF%DB%8C-%D9%85%D8%AA%D9%88%D9%81%DB%8C%D8%A7%D9%86-%DA%A9%D8%B1%D9%88%D9%86%D8%A7-%D8%AF%D8%B1-%D9%87%D9%85%D8%AF%D8%A7%D9%86

The COVID-19 pandemic has unleashed a socioeconomic cataclysm that compels us to reimagine our modern world. Other than the 1918 global flu pandemic, there is little modern historical precedent comparable to this disaster. Some past catastrophes have catalyzed new thinking about mankind’s understanding of its place in the universe, and the scale of this current crisis should make its implications for the concepts of peace, prosperity, justice, and security hard to ignore. In a rational world prioritizing human survival and well-being, the zoonotic origins of the virus imply the need for such new thinking. Evolving theories about a “just society” in a rapidly changing world can act as vectors to spur new action and inform necessary reforms. “Lessons learned,” alternatively, can be and often are ignored (perhaps even relegated to a forgotten stack of documents in a back office), leading to greater disasters in the future;  public health experts note that it is this past cycle of concern and inaction that has worsened the effects of the COVID-19 pandemic.

Prevailing concepts of peace, prosperity, justice, and security are rooted in a now-defunct epoch of relative environmental stability. In the past, plagues occurred, wars were fought, and peaces negotiated, but the climate at least was relatively stable. By contrast, our times are increasingly characterized by weather extremes that are a product of a radically changing climate and environmental degradations such as deforestation. The mid-twentieth century origins of many modern geopolitical, economic, and international security and human rights conventions mean that they did not anticipate these global challenges and their impacts on human society. A particular mismatch involves intensifying and more frequent incidents of wildfires, droughts and floods, as well as the growing risks of recurrent pandemics — both phenomena stemming from the accumulated impacts of human activities on natural habitats.

While far from the first instance of zoonotic disease transmission to humans, the novel coronavirus is the first to shut down modern global society and actively harm billions of people’s prospects for survival and economic opportunity. Its origins in the nexus of human and wildlife activities tell us that that this economically destructive pandemic won’t be the last.

There is no health security without social security” 

A World in Disorder. Global Preparedness Monitoring Board Annual Report 2020.

Few foresaw that a novel coronavirus would expose the vulnerability of modern society, the global economy, and national and international security. Expert-level commissions warned of the need for improved international pre-pandemic crisis preparedness, but the rapid unfolding of this disaster exceeded most worst-case concerns. “COVID-19 has taken advantage of a world in disorder,” according to the World Health Organization’s Global Preparedness Monitoring Board. Deeply entrenched systemic racism, economic inequality, international distrust, and inadequate societal preparedness have amplified the pandemic’s devastation. “We have created a world where a shock anywhere can become a catastrophe everywhere, while growing nationalism and populism undermine our shared peace, prosperity and security,” according to the WHO. The same advances that have improved quality of life around the world have created “unprecedented vulnerability to fast moving infectious disease outbreaks by fueling population growth and mobility, disorienting the climate, boosting interdependence, and generating inequality.”

“We have created a world where a shock anywhere can become a catastrophe everywhere, while growing nationalism and populism undermine our shared peace, prosperity and security”

A WORLD IN DISORDER. GLOBAL PREPAREDNESS MONITORING BOARD ANNUAL REPORT 2020.

More people now understand that recurrent pandemics, intensified by the increasingly destructive effects of climate change and economic inequality, are inevitable without sweeping changes in human society and its behaviors. Yet, amid a contracting global economy, fraying international ties, and the urgency of discovering a medical solution, it’s easy to miss how the ongoing catastrophe has marked the end of an era.

Humanity itself will be redefined in the coming epoch largely because the pandemic’s socioeconomic and health effects, while unevenly distributed, have touched everyone. The pandemic has widened global fissures between the haves and the have-nots. Those with means have been able to work from home, where they are safer, while many others working in healthcare, food processing facilities, and schools are forced to choose between keeping their jobs and protecting their health.

Children in particular are affected with schools generally closed for in-person learning in many countries while, in others, there is also a rising incidence of child marriages.  Everywhere, for those without access to the Internet, who live in crowded spaces, or were homeless to begin with, keeping up with their education may be impossible.

The future course of global society is not predetermined, but it assuredly will be affected by the pandemic’s toll. Imaginable scenarios include a more dystopian world that, while dominated by artificial intelligence, ubiquitous surveillance, and disinformation, is composed of more impoverished people without basic democratic freedoms or access to affordable healthcare, education, or economic opportunity. Alternatively, the COVID-19 disaster could foster greater awareness of the interdependence of nations with the natural environment within an infinite array of possible scenarios.

Who is responsible? 

Can there be a “just society” without someone or something to take responsibility for preventable human loss of life and opportunity? Even though the novel virus itself is not man-made, the underlying conditions of economic activities and inadequate societal preparations have left billions in harm’s way.

Yet, questions of society’s accountability generally go unasked. The sources of such collective responsibility are unclear, as is the method of engaging all the affected parties on so broad a topic. After all, who or what is responsible for the current cataclysm? And who is responsible for imagining ways to build upon the catastrophe’s lessons for a better future? A recent report from the Council on Foreign Relations notes that “Pandemic threats are inevitable, but the systemic U.S. and global policy failures that have accompanied the spread of this coronavirus were not.” Will new US and global policies integrate the realities of a permanently altered and more disruptive environmental context in efforts to address inequities that worsen the effects of the current pandemic?

Notions of justice, peace, prosperity, and a “just society” will need updating to avoid still worse catastrophes. In an era of global challenges rooted in collective action failures, moreover, what will be the costs of not anticipating massive migration flows exacerbated by a changing climate? What is the cost to ordinary citizens of retrenchment by individual nations, including the United States, from the type of multilateral engagement, trust-building, and burden-sharing that can best prevent such epic disaster? It is clear that the answers to these questions of accountability will not be found on the usual profit-and-loss ledgers.

As the COVID-19 death toll continues to grow, questions persist that even the most sophisticated artificial intelligence cannot answer. Does it matter whether preventable human deaths occur not as a direct result of actions by common criminals or at the direct behest of a criminal regime but instead indirectly result from socio-economic causes, racial discrimination, contempt of science and scientists, and inadequate global crisis coordination? Some say, “It is what it is,” while others say, “It didn’t have to be this way.” This age-old contest over the extent of humankind’s responsibilities for its actions and decisions has been thrown into stark relief in 2020.  The only certainty is that decisions we take today will tip the scales in one way or the other on the inevitability and societal acceptability of preventable human tragedy. In the coming era, our responses to the condition of humanity around the world will define what it means to be human.

Previous installment:

GeoTech Cues

Dec 7, 2020

Reimagining a just society pt. 1 | Is a different world possible?

By Carol Dumaine

The GeoTech Center’s mission is to define practicable initiatives to ensure new technologies and advances in data capabilities benefit people, prosperity, and peace in open societies. Its overarching goal is a “world comprised of just societies.” The GeoTech’s mandate is an ambitious one and, while focused on applying new technologies to solutions to global problems, is anchored in an explicit assumption that its efforts will promote just societies.

Civil Society Coronavirus

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Reimagining a just society pt. 2 | The end of an era appeared first on Atlantic Council.

]]>
Pretrial risk assessment tools must be directed toward an abolitionist vision https://www.atlanticcouncil.org/blogs/geotech-cues/pretrial-risk-assessment-tools-must-be-directed-toward-an-abolitionist-vision/ Fri, 18 Dec 2020 18:06:04 +0000 https://www.atlanticcouncil.org/?p=332995 The United States criminal justice system is increasingly turning to risk assessment tools in pretrial hearings—before a defendant is convicted of a crime—as well as in sentencing procedures. Risk assessment tools give judges a numerical metric that indicates a pretrial defendant’s risk of failing to appear in court, or threat to the community prior to their pretrial hearing. Judges set bail based on this tool. Facing an incredibly high volume of pretrial detainees, risk assessment tools are designed to help quickly and effectively determine pretrial detention and ease courts’ burdens. To truly address the failures of the criminal justice system, however, public sector leaders must:

The post Pretrial risk assessment tools must be directed toward an abolitionist vision appeared first on Atlantic Council.

]]>
The Atlantic Council GeoTech Center seeks to provide public and private sector leaders insight into how technology and data can be used as tools for good. This publication examines the use of risk assessment tools in the United States criminal justice system. The analysis concludes that risk assessment tools must be framed as abolition technology if they are to address the systemic failures and oppressive practices of the criminal justice system.

The United States criminal justice system is increasingly turning to risk assessment tools in pretrial hearings—before a defendant is convicted of a crime—as well as in sentencing procedures. Risk assessment tools give judges a numerical metric that indicates a pretrial defendant’s risk of failing to appear in court, or threat to the community prior to their pretrial hearing. Judges set bail based on this tool. Facing an incredibly high volume of pretrial detainees, risk assessment tools are designed to help quickly and effectively determine pretrial detention and ease courts’ burdens. To truly address the failures of the criminal justice system, however, public sector leaders must:

  • Utilize risk assessment tools as part of a broader effort to reducing the role of incarceration in American society;
  • Account for the biases in crime data that stem the country’s racist history and the criminalization of historically marginalized communities; and
  • Correct data and create the conditions for more accurate data that feed into the algorithms of risk assessment tools

Introduction

In today’s data-driven era, it was inevitable that algorithmic-based technology would be introduced into the deeply flawed US criminal justice system. Risk assessment tools have been hailed as a possible technocratic savior to protect the accused from the biased and unfair rulings made at the discretion of human judges, while also helping speed up pretrial detention decisions. Courts can use data-based risk assessment tools to make quick and supposedly unbiased decisions about pretrial bail and release. These algorithms use various data points including criminal history, socio-economic status, family background, and case characteristics to generate a score. This score is then provided to a judge to inform them of the likelihood that the defendant will appear for a future trial if released on bail. Based on the risk score, judges can grant and set bail amounts accordingly.

In the United States, incarceration takes the form of jails and prisons. Although specifics change based on jurisdiction, local law enforcement runs jails to hold people awaiting trial or people who have been convicted of a minor crime; prisons are for holding a person convicted of a serious crime, such as a federal offence, and are for serving a longer sentence. Approximately 70 percent of the US jail population is made up of pretrial detainees who have not yet been convicted. Many of these individuals must remain there awaiting trial because they are unable to afford cash bail—a practice that effectively criminalizes marginalized and impoverished communities and bloats United States’ jail populations. On average, pretrial detainees are incarcerated in jails for fifty to two hundred days before their trial. This is due to the sheer volume of pretrial detainees, who must wait for their day in court, and the limited capacity of courts themselves.

Proponents claim that risk assessment tools are one way to efficiently and effectively identify low-risk individuals who should not be in jail simply because they cannot post bail. Essentially, risk assessment tools are seen as a way to manage the large volume of pretrial detainees. Risk assessment tools are incorporated or mandated across the country in almost every state. Most recently, California’s Proposition 25 was drafted to replace cash bail with algorithms. Proposition 25, however, failed to pass after opposition by groups such as the NAACP and ACLU. The reason for the opposition was that risk assessment algorithms fail to reduce the volume of people who are arrested and may digitally replicate systemic and historic oppression by relying on data built through a history of criminalizing the black and brown body.

Overreliance on risk assessment tools turns them into prison technology. We define prison technology as existing and emerging technology that aims to disrupt the criminal justice system but ultimately ties new knots of the same inequality that underlies the foundations of the US criminal justice system that it attempts to correct. Investing in such reform actually diverts resources away from more critical reforms, exacerbating bias, unfairness, and inaccuracy in the system, all while maintaining the same levels of policing. While police officers and judges on the ground may try to resist the unjust practices of the criminal justice system, laws and policies in their current form perpetuate harm. As algorithms, risk assessment tools are designed to learn from these rules and their consequences using biased data collected by biased individuals working in an oppressive system. Risk assessment tools can help create a more just society, but only if they are reformed into abolition technology, which envisions the use and implementation of technology to reduce the role of incarceration and policing in the criminal justice system through alternatives that address the underlying causes of crime. If framed towards an abolitionist goal, risk assessment tools can help begin to create a justice system that is fair, effective, and rehabilitative.

Criminalizing America—risk assessment tools as prison technology

The criminal justice system must account for risk assessment tools’ substantial shortcomings before considering them and their underlying algorithms as technological saviors. Complex technology cannot solve age-old societal problems alone. These shortcomings turn risk assessment tools into prison technology. The Partnership on AI explains that “it is a serious misunderstanding to view tools as objective or neutral simply because they are based on data.” When considering the technical details and the human-computer interface of risk assessment tools, the Partnership on AI found three key types of challenges.

First, risk assessment tools are often invalid, inaccurate, or biased in predicting real world outcomes. For example, many algorithms are programmed to measure the likelihood of an individual incurring another arrest, not whether they are a threat to public safety. The tools are simply using the wrong metric. Additionally, algorithms are likely to reflect the existing biases and oppressive practices of over-policing and criminalization of historically marginalized communities. Second, risk assessment tools rely on judges and lawyers to understand how the prediction works and make fair interpretations. This means people in power must effectively interpret statistical information, confidence intervals, and error bands, and be well-versed in the uncertainty of the results. Third, risk assessment tools require effective governance, transparency, and accountability. Algorithms cannot exist behind a black box, but must be made accessible for public examination, debate, and ongoing regulation both inside and outside the courtroom by plaintiffs, defendants, and the general public.

Beyond the technical challenges, risk assessment tools have serious societal implications by continuing to criminalize historically marginalized communities. While this paper cannot go through all the communities who have been disproportionately policed, we particularly note a few including people of color, LGBTQ people, the poor, and the mentally ill. Risk assessment tools, as prison technology, simply automate the criminalization of these and other historically marginalized communities. Criminalization comes from a racist history that cannot be disentangled from the data fed into algorithms. As the Equal Justice Initiative notes, “today’s criminal justice crisis is rooted in our country’s history of racial injustice…and it’s legacy.” Unjust policing standards and the war on drugs evolved in direct response to the civil rights gains of Black, Indigenous, and communities of color to restrict their freedom. Consequently, these communities are intentionally and disproportionally stopped, searched, arrested, and charged, making them more likely to appear as high risk. Due to these practices, the input data for risk assessment tools is at the outset biased, only serving to reflect the racism in our criminal justice system.

This bias is rampant in the criminal justice system. One example can be found in the United States Department of Justice’s investigation of the Ferguson Police Department following the murder of Michael Brown in 2014. The Department of Justice found that officers see “those who live in Ferguson’s predominantly African-American neighborhoods, less as constituents to be protected than as potential offenders and sources of revenue.” Police were not interested in protecting the community: they were lining their coffers. Ferguson is not an isolated incident but instead representative of a broader culture of racism in the criminal justice system that is entrenched in the beliefs and actions of those who enact “justice.” In response to recent protests for racial justice North Carolina police officer and twenty-two-year veteran, Michael “Kevin” Piner, was recorded saying “We are just going to go out and start slaughtering them f—— n——…I can’t wait. God, I can’t wait.” A policing culture that fosters these types of sentiment cannot provide the unbiased and fair data necessary for risk assessment tools. This racism is built into the algorithms of risk assessment tools. In 2016, ProPublica found that COMPAS, a popular risk assessment tool used in Broward County, Florida, disproportionately categorized black defendants as high-risk, even when they did not go on to commit another crime. ProPublica’s study demonstrated how COMPAS was simply prison technology, ineffective at correcting the racist practices residing in our police departments while also upholding racism into its model.

While the criminal justice system has targeted communities of color, more broadly it unfairly punishes people facing poverty and mental illnesses. People experiencing homelessness are arrested for public intoxication, loitering, jaywalking, panhandling, or sleeping in public spaces, and almost all are unable to pay fines. Rather than receiving support, they are more severely policed and more likely to face conviction relative to white collar criminals, who are policed less, treated more leniently, and seen as “good people who made a mistake.” This means that people facing poverty who have committed nonviolent crimes are overrepresented in the data, which feeds into risk assessment tools outputs. This is despite the fact that the cost of white collar crime to the US economy stands at more than $300 billion. The policing of homeless people has also meant that LGBTQ youth are disproportionately targeted—LGBTQ youth make up 40 percent  of the homeless youth population. Trans people are also particularly vulnerable, as they are incarcerated at twice the rate of cis people and experience high levels of mistreatment and sexual assault by the police.

Similarly, people with mental illnesses are simply more likely to be arrested and put in jail rather than receive medical treatment. For one, 20 to 25 percent of homeless people have at least one serious mental health issue. 20 percent of inmates in jail and 15 percent of inmates in prison have a serious mental illness—instead of being hospitalized, they are incarcerated and put into inhumane facilities that deny treatment, are overcrowded, and fail to punish rampant physical and sexual abuse by corrections officers. Today, approximately ten times more people are in jails and prisons for mental illnesses than in state hospitals. The result is a disproportionate number of people who are sentenced due to a mental illness, and a high recidivism and incarceration rate because the underlying mental health conditions are not dealt with in prisons. In the end, people with mental illnesses simply enter a never-ending cycle of incarceration, filling up prisons as their mental health is criminalized and untreated.

Other OECD countries with low incarceration rates do not rely on risk assessment tools

In the United States, risk assessment tools were envisioned to help manage the large number of pretrial detainees awaiting trial. The OECD countries with the lowest incarceration rates do not use risk assessment tools to determine pretrial detention. These OECD countries do not face the same volume of pretrial detainees because they have reduced the overall number of people in prison through alternatives to incarceration: low minimum sentences, emphasis on community service and supervision, and extensive social welfare programs. The OECD countries with the lowest levels of imprisonment have reduced crime by addressing its underlying causes and relying less on incarceration. Imprisonment is reserved for only certain crimes and even then, rehabilitation and reentry are emphasized to prevent recidivism. The result is a substantially lower incarceration rate than the United States’ despite a similar level of pretrial detainees—this solves the pretrial volume problem facing the United States criminal justice system. The table below details the ten OECD countries with the lowest incarceration rates plus the United States and whether they use pretrial risk assessment tools, their incarceration rate per 100,000 people using the national population, and the percent of people in prison and jail who are being held as pretrial detainees.

The United States has a similar rate of pretrial detainees to the other OECD countries, but about ten times the incarceration rate. The United States simply polices people more, locks up people at a higher rate after conviction, keeps them in prison for longer—the average prison sentence in the United States is three years—and creates conditions responsible for an unacceptably high level of recidivism. This is due to an emphasis on and inherent belief in policing and punishment. Prison conditions and relegation to a second-class citizen status without adequate resources for reentry contribute to the United States’ high recidivism rate. The United States Department of Justice found that 68 percent of people released in 2005 were rearrested within three years. On the other hand, Norway’s two-year recidivism rate is 20 percent , Finland’s is 36 percent, and the Netherlands’ is 35 percent. Risk assessment tools are meant to help manage this volume, but a better way may be to address the source of the volume: policing and criminalization.

Scandinavian countries focus less on policing and more on rehabilitation and alternatives to incarceration. This means fewer people entering the criminal justice system and of those who do enter, they leave with a lower likelihood of returning to prison. In Denmark, an emphasis on lower incarceration and no minimum sentencing requirements have enabled judges to assign volunteer hours or court-ordered supervision. To make these decisions, judges use pre-sentence reports, or ‘personal investigation reports,’ to consider a wide range of factors to determine risk. These factors include details such as childhood, employment status, mental health, and other personal or social problems that judges in the United States are not required to consider. In the United States, many people often fail to receive adequate counsel, resulting in sentencing by judges who do not account for other critical factors. Underlying the use of this information in Scandinavian countries is an emphasis on rehabilitation and justice, rather than retaliatory punishment. This is all grounded in a shared commitment held amongst elected officials, prison authorities, and civil servants to police less.

Germany prioritizes “normalization,” to make life in prison as close as possible to life in the community. The German Prison Act states that “the sole aim of incarceration is to enable prisoners to lead a life of social responsibility free of crime upon release.” This has meant resocialization to improve the reentry process and reduce recidivism. The German criminal justice system also employs alternatives to incarceration including fines, suspended sentences (similar to probation in the United States), and task penalties (work and training rehabilitation programs). Similarly, Iceland has implemented low minimum sentencing requirements and often uses electronic monitoring and community service. The European countries detailed above have kept incarceration and recidivism rates low. This is due to a commitment by politicians, judges, lawyers, and law enforcement officers to rely less on incarceration as a solution to the social problems resulting in crime. The United States can learn from these countries, rather than look to risk assessment tools as a way to manage the volume of pretrial detainees.

The United States, however, cannot look to Europe to solve the biases in its criminal justice system. In the United States, the black community makes up 13 percent of the population, but 40 percent of the prison population. The criminal justice system was designed largely to preserve a racial hierarchy. Similarly, Europe‘s colonial legacy and distrust of certain foreigners and ethnic minorities is represented by the people who are incarcerated there. In most Scandinavian countries, for example, people with a foreign background are overrepresented in prisons compared to the general national population.

Risk assessment tools in their current form as prison technology are ineffective because they fail to address the sheer volume of people who are incarcerated, and they operate on data that is biased against historically marginalized communities. Furthermore, the development and implementation of risk assessment tools may divert political and economic capital away from real criminal justice reform. Proposition 25, for example, was estimated to cost the public hundreds of millions of dollars a year to implement. Risk assessment tools in theory can reduce the number of pretrial detainees, but they do little to advocate for lower sentencing, alternatives to incarceration, racial justice, poverty alleviation programs, and drug and mental health treatment. The United States faces a unique set of challenges, which can only be solved by reckoning with its racist past and changing the country’s over-reliance on policing.

Transforming risk assessment tools into abolition technology

Data and technology must be used to uplift communities and break systems of oppression. In their current form, risk assessment tools utilize datasets built by unjust practices that historically marginalize communities, while retaining a white, heteronormative power structure. Scores only reflect the conditions in society today without solving underlying problems like over-policing, criminalization of communities, mental illness, and poverty. Risk assessment tools as abolition technology can do three things: divert people away from incarceration, create democratic mechanisms for the greater public and marginalized communities to influence how judges determine pretrial detainment, and make space to address unjust policing and the causes of recidivism.

First, risk assessment tools should be used with a commitment to limit incarceration. The underlying data used in risk assessment tools fail to accurately represent crime in America. Those developing risk assessment tools must assume this and require a high threshold for pretrial detainment. Risk assessment tools should only determine someone to be high risk if the evidence is overwhelming—and even here the data is likely to be biased. At the least, however, the number of errors might be reduced and judges encouraged to incarcerate less. Judges will have discretion on how risk assessment tools are used, so developers must require judges to override risk assessment tools and explicitly explain their reasoning if they decide someone poses a serious risk to society. This strategy allows for tech to play an important role in bail determination in a way that does not allow judges to hide behind algorithms as they set high levels of bail, while also potentially reducing incarceration. Additionally, risk assessment tools cannot serve as a primary source of decision-making but must instead be a secondary tool that informs judges. The Center for Court Innovation, for example, found that if a judge uses a moderate-high and high-risk threshold algorithm as a secondary tool for defendants charged with a violent felony or domestic violence offence, they reduced overall pre-trial detention and almost eliminated pre-trial detention biases against Black and Hispanic defendants.. In comparison, reliance on only judicial discretion or a high-risk threshold risk assessment tool resulted in both higher levels of detainment and greater levels of bias against Black and Hispanic defendants. Racial biases and incarceration levels were highest when decisions were made primarily through judicial discretion.

Second, open design frameworks are needed to ensure that the data being used is approved by the general public. Arrested people fail to receive adequate legal counsel far too often and must navigate a Kafkaesque legal system. In this case, risk assessment tools can be weaponized to simply serve as another inaccurate and unfair justification for incarceration. Risk assessment tools with open design approaches, however, give an opportunity for greater public involvement and awareness, and they can allow democratic mechanisms to make corrections to the data and change the way judges make decisions. Today, we are able to measure biases in data, as demonstrated by ProPublica in Florida. This means the underlying data can be adjusted, certain egregiously biased datasets removed (e.g. income), and models tweaked in order to reflect a fairer system—all that requires the technical examination of the data and output grounded in an understanding of the systemic issues at play and how they manifest themselves in the data. Critical for this process, however, will be ensuring that the communities most impacted by unjust policing have a substantial voice in the development and implementation of the risk assessment tool. Otherwise, open design frameworks will only continue the tradition of ignoring the voices of historically marginalized communities.

Third, risk assessment tools should not divert resources away from broader criminal justice reform. While risk assessment tools will need funding if they are to be improved, they will be largely ineffective unless more money and resources are directed towards transforming the system into one that is more just. Risk assessment tools cannot be used as an alternative to solving the problems of an unjust system that results in long sentences and recidivism. Along with being morally repugnant, the cycle of recidivism is expensive, and states have limited budgets. Approximately 25 percent of people entered prison in 2017 because of technical probation or parole violations, and a disproportionate number of them were people of color. Risk assessment tools will not solve this problem, the fact that people with mental illnesses are often arrested rather than admitted to hospitals, nor the lack of adequate counsel for people with little financial resources. Instead, risk assessment tools as abolition technology must be used as part and parcel of broader criminal justice reform enacted by states and the federal government.

A more just system based on good technology

Technological innovation should be used to challenge existing power structures to make a more equitable and just society. Risk assessment tools, if directed with an abolitionist vision, can help challenge the status quo of a criminal justice system that relies on incarceration and is predicated on a legacy of racial injustice. Risk assessment tools must be linked together with other initiatives and developed in partnership with the communities most affected. In order to do this, politicians, judges, lawyers, developers, and the public must:

  • Reduce the role of incarceration by changing minimum sentencing requirements, utilizing alternatives to incarceration, and emphasizing rehabilitation over punishment;
  • Refrain from considering data a savior given its bias against historically marginalized communities; and
  • Create the conditions for more accurate data collection by addressing the underlying causes of crime through social welfare programs and the dismantling of oppressive institutions (e.g. laws, policing, business practices).

American leaders and the public must take responsibility for how risk assessment tools are used. An active commitment backed by political and economic capital to help the most marginalized and forgotten is needed. This is no easy task, but true criminal justice reform asks for nothing less.

Hannah Biggs is currently a consultant at the Atlantic Council GeoTech Center where she focuses on researching international innovations in data and technology to promote peace and prosperity. She has held multiple research assistant roles at organizations like the National Criminal Justice Association (NCJA) and Starting Over, and has therefore been able to focus on a diverse portfolio of topics at the state, federal, and international levels. She is passionate about economic development in low and middle-class economies, ethical governance and accountability, and creating equitable and just societies.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Pretrial risk assessment tools must be directed toward an abolitionist vision appeared first on Atlantic Council.

]]>
Silicon Valley’s role in foreign policy and what others can learn from it, Part II: Ecosystem building advice and policy recommendations https://www.atlanticcouncil.org/blogs/geotech-cues/silicon-valleys-role-in-foreign-policy/ Fri, 18 Dec 2020 15:09:57 +0000 https://www.atlanticcouncil.org/?p=331016 In the last twenty years, one of the United States’ key exports has been the technology coming out of Silicon Valley—and along with it, its particular brand of innovation culture.

Unsurprisingly, innovation has risen to the top of policy makers’ agendas around the world. Yet, creating carbon copies of Silicon Valley is not the answer. To compete in the increasingly global innovation arena, countries and companies are writing a new playbook.

The post Silicon Valley’s role in foreign policy and what others can learn from it, Part II: Ecosystem building advice and policy recommendations appeared first on Atlantic Council.

]]>
Alexandre Lazarow is a guest author who works as global venture capitalist and author who contributes to GeoTech Center work on tech innovation and funding. He is presently a venture capitalist with Cathay Innovation, a global fund that invests across North America, Europe, Asia and Africa. He teaches entrepreneurship at the Middlebury Institute for International Studies at Monterey.

Spurring innovation and entrepreneurial ecosystems is at the top of policy agendas worldwide. However, building the next Silicon Valley is not easy. Increasingly, it is also not the right framework for the diverse world we live in.

As a follow-up to Part 1, where we explored the critical advantages to local innovation ecosystems, in Part 2, we will explore strategies to bolster local startup ecosystems, in particular the role of government and other ecosystem builders. While success requires a unique strategy for each location, there are broad principles that should be considered.

Principle 1: Don’t copy Silicon Valley– Leverage local strengths

Silicon Valley is its own unique ecosystem. It is hard to replicate.

Here is the bad news: not every country or ecosystem is going to be the global center of excellence for innovation, or the capital for a particular sector. Yet, at the same time, every region has unique advantages and specialties. When considering what to prioritize, ecosystems need to consider what industries will thrive within their borders. These decisions can be based on existing and successful multinational industries, geographic features, natural resources, strengths in rule-of-law, etc.

For example, London became a hub for financial technology firms because its finance and banking industry was both long-established and global. The fact that it still maintained currency independence while being a part of the European Union and in close proximity to its marketplace made it even more enticing to start a financial business there. Minneapolis, too, branded itself as an American medical technology hub because of its top-tier research and hospital institutions, which spawned the growth of medical device companies like Medtronic, which in turn caused more small companies to headquarter themselves there to tap the talent and resources invested in the healthcare industry. Estonia is positioning itself to become a leader in e-government based on local strengths and infrastructure. Tel Aviv, likewise, is becoming a hub for cyber security.

Each innovation ecosystem operates in a unique environment defined by a political economy,  macroeconomic circumstance, and an ecosystem of individuals in the sector. Any ecosystem also necessarily includes a broader industry environment and set of expertise. These can be leveraged to scale local innovation ecosystems.

Principle 2: Support ecosystem infrastructure

Launching a startup today is easier than ever thanks to platforms like Amazon Web Services, which allows anyone to rent a super-computer by the hour instead of building their own servers internally, or Shopify, which allows new ecommerce retailers to have a modern store, payments and logistics set-up with just a few clicks. Yet at the same time, many ecosystems have acute innovation infrastructure gaps that hamper the growth of the ecosystem. Over 3 billion people don’t have an address, making online deliveries challenging or impossible. Over 1.5 billion people are unbanked, roadblocking payments to online merchants.

Some countries take it upon themselves to offer infrastructure through national programs. In India, for example, the government is leading a project called Aadhaar, a national identification system. This not only provides access to government services, but in the innovation world it is part of the customer authorization “stack” that aims to reduce fraud. This way, the system becomes a shared resource that should raise all ships  while also protecting citizens from data privacy violations by corporations. As Nandan Nilekani, cofounder of Infosys, told me, “The objective with these programs is to create digital public goods. The first was Aadhaar, which provides a public, verifiable identity. Subsequently, the National Payments Corporation of India offers a successful interoperable payment network called UPI. The next stage is data empowerment, where data is put in the hands of users to use for their own benefit. Our vision is that, enabled with all this infrastructure, magic can happen. All kinds of products and services can be reimagined.” Instead of one or two companies building this themselves and creating an identity authentication tool by their own rules, the government has stepped in to level the playing field. The creation of Aadhaar has used the power of APIs to act as a catalyst for new types of innovation to be built upon a shared technological resource.

Principle 3: Build a launching pad to go global

As discussed in Part 1, innovation today is “born global.” Over 45 percent of South East Asia’s billion-dollar businesses are in Singapore, a country with less than 1 percent of the region’s population. Dubai raised the only non-Israeli MENA billion-dollar business with only 1.5 percent of the area’s population. Singapore and the UAE thrive off their ability to provide an easy place in which to do business while opening up a regional market. Singapore is second in the entire world in the World Bank’s ease-of-doing-business rankings, and the UAE is first in its region and sixteenth overall.

Becoming a global launching pad is no easy feat and requires some basic building blocks like efficient and flexible regulation, a stable currency, and legal and financial services organizations among other infrastructure. Regulatory environments need to be flexible when it comes to experimentation, which allows companies to test products and make more informed estimates about how the product will fare in similar markets in the region. In that same vein, IP protection needs to be robust and should allow companies to have stronger legal protection regionally as well as nationally.

To develop, and to attract the talent required to develop, a startup ecosystem necessitates equal levels of thoughtfulness and strategy.Even before the current pandemic, more and more startup teams were starting to become more distributed across the world. No longer do companies need to have headquarters in downtown offices. One of the best ways to capitalize on this shift towards remote work and distributed teams is to become a hub for global talent, even if the company itself is not located within a country’s borders.

Countries should make two efforts to this end. First, they can build the talent pool by educating citizens in relevant fields like entrepreneurship and computer science. Second, they can improve immigration laws. Again, these new minds may be absorbed into the local startup pool, but with the growth of distributed teams, these are great opportunities to also increase employment (and therefore income tax revenue) that would not occur otherwise, while the actual startup headquarters is elsewhere.

Principle 4: Support cross-pollination of ideas

On the one hand, it is critical for countries to develop ecosystems that can thrive in an increasingly born-global environment. The flip side is to also support cross-pollination of people and ideas. In a post-COVID-19 world, it may seem easy to devalue connectivity across borders, but the reality is that it is essential to innovation’s development. New ideas rarely come out of thin air but are instead iterations on other concepts that were iterations before that, all passed on via an innovation supply chain. Go-Jek, for example, took lessons from Uber, the premier global ridesharing app, and from a number of Chinese super-apps like WeChat, to come up with its own regional flavor of an on-demand rideshare, courier, and financial services app. Its model and evolutions, in turn, influenced the original.

We are seeing the importance of global idea cross-pollination as COVID-19 reshapes the innovation supply chain. Companies, trade organizations, and industry organizations are having to pivot to invent new ways to interact, engage partners, and share ideas. Virtual collaboration is leading to new global hackathons and even vaccine collaboration. International communication and collaboration thus are critical to generating the next wave of innovators.

Nations can accomplish this starting with their education systems. States cannot be too insular when it comes to international study from an import and export perspective. International students need to be incentivized (or at the very least the state should not  impede them ) for the same reason that some local students should be incentivized to study abroad–this exchange leads to new ideas and new solutions to problems both global and local. Research has found a correlation between GDP growth and the rate of international education. The opposite is also true: a lack of cross-pollination may hamper innovation.

States can also support industry dialogue, either through conferences, sister city programs, or by assisting joint ventures or cross-border R&D projects. Programs like Start-Up Chile and Start-Up Brazil are state-run and look to institutionalize cross-pollination by encouraging entrepreneurs from around the world to start their businesses locally. While these programs have had mixed success in relocating startups, they can drive cross-pollination, an even more important objective.

In a world with COVID-19, conferences and meet-ups are going to look different, surely. But this makes the need for cross-pollination even more dire. It will require more creative solutions so that nations and startup communities do not miss out on the benefits of collaboration across borders.

Principle 5: Incentivize corporate as well as philanthropic involvement

Governments have only so much influence in startup ecosystem development. In fact, a lot of support for startups comes from people who live and breathe business andare often overlooked: other corporate leaders. They have a large role to play and a vested interest in being mentors to new entrepreneurs and providing an environment for the growth of new ideas, as building out their local ecosystems leads to opportunities to capitalize on the ensuing growth and development.

Governments can create the meeting grounds for these individuals to collaborate via trade organizations, mentorship programs, or sometimes state utility companies. M-Pesa, for example, was an offshoot of Safaricom, a Kenyan telecom operator. It was originally launched as a public-private sector initiative, and it tapped into international development funds as well. It functioned like a startup with autonomy from its inception but with enough access to the pipes at Safaricom to grow the business at a rapid scale.

Corporations can not only be a source of capital for small companies but also a landing path for acquisition for those that are less successful, which decreases the risk of starting.

The philanthropic and non-profit sectors in a region can become powerful allies. The social sector is also looking to solve intractable problems, often leveraging technological tools. Impact investors and philanthropies are becoming important innovation funders, particularly in the most frontier markets.

Principle 6: Support older siblings

Just as elder siblings often face unrelenting parental resistance, first generations of entrepreneurs in nascent ecosystems often find it challenging to succeed. As they forge ahead, they create the ecosystem and environment they need if they are to realize success, and, by breaking down barriers, they benefit their younger siblings. It should be a priority of the government to make life a little easier on these older siblings, as they are the same companies that come to be fundamental to the ecosystem building effort later on.

A few trailblazing older siblings can make all the difference. In Latin America, for example, older siblings from three companies, including MercadoLibre, the largest e-commerce platform in Latin America, are linked to 80 percent of startups in the region. After the IPO, one of MercadoLibre’s founders, Hernan Kazah started Kaszek Ventures, a VC firm aimed at getting new startups funding, mentorship, and encouragement. He also served on the board of LAVCA (Latin American Private Equity and Venture Capital Association) and co-founded ARCAP (an Argentinean association for private investing). Supporting these older siblings in their efforts to build the next generation of entrepreneurs is paramount to success.

Older siblings’ efforts are compounding, too, and they tend to have a disproportionate impact on their ecosystems. Endeavor refers to this phenomenon as the “multiplier effect.” As successful older siblings scale, they support many leaders of the next generation, who then go on to replicate their success, building upon the prior generation’s success. In China, after its first unicorn scaled in 2010, it took five years to achieve its fifth, and the very next year the count skyrocketed to twenty-one. A similar dynamic is unfolding in India, the United Kingdom, and Latin America, where similar numbers of unicorns cropped up. Looking at startup ecosystems around the world, there seems to be an inflection point when  three to five older siblings bring their companies to exit, depending on the size of the market.

The reason for this exponential success often lies in the networks that are created by older siblings. For example, in Silicon Valley, more than two thousand companies—including Instagram, Palantir, WhatsApp, and YouTube—can be linked to eight individuals who co-founded Fairchild Semiconductor back in 1957. A staggering 70 percent of public Bay Area technology companies have some link to Silicon Valley’s metaphorical patient zero, Fairchild. These key individuals at successful companies are able take the lessons learned, combine it with their own ideas, and beget success on an even greater scale.

Key Recommendations
Don’t replicate Silicon Valley. Understand the unique strengths and positions of one’s own ecosystem, and what can fit that world best.
Examine key infrastructure gaps in a country or region. Many startups in more nascent ecosystems are forced to build a range of enabling infrastructure, just to provide an end product. Solving these roadblocks will unblock innovation. Building platforms for the ecosystem can catalyze it.
Understand the global nature of innovation. This means being a friendly place to do business, and being a launching point for entrepreneurs, both as a market but critically also as a place to recruit to. Immigration is a key lever for team building and welcomed thoughtfully.
Foster the cross-pollination of ideas. This can come from education and exchange programs. Of course, immigrants are the ultimate cross pollinators
Work with corporations, philanthropies and the social sector in ecosystem to bring them at the table. Building an ecosystem cannot be done alone.
While it can be tempting to get points on the board, and support nascent ecosystems (new companies formed), what will really move the needle are companies that scale.
Perhaps most importantly, serve entrepreneurs. The idea is not for the government to create the ecosystem. Rather, it is to provide innovators and the entrepreneurial community with the tools and resources they need to succeed. Foster older siblings since they will be catalysts.

Parting thoughts

As in Part 1, innovation ecosystems can be incredible assets, not just for domestic strength but also for national competitiveness and foreign policy. To scale innovation ecosystems worldwide will take novel strategies. These principles should be used to start thinking about where to invest and where to start building; namely, in the tools and systems that entrepreneurs need to survive and subsequently thrive. New startup ecosystems can be fragile, so they benefit immensely from investments in infrastructure, education, exchange programs, and regulatory reforms, among other initiatives, to begin to reap the many benefits of creating a startup ecosystem–one that will strengthen a domestic economy, improve a country’s international standing, and shore up technology security.

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post Silicon Valley’s role in foreign policy and what others can learn from it, Part II: Ecosystem building advice and policy recommendations appeared first on Atlantic Council.

]]>