The director of the Cybersecurity Agency, He Quande, elaborates on the opportunities and challenges of AI, facing and adapting to the profound impact of generative AI.

Content

In the current wave of rapid technological development, generative AI has become one of the few revolutionary technologies in the history of human technology, with its depth, speed, and breadth of impact exceeding expectations. He Quande, the director of the National Institute of Cybersecurity Research, recently delivered a speech titled "Challenges and Opportunities of New Generation AI for Cybersecurity Governance," exploring in depth how AI is changing the governance model and application pattern of human society. He pointed out that AI is like a steam engine implanted in the human brain, not only bringing efficiency improvements but also driving comprehensive social change.

One of the characteristics of generative AI is its infinite scalability in application scenarios. This technology breaks through the limitations of traditional artificial intelligence, and can be used not only for text generation and language processing, but also involves various applications such as image creation, decision support, and medical diagnosis. He Quande further elaborated that generative AI is leading the rapid development of the digital society, and its potential impact is not limited to the technological field, but also profoundly changes social structures and industrial economies. He said: "We have never seen a technology penetrate every field at such a fast pace, and the scope of its influence is astonishing."

Generative AI brings the potential for various infinite applications

The development of AI technology is changing rapidly, from the initial chatbots to AI agents capable of controlling computers, each breakthrough is refreshing. He Quande emphasized that AI applications are in a phase of rapid growth, and will further bring technological innovations beyond imagination in the future. For example, the popularization of Digital Twins technology allows companies like Siemens to develop virtual digital hearts, injecting new momentum into healthcare and precision treatment.

He Quande quoted the research of Stanford University AI expert Fei-Fei Li, which once again confirmed the broad application prospects of AI in the medical field. From Fei-Fei Li's research, it can be found that through AI technology, real-time monitoring of healthcare workers' hand hygiene and patients' activity levels in the intensive care unit can be achieved, effectively improving the quality of medical care.

He Quande pointed out that although the application of generative AI in the medical field is still in its early stages, with the continuous iteration of technology, there will surely be more innovative applications of smart healthcare in the future, helping doctors make more accurate diagnoses and decisions.

The application potential of AI goes far beyond this. In the manufacturing sector, the application of digital twins also makes industrial production processes more transparent and precise. For example, by creating virtual models of products or systems, companies can simulate possible variables in the production process, achieve predictive maintenance, and maximize production efficiency. At the same time, in the field of environmental monitoring, AI technology is used to analyze large amounts of meteorological data, which can also assist governments in formulating more precise strategies to respond to climate change.

Dual-track Drive for Digital and Net Zero Transformation

He Quande further analyzed that "digital transformation" and "net zero transformation" are key directions for global future development. He pointed out that generative AI is rapidly becoming an important driver of digital transformation, not only enhancing industrial efficiency but also helping to optimize resource application. Digital transformation is an important strategy for many enterprises facing competitive pressures in the new era, "with AI as the core driving force, fundamentally changing traditional business models," he said.

Specifically, generative AI can assist companies in reducing energy consumption and waste during the production process. For example, some large enterprises have already begun to utilize AI technology for supply chain optimization, precisely planning product transportation routes and storage methods, which not only saves logistics costs but also reduces carbon emissions.

The net-zero transformation is an even greater global challenge, aimed at achieving carbon neutrality through technological innovation. He Quande mentioned that many countries are actively promoting the application of AI technology in the net-zero transformation, for example, using generative AI for data analysis of energy consumption to help companies improve factory design, thereby reducing negative impacts on the environment. He pointed out that AI is not only a tool for increasing efficiency but also a key technology that helps the world achieve sustainable development.

The development process of AI is both a spear and a shield for human society

However, the rapid development of AI has also raised concerns among humans about potential risks. He Quande uses the steam engine era as an example, pointing out that every technological revolution is accompanied by changes in the order of survival, bringing employment challenges and other negative impacts. He particularly emphasizes that the core of generative AI large language models lies in the quality of the data; if the model is trained using inaccurate data, it may lead to erroneous results. Therefore, how to enjoy the efficiency improvements brought by AI while reducing its risks is an important issue that must be addressed currently.

Therefore, he stated that the prevention and governance of the potential risks of AI has become an inevitable responsibility that humanity must bear in the process of AI development. He specifically cited the research discussions of 2024 Nobel Prize in Physics winner John Hopfield and the 'father of deep learning' Geoffrey Hinton in the field of neural networks and artificial intelligence (AI), emphasizing that a cautious attitude should be maintained in the face of AI development.

Jeffrey Hinton once warned, "Most people underestimate the potential benefits of artificial intelligence, just as they underestimate its risks." Whether these systems can be effectively controlled when AI systems' intelligence surpasses that of humans remains an unknown. He Quande further pointed out that regardless of pessimistic or optimistic predictions, AI risks and opportunities coexist, and therefore, AI technology applications should be promoted with a cautious and pragmatic attitude.

"The risks brought by generative AI are significantly different from traditional software security." He Quande pointed out that large language models compress global digital data, and their advantage lies in widespread application, but the risks cannot be ignored. He warned that when models are 'poisoned' by malicious data, it could have irreparable effects on critical decisions. Therefore, enterprises and governments must pay attention to the data sources and accuracy of core models when applying AI, ensuring the safety and credibility of the technology.

Because the release of AI technology marks a new chapter in human civilization, He Quande calls on the government and enterprises to pay attention to risk prevention and governance while promoting AI development, ensuring that the potential of AI technology can be safely and efficiently utilized. He emphasizes that AI is both a "spear" and a "shield"; the key lies in finding a balance to make it a driving force for the progress of human society rather than an obstacle.

Applying AI should focus on cybersecurity, ethics, and governance risks

In the rapid rise of generative AI, many large enterprises have taken proactive measures to address unpredictable risks. He Quande stated that these enterprises generally establish multi-layered protective measures internally, such as conducting Red Team exercises or implementing the so-called aggressive "Prompt Diversity Injection" mechanism to proactively guide and eliminate potential risks.

He pointed out that the role of the red team is to simulate the behavior of external attackers and test the defensive capabilities of internal AI systems; while the test of "prompt diversity injection" is to detect inappropriate or erroneous content that generative AI may produce, in order to eliminate potential issues before the product is officially launched.

In addition, some companies have established AI ethics committees specifically responsible for formulating relevant regulations for AI development and application, ensuring that technological development aligns with social values. He believes that the shift from prompt-based rules to data security training challenges indicates that companies applying AI have begun to examine the challenges posed by generative AI, which also demonstrates the high importance that companies place on AI safety issues.

From He Quande's 40 years of experience as a civil servant responsible for promoting government digitalization, generative AI is having an unprecedented impact on the world at an unprecedented speed. This impact is not limited to the technology sector but also extends to the economy, culture, and policy-making.

The emergence of generative AI has elevated automation and efficiency to a new level; its applications span content generation, language translation, medical diagnosis, and product design, quickly permeating all aspects of human life. This has not only attracted international attention but also prompted major countries to work together to address the risks of AI governance.

For example, countries such as the UK and the US are working together on cross-border cooperation to identify AI risks and implement classification and tiered governance. This kind of global cooperation has been extremely rare in past technological developments, which also highlights: the potential impact of generative AI is very significant.

He believes that the risks of generative AI are not limited to its use or misuse; the content it generates may involve infringement, improper information integration, and even negative impacts on the environment. For example, with the surge in demand for generative AI, the United States plans to build multiple data centers to meet the growing computational needs. While these data centers can enhance technological capabilities, they also pose significant challenges to energy supply and environmental protection, including increased carbon emissions and the consumption of large amounts of water resources and electricity. Therefore, while countries promote generative AI, they must also consider its long-term impact on the environment.

AI Technology Risks and Management Challenges

He Quande stated that the risks of generative AI can be divided into three categories: technical risks, managerial risks, and social risks. First, technical risks mainly arise from the vulnerabilities and limitations of the AI model itself, such as generating inaccurate content or being maliciously exploited for attacks.

Secondly, management risks involve how companies deploy AI systems, including how to ensure data security, privacy protection, and the transparency and interpretability of models; thirdly, social risks encompass the impact of generative AI on social structures, such as potential job displacement and ethical controversies.

He believes that the phenomenon of AI hallucination is one of the significant challenges among the current technical risks. The so-called AI hallucination refers to the content generated by AI systems that appears reasonable but is actually incorrect or meaningless. According to a survey by McKinsey, the two major challenges faced by American companies adopting AI are: AI inaccuracy and issues related to intellectual property rights (IPO). Among them, AI inaccuracy may lead to erroneous decision-making, while IPO issues involve the copyright ownership and usage rights of AI-generated content.

In response to these challenges, McKinsey has developed a set of guiding principles, proposing 10 considerations for addressing AI risks, including aspects such as model effectiveness, safety, reliability, and transparency. McKinsey emphasizes that establishing a Responsible AI framework is key to promoting the coordinated development of technology and society.

Establishing regulations lays the foundation for AI development

The United States and the European Union have successively introduced AI risk governance and application assessment standards, setting a good example for the world. The United States strengthens the regulation of AI technology at the national level, while the European Union proposes a series of regulations through the "AI Act," covering review mechanisms and transparency requirements for high-risk AI applications.

In this wave of trends, the Digital Industry Department of the Ministry of Digital Development in Taiwan has established the "AI Product and System Testing Center," dedicated to providing testing services for local AI products. The center has established a standardized evaluation system for 10 assessment items, including Safety, Explainability, Resiliency, Fairness, and Accuracy, to help enterprises enhance their technological competitiveness.

In addition, He Quande stated that the government is also promoting the "Draft Basic Law on AI", hoping to provide a legal basis for the development of AI technology in Taiwan and to address the cybersecurity and ethical challenges posed by generative AI. For example, the basic law provides clear guidance on the application scope and risk control of AI technology, laying the foundation for balancing technological innovation and social benefits.

Generative AI is considered a powerful tool for cybersecurity attacks and defenses, and its application characteristics can be divided into the following points. First, the speed of attacks has increased, and the difficulty of protection has risen: The development of AI technology has made attacks more precise and automated, requiring companies to invest more resources to cope with these new threats. For example, AI-generated deepfake technology can be used to create realistic fake videos and audio, posing a serious threat to information security.

Second, the collection and prediction of cybersecurity threat intelligence: AI can quickly analyze large amounts of data, identify potential threats, and make predictions. In the future, AI-driven Security Operations Centers (SOC) will become mainstream, helping businesses take action before threats occur.

Third, the design of the security product lifecycle: Security by Design has become the core concept of product development. This approach requires developers to consider security during the product design phase, rather than remedying it afterwards. Therefore, product developers need to provide Security by Default features, while users need to raise Security by Demand requirements, forming a positive interaction between upstream and downstream.

Fourth, the foundation of AI security: data credibility. Because secure and reliable data is the basis for trustworthy AI, traditional programs rely on manually written logic, while generative AI requires large-scale data support. This means that companies need to establish a sound data governance system to ensure that the sources of data are legal and the quality is reliable.

The Director of the Cybersecurity and Infrastructure Security Agency (CISA) in the United States, Jen Easterly, previously proposed the principle of "Secure-by-Design," which emphasizes the importance of introducing security mechanisms early in the development of AI tools to reduce the costs and efficiency issues of later remediation.

Four AI Risk Response Strategies

He Quande also quoted the four key aspects of AI risks that Jen Easterly is concerned about (Velocity, Vulnerability, Veracity, Vigilance), hoping to serve as a reference for Taiwan in formulating AI safety policies.

The first V is the speed of technological development (Velocity): The speed of AI technology advancement and the emergence of threats is accelerating, requiring proactive prevention strategies. For example, the rapid iteration of AI technology may make it difficult for companies to timely update their defense systems, thereby exposing them to new types of threats.

The second V refers to system vulnerability (Vulnerability): Many AI tools do not adequately consider security during the design phase and are susceptible to attacks. For example, AI-generated models may contain undiscovered vulnerabilities that can be maliciously exploited for data theft or denial-of-service attacks.

The third V emphasizes data veracity: The quality of input data for AI models has a direct impact on their results. To ensure the accuracy of results, companies need to establish a data source review mechanism to prevent poor data from affecting decision-making.

The last V is Vigilance: Governments and businesses need to remain highly alert and establish rapid detection and response mechanisms. For example, in response to the potential misuse of deepfake technology triggered by generative AI, a regular monitoring mechanism should be established to prevent the impact of false information on society.

The development of AI technology will rewrite the rules of the cybersecurity game, not only enhancing production efficiency but also bringing more challenges. For example, in the future construction of smart cities, AI will be responsible for the analysis and decision-making of big data. Without proper security mechanisms, it will face enormous risks. Therefore, he believes that Taiwan should continue to strengthen its AI security policies, combining both technological and managerial approaches to ensure that AI technology achieves the best balance between security and efficiency, paving the way for the sustainable development of future technology.

Emphasizing cybersecurity from the development stage and viewing cybersecurity protection from a hacker's perspective

He Quande pointed out that since the Internet began, many concepts of cybersecurity have continuously advanced with technological development. The past cybersecurity technology was mainly based on firewalls, which later evolved into Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS).

However, as online threats become increasingly diverse, he said, the current cybersecurity concept has gradually shifted from a trusted framework to a Zero Trust model. This shift reflects the complexity and diversity of cybersecurity challenges in the digital age and requires companies to make comprehensive changes from the technical level to the management structure.

He Quantai emphasized that we must comprehensively upgrade from security to resilience, which is not just a technological innovation but also a long-term strategic plan. He mentioned that resilience means the organization's ability to quickly resume operations when facing attacks or disasters, and it can reduce the impact of losses. "This is a necessary capability to combat current and future cybersecurity threats," he said.

In addition, he raised several important points, including: the importance of Shift Left Security. This assertion emphasizes the need to prioritize security considerations in the early stages of software development to avoid the high costs of remediation later. He Quande stated that many companies overlook the importance of early involvement in security, leading to numerous vulnerabilities after the product is launched, which not only affects user experience but may also damage brand reputation.

At the same time, he also suggested that cybersecurity strategies should be strengthened from the hacker's perspective, simulating the behavior of potential attackers, which helps companies more accurately identify system vulnerabilities and deploy countermeasures in advance. This is a more proactive approach to cybersecurity management, especially today when attack methods are becoming increasingly diverse, making prediction and prevention more important than responding after the fact.

Another important point that cannot be ignored is the importance of data protection. He pointed out that system failures can be rebuilt, but the loss of data may be irretrievable. Therefore, companies should not only focus on data storage and backup, but also need to design more comprehensive security mechanisms for data transmission, usage, and access.

Applications and Challenges of Generative AI

In the wave of generative AI, He Quande particularly pointed out that this technology brings us unprecedented opportunities and challenges. He emphasized that we should "Embrace Gen AI with eyes wide open." This statement means that while accepting the innovative value of AI, we must also remain highly vigilant and deeply understand the potential risks it may bring.

He further analyzed the multifaceted challenges of generative AI applications. First, there is the continuous growth in computing power demand, which is a significant barrier for small and medium-sized enterprises. Second, the global shortage of AI talent is an important factor limiting the promotion of technology. Third, he cited data indicating that the current proportion of Chinese data in AI training data is only 1.7%, with traditional Chinese being even lower than 0.5%. This results in most AI models performing poorly when processing traditional Chinese, failing to meet the needs of the local market in Taiwan.

To address the issue of this training model, the National Science and Technology Council promotes a large language model based on Traditional Chinese called "TAIDE" (Trustworthy AI Dialogue Engine). The purpose of this model is to create an AI technology platform that aligns with Taiwan's cultural characteristics and meets local needs. This is not only an important step in Taiwan's digital transformation but also provides a global example of AI that supports diverse languages and cultures.

In addition, he believes that the potential risks of generative AI also include: data security and model bias. For example, AI-generated content may inadvertently leak confidential information or lead to biased decisions due to imbalances in the training data. To address these issues, developers must be required to incorporate higher standards of security and fairness during the design phase.

Another highly discussed topic is: the duality of AI technology in the field of cybersecurity. He said that AI can be used for rapid detection and defense against potential threats, such as identifying abnormal behavior by analyzing massive amounts of data; on the other hand, AI can also be maliciously used to develop more sophisticated attack tools. "Therefore, how to use AI to combat AI has become an important part of current cybersecurity strategies," he said.

He Chuan-De firmly believes that AI technology will have a profound impact on Taiwan's future social and economic development. He mentioned that AI can not only improve production efficiency but also enhance the quality of life. For example, visually impaired individuals in the future may be able to hail a taxi through AI technology or use AI to assist with complex tasks in daily life. These application scenarios not only demonstrate the potential of AI but also show humanity the beautiful prospects of the combination of technology and humanity.

He further mentioned that AI will also drive Taiwan's industrial upgrading and enhance international competitiveness. For example, in the manufacturing sector, AI can achieve comprehensive optimization of the supply chain; in the medical field, AI can accelerate new drug development and improve diagnostic accuracy. He believes that these technological applications can not only bring economic benefits to enterprises but also create greater public value for society.

He Quande quoted the French writer Marcel Proust's famous saying: "The real voyage of discovery consists not in seeking new landscapes, but in having new eyes." This statement emphasizes the importance of innovative thinking. He believes that especially in the era of rapid AI development, traditional concepts and methods can no longer cope with emerging challenges. We need to re-examine the relationship between technology, governance, and social development from a completely new perspective.

Finally, he urged Taiwan to start with simple risk management, gradually adapt and act quickly. He emphasized that future success depends on how we view governance and the application of technology with new thinking, and inject new momentum into Taiwan's social and economic development.

He Quande said: "In the AI era, Taiwan should make good use of its own innovative capabilities to create application scenarios with local characteristics, allowing AI to become an important pillar for promoting social equity, economic development, and technological progress."

The director of the Cybersecurity Agency, He Quande, quoted the four aspects of AI risk concerns from Jen Easterly, the director of the U.S. CISA, and believes that in the future, the government can use these four aspects as a reference when formulating AI security policies.

Summary
In a recent speech, He Quande, director of the National Information Security Research Institute, discussed the transformative impact of generative AI on governance and society. He likened AI to a steam engine implanted in the human brain, enhancing efficiency and driving comprehensive social change. Generative AI's limitless application potential extends beyond text generation to areas like image creation, decision support, and medical diagnostics, significantly altering social structures and economic industries. He emphasized that AI is rapidly evolving, with innovations like Digital Twins enhancing healthcare and manufacturing processes. AI's role in digital transformation and achieving net-zero goals is crucial, as it optimizes resource use and reduces energy consumption. However, He also warned of the risks associated with AI's rapid development, citing historical precedents of technological revolutions leading to job challenges and societal shifts. He stressed the importance of data quality in AI models to prevent erroneous outcomes and highlighted the necessity of addressing potential risks while harnessing AI's benefits. Ultimately, He underscored that managing AI's risks is a responsibility humanity must embrace as the technology continues to evolve.