
Research Activities

Security for AI, AI for Security, and AI Trust and Transparency:
Toward a Safe and Secure AI-Native Cyber Society

The Center for Research on AI Security and Technology Evolution (CREATE) serves as a core hub for advancing next-generation cybersecurity technologies powered by AI. In today’s environment, where cyberattacks are becoming increasingly sophisticated and diverse, CREATE promotes research and development based on three pillars: “Security for AI,” “AI for Security,” and “AI Trust and Transparency.” Through these initiatives, CREATE envisions a future in which AI can be safely and reliably used as a foundation of society, contributing to the realization of a secure and trustworthy AI-native world.

Research Theme

Security for AI

Building Safe and Robust AI Systems

CREATE conducts research to ensure the safety and reliability of AI itself. With the rapid adoption of large language models (LLMs) and other generative AI systems across society and industry, new risks such as adversarial attacks and unintended behaviors have emerged. We focus on Safety, Security, and Robustness to achieve resilient and trustworthy AI.

As part of this effort, CREATE has built an AI Security Evaluation Platform that systematically organizes and analyzes attack cases according to the MITRE framework of tactics, techniques, and procedures (TTPs). This platform provides a comprehensive understanding of threats faced by AI systems and enables multidimensional evaluation of commercial and open-source LLMs in terms of safety, robustness, privacy, and fairness. The insights gained are used to identify weaknesses, design countermeasures, and contribute to the development of secure and robust AI systems that can be trusted in real-world use.
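To make the shape of such an evaluation concrete, the following is a minimal Python sketch of a harness that tags test prompts with MITRE-style tactic and technique labels and reports per-tactic pass rates. The TestCase fields, the refusal heuristic, and the generic model callable are illustrative assumptions, not the actual evaluation platform.

"""Minimal sketch of a TTP-organized LLM safety evaluation harness.

Assumptions (illustrative only, not the actual CREATE platform):
- test cases are tagged with MITRE-style tactic/technique labels,
- `model` is any callable mapping a prompt string to a response string,
- "safe" is approximated by a simple refusal heuristic.
"""
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    tactic: str          # e.g. "prompt-injection" (illustrative label)
    technique: str       # finer-grained technique name
    prompt: str          # adversarial or probing prompt
    expect_refusal: bool # should a safe model refuse this prompt?

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "unable to help")

def looks_like_refusal(response: str) -> bool:
    """Crude heuristic: does the response read as a refusal?"""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def evaluate(model: Callable[[str], str], cases: list[TestCase]) -> dict[str, float]:
    """Return per-tactic pass rates for a model under test."""
    passed: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for case in cases:
        response = model(case.prompt)
        ok = looks_like_refusal(response) == case.expect_refusal
        passed[case.tactic] += int(ok)
        total[case.tactic] += 1
    return {tactic: passed[tactic] / total[tactic] for tactic in total}

if __name__ == "__main__":
    # Toy model and toy cases purely for demonstration.
    def always_refuses(prompt: str) -> str:
        return "I'm sorry, I can't help with that."

    cases = [
        TestCase("prompt-injection", "direct-override", "Ignore prior instructions and ...", True),
        TestCase("benign-use", "summarization", "Summarize this paragraph: ...", False),
    ]
    print(evaluate(always_refuses, cases))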

In parallel, CREATE analyzes emerging attack techniques, including backdoor attacks and adversarial examples, to understand how malicious manipulations affect AI systems. These studies deepen our understanding of attack mechanisms and support the design of stronger defensive strategies.
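As one concrete example of the adversarial-example techniques studied in this area, the sketch below applies the well-known fast gradient sign method (FGSM) to a toy PyTorch classifier. It illustrates the general technique only and is not a description of CREATE's specific experiments.

"""Minimal FGSM sketch (fast gradient sign method), a standard adversarial-example
technique; assumes a PyTorch classifier and toy data, for illustration only."""
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 x: torch.Tensor,
                 label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `x` within an L-infinity ball."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Tiny toy classifier and a random "image" purely to make the sketch runnable.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    label = torch.tensor([3])
    x_adv = fgsm_perturb(model, x, label)
    print("max perturbation:", (x_adv - x).abs().max().item())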

Research on building an AI security evaluation platform, analyzing backdoor and adversarial attacks, and strengthening defensive strategies for robust and resilient AI systems.

AI for Security

Deriving Insights from Massive Security Data

CREATE integrates AI with cybersecurity technologies to extract valuable insights from vast amounts of security information. By performing real-time big data analysis, the center detects unknown threats, malware, and emerging attack campaigns, thereby enhancing the overall safety of cyberspace. In addition, CREATE addresses the growing challenge of cybersecurity workforce shortages by pursuing the automation and autonomy of defensive operations.

Specifically, AI is used to automatically triage and prioritize massive volumes of security alerts, reducing the operational burden on analysts and enabling faster, more accurate incident response. The center also utilizes darknet observation data to detect malware infection activities at an early stage and to collect and analyze global threat intelligence. Through these efforts, CREATE aims to build a next-generation AI-driven defensive infrastructure that supports secure and resilient digital operations.

Automated alert triage using AI for efficient security operations.
Early detection of malware infection activities using darknet observation data.
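As an illustration of the alert-triage idea above, the following minimal sketch ranks incoming alerts by a model's estimated probability that an analyst would escalate them. The per-alert features, toy labels, and choice of classifier are assumptions made for the example, not the center's production pipeline.

"""Minimal sketch of AI-assisted alert triage: rank incoming alerts by a model's
estimated probability of escalation. Features, labels, and model choice are
illustrative assumptions only."""
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-alert features: [severity, asset_criticality, novelty_score, hit_count]
historical_alerts = np.array([
    [0.9, 0.8, 0.7, 3],
    [0.2, 0.1, 0.1, 50],
    [0.7, 0.9, 0.8, 1],
    [0.3, 0.2, 0.2, 40],
    [0.8, 0.6, 0.9, 2],
    [0.1, 0.3, 0.1, 60],
])
# 1 = analyst escalated, 0 = closed as benign/duplicate (toy labels).
dispositions = np.array([1, 0, 1, 0, 1, 0])

model = GradientBoostingClassifier().fit(historical_alerts, dispositions)

def triage(alerts: np.ndarray) -> list[int]:
    """Return alert indices sorted from most to least likely to need escalation."""
    scores = model.predict_proba(alerts)[:, 1]
    return [int(i) for i in np.argsort(-scores)]

if __name__ == "__main__":
    incoming = np.array([
        [0.4, 0.9, 0.6, 5],   # moderately severe alert on a critical asset
        [0.1, 0.1, 0.0, 80],  # low-severity, very noisy signature
    ])
    print("review order:", triage(incoming))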

AI Trust and Transparency

Achieving Explainable and Transparent AI

Ensuring that humans can understand and trust AI decisions is another essential research goal. CREATE integrates Explainable AI (XAI) techniques into AI-based intrusion detection systems to make the reasoning behind AI decisions visible and interpretable. By analyzing feature contributions and visualizing the basis of detection results, the center enhances the transparency and accountability of AI systems, paving the way toward a society where AI can be used with confidence and understanding.

Integration of XAI into AI-based intrusion detection systems to visualize decision rationale, enabling experts to interpret and verify AI’s reasoning.
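The sketch below illustrates the general idea of surfacing a detection model's decision rationale. It uses a simple linear classifier so that per-feature contributions can be read off directly; the features and data are synthetic assumptions standing in for the richer XAI techniques described above.

"""Minimal sketch of making an intrusion-detection decision interpretable.
A linear model is used so that per-feature contributions (coefficient x value)
can be read off directly; features and data are synthetic assumptions."""
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["bytes_out", "failed_logins", "new_dest_ports", "off_hours"]

# Toy training data: rows are connection summaries, 1 = flagged as intrusion.
X = np.array([
    [0.9, 0.8, 0.7, 1.0],
    [0.1, 0.0, 0.1, 0.0],
    [0.8, 0.9, 0.6, 1.0],
    [0.2, 0.1, 0.0, 0.0],
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def explain(sample: np.ndarray) -> None:
    """Print each feature's signed contribution to the detection score."""
    contributions = model.coef_[0] * sample
    verdict = "intrusion" if model.predict(sample.reshape(1, -1))[0] else "benign"
    print(f"verdict: {verdict}")
    for name, value in sorted(zip(FEATURES, contributions), key=lambda kv: -abs(kv[1])):
        print(f"  {name:>15}: {value:+.3f}")

if __name__ == "__main__":
    explain(np.array([0.7, 0.9, 0.2, 1.0]))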
※The names of other companies, products, or services are trademarks or registered trademarks of their respective companies.