Research Topics
Control Theory/Control Engineering
I specialize in control theory and control engineering. Control theory refers to the mathematical methodologies and theoretical frameworks developed to ensure that a "system" operates correctly and remains stable, while control engineering is the applied technical discipline built upon these theoretical foundations. But why does a field of study exist whose central goal is expressed in such seemingly vague terms as "making systems operate correctly and stably"? The answer lies in the fact that there exists a universal mechanism called "feedback," an essential concept for every artificial system, and control theory is precisely the discipline that systematically investigates the principles of this mechanism. The term "feedback" is often used in business contexts to refer to communication aimed at evaluation and improvement, but in its original sense it denotes a mechanism whereby a system's output is observed and the input is adjusted in response. Real-world systems, such as autonomous vehicles, manufacturing plants, and power networks, are constantly subjected to disturbances and environmental changes. To maintain stable operation, these systems must adjust their behavior appropriately according to the situation. The most effective, and in practice almost the only, means of adapting to such uncertainties is the feedback mechanism. Feedback is a universal principle independent of the specific system, and its properties can be rigorously analyzed using mathematics, the common foundation for quantitative reasoning. This mathematical exploration forms the core of control theory and control engineering, which is why the field is often celebrated as "the most beautiful engineering."
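To make the idea of feedback concrete, here is a minimal sketch (a toy illustration; the plant model, gain, and disturbance values are hypothetical) in which a simple system drifts under a constant disturbance while a proportional controller repeatedly observes the output and adjusts the input to keep it near a setpoint:

    # Minimal illustration of feedback: observe the output, adjust the input.
    # The plant model, gain, and disturbance below are hypothetical.
    setpoint = 1.0      # desired output
    x = 0.0             # plant state (also the measured output here)
    a, b = 0.9, 0.5     # simple first-order plant: x_next = a*x + b*u + d
    d = 0.2             # constant disturbance acting on the plant
    Kp = 3.0            # proportional feedback gain

    for _ in range(30):
        error = setpoint - x     # compare the measurement with the setpoint
        u = Kp * error           # feedback law: the input reacts to the error
        x = a * x + b * u + d    # the plant responds to the input and the disturbance

    print(f"output after feedback: {x:.3f} (setpoint {setpoint})")

Even though the disturbance acts the whole time, the output settles close to the setpoint; a small residual offset remains, which integral action (as in the PID controllers discussed later) removes.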

Control Systems
Control theory plays a foundational role across all areas of society and industry. Here, we present three representative systems.
1. Autonomous Vehicles
Autonomous vehicles exemplify modern control theory in practice. At their core lies a feedback mechanism, through which the vehicle continuously observes and estimates both its own state and the surrounding environment using diverse sensors such as cameras, LiDAR, GPS, and IMUs. In recent years, machine learning has been integrated into the perception layer, enhancing the accuracy of object recognition and trajectory prediction. Meanwhile, the control layer relies on feedback control to optimize steering angles, acceleration, and braking in real time, based on environmental information from the perception layer and the target trajectory. The close interplay between perception through learning and stabilization via control theory ensures safe and smooth driving even amid disturbances and uncertainties. Such hierarchical feedback structures form the foundation of next-generation intelligent systems and represent the forefront of the evolution and integration of control theory.
2. Manufacturing Plants
Manufacturing plants are among the earliest practical applications of control theory. To maintain physical quantities such as temperature, pressure, flow, and position within desired ranges, each process incorporates feedback mechanisms that adjust actuator outputs based on sensor measurements. This closed-loop structure ensures that the production line remains stable and maintains quality despite external disturbances or gradual changes in equipment. These control systems are built on classical methods such as PID and sequence control and are now standardized and packaged in Programmable Logic Controllers (PLCs). PLCs integrate communication and safety mechanisms among industrial devices, serving as the central control infrastructure for factory automation. The application of control theory in manufacturing demonstrates not only engineering utility but also the long-standing role of feedback in ensuring industrial reliability, efficiency, and safety. (A minimal PID loop is sketched after this overview.)
3. Power Grids
Power grids are among the largest and most systematically controlled systems. Generators, transmission lines, substations, and loads are widely distributed yet interconnected, with operation relying on a vast number of control loops. Each generator employs feedback control to maintain constant rotational speed and output voltage, supporting the stability of system-wide frequency and voltage. Higher-level controls, such as power flow management and system protection, are based on multilayered feedback structures that combine extensive measurement data with model-based predictions. Recently, the integration of large-scale renewable energy and fluctuating demand has made power networks more dynamic and uncertain, but distributed cooperative control and optimization techniques grounded in control theory continue to ensure stable operation. These mechanisms illustrate a mature example of feedback in critical social infrastructure.
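As a concrete illustration of the PID loops mentioned above, here is a minimal discrete-time sketch (the thermal plant model, gains, and limits are hypothetical and chosen only for illustration; industrial PID implementations in PLCs are far more elaborate). A first-order thermal process is driven toward a temperature setpoint; the integral term removes the steady-state offset that proportional action alone would leave, and a simple anti-windup rule stops the integrator while the heater is saturated:

    # Minimal discrete-time PID temperature loop (hypothetical plant and gains).
    dt = 1.0                        # sampling period [s]
    temp, ambient = 20.0, 20.0      # process and ambient temperature [deg C]
    setpoint = 80.0                 # desired temperature [deg C]
    Kp, Ki, Kd = 4.0, 0.6, 1.0      # PID gains
    integral, prev_error = 0.0, setpoint - temp

    for _ in range(300):
        error = setpoint - temp                    # compare setpoint with measurement
        derivative = (error - prev_error) / dt     # derivative action damps fast changes
        u = Kp * error + Ki * integral + Kd * derivative
        heater = max(0.0, min(u, 100.0))           # actuator saturation (0-100 % power)
        if u == heater:                            # simple anti-windup:
            integral += error * dt                 # integrate only while unsaturated
        prev_error = error
        # first-order thermal process: heats with input power, cools toward ambient
        temp += dt * (-0.05 * (temp - ambient) + 0.05 * heater)

    print(f"temperature after {int(300 * dt)} s: {temp:.1f} deg C (setpoint {setpoint})")

In a plant, loops of this kind run continuously for each controlled variable, with the PLC additionally handling sequencing, interlocks, and communication.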
Beneath diverse systems such as autonomous vehicles, manufacturing plants, and power networks lies the fundamental principle of control based on feedback. Control serves as a universal foundational technology that ensures stability and efficiency in the physical world, functioning as an overarching body of knowledge across industry and social infrastructure. With recent advances in AI and machine learning, control mechanisms have become increasingly adaptive and predictive, capable of autonomously responding to complex and uncertain environments. Control theory continues to evolve as a new interface connecting intelligence and the physical world, shaping the foundation of next-generation society.
Research Directions
Control theory possesses both cross-sectional practicality as an engineering discipline and a systematic theoretical framework. In light of this duality, I have established three pillars of research: the theoretical foundation of "Data-Driven Learning Control" and the engineering developments of "Security for Control" and "Control for Security". These are not merely independent domains; rather, the applied perspective brings new developments to theory, and theoretical insights open new directions for application, so the three pillars stand in a mutually reinforcing, evolving relationship.

Data-Driven Learning Control
Recent advancements in artificial intelligence and machine learning have led to the emergence of "Data-Driven Learning Control," which focuses on leveraging vast amounts of data obtained from high-precision sensors to optimize the behavior of control targets in a learning-based manner. By updating control strategies in response to environmental changes and disturbances, more flexible and adaptive control becomes possible. Advances in AI technologies have enabled real-time data analysis and decision-making, allowing control theory to evolve as the foundation for more advanced and intelligent system design. One of the fundamental challenges in machine learning is the complexity of architectures such as deep models, which makes it extremely difficult to guarantee the stability and performance of the resulting systems. My research utilizes the insights of control theory, which has a deep theoretical foundation, to develop control and learning methods that can guarantee various performance metrics even when observational data contain unpredictable noise and disturbances. Such control system design problems are naturally formulated as optimization problems with infinitely many constraints, which cannot be solved by standard solvers. Our proposed method derives an equivalence transformation from infinitely many constraints to a single constraint by exploiting the inclusion relationships between matrix quadratic inequality constraints, providing a practical computational approach (a schematic version of this reduction is sketched below). Additionally, we are expanding this theoretical framework toward further reliability guarantees, such as stability of the learning process itself.

Published Papers
T. Kaminaga and H. Sasahara, "Data informativity under data perturbation," arXiv, 2025.
T. Kaminaga and H. Sasahara, "Data informativity for quadratic stabilization under data perturbation," American Control Conference, 2025.

Security for Control
As noted above, control mechanisms are a foundational technology supporting the operation of every system, and ensuring their security is an extremely important challenge. This is a matter of national significance, and real-world risks have already emerged: for example, in 2025, suspicious communication functions were detected in a foreign-made solar power control system, highlighting a security threat that cannot be ignored. At the research level, numerous attacks have been demonstrated. For instance, in autonomous vehicles, GPS spoofing or manipulation of traffic-sign data at the perception layer can mislead the AI, potentially causing severe accidents. Similarly, modern PLCs in manufacturing plants are standardized and networked, meaning that cyberattacks can have extensive and severe consequences. In this way, the security of control systems is not merely a technical issue but an urgent problem directly affecting the safety of social infrastructure and the public.
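The reduction from infinitely many constraints mentioned under Data-Driven Learning Control above can be sketched as follows. This is a schematic illustration only: M and N stand for generic symmetric matrices built from the design variables and the data, Z ranges over the systems consistent with the (perturbed) data, and the full equivalence used in the actual results additionally requires regularity conditions of the matrix S-lemma type. The easy direction is that a single multiplier condition implies the whole infinite family of constraints:

\[
\exists\, \alpha \ge 0 :\; M - \alpha N \succeq 0
\quad\Longrightarrow\quad
\begin{bmatrix} I \\ Z \end{bmatrix}^{\!\top} M \begin{bmatrix} I \\ Z \end{bmatrix} \succeq 0
\ \text{ for all } Z \text{ such that }
\begin{bmatrix} I \\ Z \end{bmatrix}^{\!\top} N \begin{bmatrix} I \\ Z \end{bmatrix} \succeq 0 ,
\]

because

\[
\begin{bmatrix} I \\ Z \end{bmatrix}^{\!\top} M \begin{bmatrix} I \\ Z \end{bmatrix}
= \begin{bmatrix} I \\ Z \end{bmatrix}^{\!\top} (M - \alpha N) \begin{bmatrix} I \\ Z \end{bmatrix}
+ \alpha \begin{bmatrix} I \\ Z \end{bmatrix}^{\!\top} N \begin{bmatrix} I \\ Z \end{bmatrix} \succeq 0 .
\]

Under suitable regularity conditions the converse also holds, which is what allows the infinitely many data-consistency constraints to be replaced by a single matrix inequality that standard solvers can handle.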
My research was the first to investigate a new threat at the control layer, adversarial data perturbation attacks, and has clarified their serious impact. While adversarial attacks on AI models in the perception layer are well known, the control layer has traditionally been considered safe because it is operated by relatively simple feedback controllers with few parameters. However, my research has demonstrated that, by carefully designing attacks using insights from control theory, even minimal perturbations of the data can destabilize system behavior. This vulnerability is not limited to specific systems; it has been confirmed across core control elements, including motor positioning, suspension control, rocket attitude control, aircraft pitch control, and liquid tank flow control. Furthermore, the same vulnerability has been observed in advanced autonomous driving functions such as adaptive cruise control, indicating that it represents a universal threat to classical and state-of-the-art systems alike. In parallel with analyzing these vulnerabilities, this research develops defense technologies that leverage the framework of data-driven learning control. (A toy illustration of how such perturbations are evaluated is given after the paper list below.)

Published Papers
H. Sasahara, "Adversarial destabilization attacks to direct data-driven control," arXiv, 2025.
T. Kaminaga and H. Sasahara, "Adversarial attack using projected gradient method to data-driven control," IEEE Conference on Control Technology and Applications, 2024.
H. Sasahara, "Adversarial attacks to direct data-driven control for destabilization," IEEE Conference on Decision and Control, 2023.
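To make the setting concrete, the following toy sketch (hypothetical plant, data sizes, and numbers; this is not the method of the papers above) shows the basic evaluation pipeline: a controller is designed from experimental data, the recorded data are then slightly perturbed, and the spectral radius of the closed loop formed with the true plant is compared before and after. In the published work the perturbation is not random but carefully optimized, for example by projected gradient methods, which is what allows very small perturbations to push the spectral radius beyond one and destabilize the loop.

    # Toy pipeline (illustrative only): effect of a data perturbation on a
    # controller designed from data. Plant, horizons, and the random perturbation
    # are hypothetical; the published attacks optimize the perturbation instead.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    rng = np.random.default_rng(0)
    A = np.array([[1.1, 0.2], [0.0, 0.9]])   # true plant (unknown to the designer)
    B = np.array([[0.0], [1.0]])

    # Collect input/state data from a simulated experiment.
    T = 30
    U = rng.normal(size=(1, T))
    X = np.zeros((2, T + 1))
    X[:, 0] = rng.normal(size=2)
    for k in range(T):
        X[:, k + 1] = A @ X[:, k] + (B @ U[:, [k]]).ravel()

    def lqr_from_data(X0, X1, U0):
        """Certainty-equivalence design: least-squares model fit, then discrete LQR."""
        AB = X1 @ np.linalg.pinv(np.vstack([X0, U0]))   # [A_hat  B_hat]
        A_hat, B_hat = AB[:, :2], AB[:, 2:]
        Q, R = np.eye(2), np.eye(1)
        P = solve_discrete_are(A_hat, B_hat, Q, R)
        return np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

    def closed_loop_radius(K):
        return max(abs(np.linalg.eigvals(A - B @ K)))   # evaluated on the true plant

    K_nominal = lqr_from_data(X[:, :-1], X[:, 1:], U)

    # Stand-in for the adversary: perturb the recorded states slightly and redesign.
    X_pert = X + 0.05 * rng.normal(size=X.shape)
    K_attacked = lqr_from_data(X_pert[:, :-1], X_pert[:, 1:], U)

    print("spectral radius, nominal design   :", closed_loop_radius(K_nominal))
    print("spectral radius, perturbed design :", closed_loop_radius(K_attacked))
    # A value above 1 would mean the controller fails to stabilize the true plant.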
Control for Security
In addition to the security of control systems, I am also conducting research on the automation of cyber defense technologies utilizing control theory. In traditional network security, automation has been achieved only in certain areas such as intrusion detection, while many processes (such as remediation of compromised hosts, root cause analysis, and endpoint policy updates) still rely on manual intervention, creating bottlenecks in terms of accuracy and cost. To address these challenges, Autonomous Cyber Defense (ACD) systems, capable of advanced decision-making without requiring human approval at every step, are gaining attention in the field of cybersecurity. My research focuses on leveraging control theory in the design of ACD systems. ACD systems are expected to be built using reinforcement learning, which is closely related to data-driven learning control. For example, by applying the concept of controllability, a notion studied in depth in control theory, the achievable performance of an ACD system can be analyzed. It is also feasible to analyze the performance of defense mechanisms using Bayesian inference, a standard tool in control and estimation. In parallel, I am developing efficient reinforcement learning algorithms grounded in control theory. (A toy illustration of a Bayesian defense loop is given after the paper list below.)

Published Papers
L. Burbano, H. Sasahara, and A. Cardenas, "Steerability of autonomous cyber-defense agents by meta-attackers," IEEE Conference on Artificial Intelligence, 2025.
H. Sasahara and H. Sandberg, "Asymptotic security using Bayesian defense mechanism with application to cyber deception," IEEE Transactions on Automatic Control, 2024.
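The following minimal sketch (not the mechanism of the papers above; all probabilities and the threshold are hypothetical) illustrates the Bayesian viewpoint: the defender maintains a belief that an attacker is present, updates it from noisy alerts, and triggers a defensive response once the belief becomes sufficiently high.

    # Minimal Bayesian defense loop (hypothetical numbers, for illustration only):
    # update the belief that an attacker is present from noisy alerts and respond
    # once the belief crosses a threshold.
    import random

    random.seed(1)
    p_alert_attack = 0.6      # alert probability per step if an attacker is present
    p_alert_normal = 0.1      # false-alarm probability per step
    belief = 0.05             # prior probability that an attacker is present
    threshold = 0.95          # defend once the belief exceeds this value
    attacker_present = True   # ground truth, used only to simulate the alerts

    for step in range(50):
        alert = random.random() < (p_alert_attack if attacker_present else p_alert_normal)
        # Bayes update of the belief given this step's observation.
        l_attack = p_alert_attack if alert else 1 - p_alert_attack
        l_normal = p_alert_normal if alert else 1 - p_alert_normal
        belief = belief * l_attack / (belief * l_attack + (1 - belief) * l_normal)
        if belief > threshold:
            print(f"step {step}: belief {belief:.3f} -> trigger defensive response")
            break
    else:
        print(f"no response triggered; final belief {belief:.3f}")

In an actual ACD setting the observation model, the set of defensive actions, and the decision rule are all far richer, and reinforcement learning is used to optimize the policy; the belief update above is only the elementary building block.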