Distributed Escort Control of Multi-agent System with Intermittent Inter-agent Communication
Proceedings of 2021 5th Chinese Conference on Swarm Intelligence and Cooperative Control (2022)
Huazhong University of Science and Technology | Southeast University
- Pretraining has recently greatly advanced the development of natural language processing (NLP)
- We show that M6 outperforms the baselines in multimodal downstream tasks, and that the large M6 with 10 billion parameters achieves even better performance
- We propose a method called M6 that is able to process information of multiple modalities and perform both single-modal and cross-modal understanding and generation
- The model is scaled up to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese
- Experimental results show that our proposed M6 outperforms the baselines in a number of downstream tasks concerning both single modality and multiple modalities. We will continue the pretraining of extremely large models by increasing data to explore the limit of their performance
A General Alignment Repulsion Algorithm for Flocking of Multi-Agent Systems
Cited by 172
Distributed Average Tracking of Multiple Time-Varying Reference Signals with Bounded Derivatives
Cited by 300
On Consensus Algorithms for Double-Integrator Dynamics
Cited by 1677
Robust Dynamic Average Consensus of Time-Varying Inputs
Cited by 138
Distributed Average Tracking for Reference Signals with Bounded Accelerations
Cited by 102
Dynamic Triggering Mechanisms for Event-Triggered Control
Cited by 1229
Stability and Convergence Properties of Dynamic Average Consensus Estimators
Cited by 358
Collaborative Control of Multivehicle Systems in Diverse Motion Patterns
Cited by 26
Dynamic Event-Triggered and Self-Triggered Control for Multi-agent Systems
Cited by 317