%0 Journal Article
%T Dynamic Holographic Acoustic Field Generation Method Based on Self-Attention Mechanism
%A 杨柳
%A 游福成
%J Software Engineering and Applications
%P 302-311
%@ 2325-2278
%D 2024
%I Hans Publishing
%R 10.12677/sea.2024.133030
%X Acoustic field control is critical in applications as diverse as loudspeaker design, ultrasonic imaging, and acoustic particle manipulation. The need for precise manipulation of objects at micron and nanometer scales has driven the development of contactless manipulation methods. However, few studies address the inverse problem of generating a given holographic acoustic field. In this paper, we propose a method based on a self-attention Transformer model (VS3D-Transformer) in the context of phased array technology (PAT) to achieve fast and accurate holographic acoustic field generation. Our method addresses the shortcomings of traditional CNNs, which consider only local receptive fields and suffer from low training accuracy, and it also reduces the iterative complexity of traditional physical methods. To simulate acoustic field generation, we adopt a piston-model-based simulation method to produce the holographic acoustic field. In the simulation study, our model demonstrates faster training and higher accuracy than both the traditional IB iterative algorithm and the deep-learning-based Acousnet algorithm. The results of our proposed model under various conditions (i.e., acoustic field phase optimization accuracy, loss rate, and training speed) indicate that it can serve as a highly effective alternative.
%K Acoustic Holography
%K Phased Array Transducer
%K Deep Learning
%K Self-Attention Mechanism
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=89953
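The abstract references a piston-model simulation for generating PAT holographic acoustic fields. As context, the following is a minimal sketch of the standard far-field piston forward model for a phased transducer array, where each circular piston source contributes a directivity-weighted, phase-delayed spherical wave. All parameter values (40 kHz frequency, 4.5 mm piston radius, 4×4 array layout, focus position) and the function name `piston_pressure` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function for piston directivity

# Illustrative constants; NOT values from the paper.
C_AIR = 343.0                  # speed of sound in air [m/s]
FREQ = 40e3                    # common ultrasonic PAT transducer frequency [Hz]
K = 2 * np.pi * FREQ / C_AIR   # wavenumber [rad/m]
A = 4.5e-3                     # piston (transducer) radius [m]
P0 = 1.0                       # reference amplitude (arbitrary units)

def piston_pressure(points, positions, phases):
    """Complex pressure at `points` (N,3) radiated by piston transducers at
    `positions` (M,3), all facing +z, each driven with phase offset `phases` (M,)."""
    diff = points[:, None, :] - positions[None, :, :]        # (N, M, 3)
    d = np.linalg.norm(diff, axis=-1)                        # propagation distances
    sin_theta = np.linalg.norm(diff[..., :2], axis=-1) / d   # angle off the +z normal
    x = K * A * sin_theta
    x_safe = np.where(x > 1e-9, x, 1.0)
    # far-field piston directivity 2*J1(x)/x, with its on-axis limit of 1
    directivity = np.where(x > 1e-9, 2.0 * j1(x_safe) / x_safe, 1.0)
    return np.sum(P0 * directivity / d * np.exp(1j * (K * d + phases)), axis=1)

# 4x4 planar array in the z=0 plane, focused by conjugate phasing on one point:
# setting each phase to -k*d makes every contribution arrive in phase at the focus.
xs = np.linspace(-0.015, 0.015, 4)
gx, gy = np.meshgrid(xs, xs)
transducers = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(16)])
focus = np.array([[0.0, 0.0, 0.05]])
phases = -K * np.linalg.norm(focus - transducers, axis=1)
p_focus = piston_pressure(focus, transducers, phases)
print(f"|p| at focus: {abs(p_focus[0]):.1f}")
```

This forward model maps transducer phases to a field; the paper's Transformer presumably learns the inverse mapping (target holographic field to phases), for which a simulator of this kind would supply training data.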