

Dynamic Holographic Acoustic Field Generation Method Based on Self-Attention Mechanism

DOI: 10.12677/sea.2024.133030, PP. 302-311

Keywords: Acoustic Holography, Phased Array Transducer, Deep Learning, Self-Attention Mechanism


Abstract:

Acoustic field control is critical in applications as diverse as loudspeaker design, ultrasonic imaging, and acoustic particle manipulation. The need to manipulate micron- and nanoscale objects precisely has driven the development of contactless manipulation methods, yet few studies address the inverse problem of generating a given holographic acoustic field. In this paper, we propose a method based on a self-attention Transformer model (VS3D-Transformer), in the context of phased array technology (PAT), to achieve fast and accurate holographic acoustic field generation. Our method overcomes the shortcomings of traditional CNNs, which consider only local receptive fields and achieve low training accuracy, and it avoids the iterative complexity of traditional physical methods. To simulate acoustic field generation, we adopt a piston-model-based simulation to produce the holographic acoustic field. In simulation studies, our model trains faster and reaches higher accuracy than both the traditional IB iterative algorithm and the deep-learning Acousnet algorithm. Results under various criteria (i.e., phase optimization accuracy, loss, and training speed) indicate that our model can serve as a highly effective alternative.
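The piston-model simulation mentioned above can be illustrated with a minimal far-field sketch: each transducer contributes a spherical wave whose phase at a field point depends on the travel distance and the drive phase. This is a hedged sketch under our own assumptions (reference pressure `p_ref`, a 40 kHz-scale wavelength, and omission of the piston directivity term); the paper's simulation may include additional factors.

```python
import numpy as np

def pressure_field(transducer_xyz, phases, field_xyz,
                   wavelength=0.00865, p_ref=1.0):
    """Complex acoustic pressure at each field point.

    Each transducer contributes a spherical wave
        p = (p_ref / d) * exp(i * (k*d + phase)),
    where d is the transducer-to-point distance and k = 2*pi/lambda.
    """
    k = 2 * np.pi / wavelength                      # wavenumber
    # Pairwise distances: shape (n_points, n_transducers)
    d = np.linalg.norm(field_xyz[:, None, :] - transducer_xyz[None, :, :],
                       axis=-1)
    contrib = (p_ref / d) * np.exp(1j * (k * d + phases[None, :]))
    return contrib.sum(axis=1)                      # superposition

# Example: a 4x4 array focused on a point by conjugating the travel phase.
xs = np.linspace(-0.015, 0.015, 4)
tx = np.array([(x, y, 0.0) for x in xs for y in xs])
focus = np.array([[0.0, 0.0, 0.05]])
k = 2 * np.pi / 0.00865
d_focus = np.linalg.norm(focus - tx, axis=-1)
phases = (-k * d_focus) % (2 * np.pi)               # focusing phases
p_focus = pressure_field(tx, phases, focus)         # coherent (focused) sum
p_zero = pressure_field(tx, np.zeros(len(tx)), focus)  # unfocused baseline
```

Because the focusing phases cancel the travel phases, all contributions add coherently at the focus, so `abs(p_focus)` exceeds the unfocused `abs(p_zero)`.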
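The contrast drawn above with CNNs' local receptive fields rests on self-attention, where every element attends to every other element in a single step. The following is a minimal single-head scaled dot-product sketch; the dimensions, token interpretation, and single-head form are illustrative assumptions, not the VS3D-Transformer's actual architecture.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token features; returns (seq_len, d_v)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                # all-pairs similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ v                             # weighted mix of values

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))       # e.g. 8 voxel/element tokens
w_q = rng.standard_normal((16, 16)) * 0.1
w_k = rng.standard_normal((16, 16)) * 0.1
w_v = rng.standard_normal((16, 16)) * 0.1
out = self_attention(x, w_q, w_k, w_v)  # global context in one step
```

Each output row is a convex combination of all value rows, which is why a single attention layer captures global structure that a convolution would need many stacked layers to reach.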
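For contrast with the learned approach, the iterative baselines referred to above (the IB algorithm, and Gerchberg-Saxton-style methods generally) alternate between forward propagation and amplitude constraints. This is a generic alternating-projection sketch under our own assumptions; the forward operator `A`, starting phases, and iteration count are illustrative, not the paper's exact IB algorithm.

```python
import numpy as np

def iterative_phases(A, target_amp, n_iter=100):
    """Find transducer phases so |A @ exp(i*phi)| approximates target_amp.

    A: (n_points, n_tx) complex forward propagation matrix.
    target_amp: (n_points,) desired field amplitudes.
    """
    phi = np.zeros(A.shape[1])
    for _ in range(n_iter):
        field = A @ np.exp(1j * phi)                       # forward propagate
        field = target_amp * np.exp(1j * np.angle(field))  # impose target amplitude
        back = A.conj().T @ field                          # back propagate
        phi = np.angle(back)                               # impose unit drive
    return phi

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 16)) + 1j * rng.standard_normal((32, 16))
target = np.abs(rng.standard_normal(32))
phi = iterative_phases(A, target)
err = np.linalg.norm(np.abs(A @ np.exp(1j * phi)) - target)
```

The per-query cost of running such a loop at every new target field is the "iterative complexity" that a one-shot forward pass through a trained network avoids.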

