%0 Journal Article
%T 梯度学习的参数控制帮助线程预取模型 (Parameter-controlled helper thread prefetching model based on gradient learning)
%A 裴颂文
%A 张俊格
%A 宁静
%A PEI Songwen
%A ZHANG Junge
%A NING Jing
%J 国防科技大学学报 (Journal of National University of Defense Technology)
%D 2016
%R 10.11887/j.cn.201605010
%X For applications with irregular memory access, when an application's memory-access overhead exceeds its computation overhead, the memory-access cost of a traditional helper thread becomes higher than the computation cost of the main thread, so the helper thread lags behind the main thread. An improved helper thread prefetching model based on control parameters is therefore proposed: the gradient descent algorithm, a widely used machine learning optimization method, is adopted to solve for the optimal values of the control parameters, which effectively regulate the amount of memory-access work assigned to the helper thread and the main thread so that the helper thread runs ahead of the main thread. Experimental results show that the parameter-controlled thread prefetching model achieves a system performance speedup of 1.1 to 1.5 times.
%K data prefetching; helper thread; multi-core system; memory access latency; gradient descent
%U http://journal.nudt.edu.cn/gfkjdxxb/ch/reader/view_abstract.aspx?file_no=201605010&flag=1
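The abstract only names the optimization method; as a reading aid, the following is a minimal sketch of gradient descent applied to a single control parameter. The cost function lag_cost, the parameter k (read as the fraction of memory-access work assigned to the helper thread), the learning rate, and the step count are all hypothetical assumptions for illustration and are not taken from the paper; a numerical central-difference gradient is used so no analytic form of the lag measurement needs to be assumed.

```python
# A minimal, illustrative sketch of the parameter-tuning idea described in the
# abstract: gradient descent over a scalar control parameter. The cost function
# lag_cost(), the parameter name k, and all constants below are hypothetical
# placeholders, not taken from the paper.

def lag_cost(k: float) -> float:
    """Hypothetical cost: how far the helper thread trails the main thread.

    Modeled here as a simple convex function with a minimum at k = 0.6,
    standing in for a real measurement of helper-thread lag.
    """
    return (k - 0.6) ** 2


def numerical_gradient(f, x: float, eps: float = 1e-6) -> float:
    """Central-difference estimate of df/dx."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)


def tune_control_parameter(k: float = 0.5, lr: float = 0.1, steps: int = 100) -> float:
    """Gradient descent: repeatedly move k against the gradient of the cost."""
    for _ in range(steps):
        grad = numerical_gradient(lag_cost, k)
        k -= lr * grad
        # Keep the parameter in a meaningful range (a fraction of the work).
        k = min(max(k, 0.0), 1.0)
    return k


if __name__ == "__main__":
    best_k = tune_control_parameter()
    print(f"tuned control parameter: {best_k:.3f}, cost: {lag_cost(best_k):.6f}")
```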