Given the widening speed gap between storage access and processor computation, end-to-end
data processing has become a bottleneck that limits the overall performance
of networked computer systems. Based on an analysis of data-processing
behavior, an adaptive cache organization scheme with fast address calculation
is proposed. The scheme exploits the characteristics of stack-space data
access, adopting a fast address-calculation strategy to reduce the hit time of
stack accesses. When a stack overflow occurs, the stack cache can be turned
off adaptively, avoiding the performance penalty that stack switching would
otherwise impose on the processor. In addition, a prefetching policy is
developed from the miss behavior of the instruction and data caches, combined
with data captured from the miss-queue state. Finally, the proposed method
preserves the order of instruction and data accesses, which facilitates
prefetch extraction in end-to-end data processing.
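As a rough illustration of the fast address-calculation idea, consider a minimal sketch (assumed parameters and structure, not the paper's implementation): because stack accesses are stack-pointer-relative, a direct-mapped stack cache can derive its set index straight from the effective address of a local-variable access, and the cache can be disabled outright when the stack overflows.

```python
# Illustrative sketch only: a direct-mapped "stack cache" whose index is
# computed directly from the stack-pointer-relative address, overlapping
# index selection with address generation. All sizes are assumptions.

LINE_SIZE = 16   # bytes per cache line (assumed)
NUM_LINES = 64   # lines in the stack cache (assumed)

class StackCache:
    def __init__(self):
        self.tags = [None] * NUM_LINES
        self.enabled = True   # turned off adaptively on stack overflow
        self.hits = 0
        self.misses = 0

    def access(self, sp, offset):
        """Access the word at sp + offset; return True on a hit."""
        if not self.enabled:
            # Cache disabled (e.g. after stack overflow): treat as miss.
            self.misses += 1
            return False
        addr = sp + offset
        # Fast index: a simple shift/mask of the SP-relative address,
        # available without a full tag-path lookup.
        index = (addr // LINE_SIZE) % NUM_LINES
        tag = addr // (LINE_SIZE * NUM_LINES)
        if self.tags[index] == tag:
            self.hits += 1
            return True
        self.tags[index] = tag  # fill the line on a miss
        self.misses += 1
        return False

cache = StackCache()
sp = 0x7FFF_F000
for off in (0, 4, 8, 4, 0):   # repeated local-variable accesses
    cache.access(sp, off)
print(cache.hits, cache.misses)  # prints: 4 1
```

The first access misses and fills the line; the remaining accesses fall in the same line and hit, which is the behavior the fast address-calculation strategy is meant to make cheap.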